In the strictest sense, First Principles are the laws of nature: "ab initio". A conclusion drawn from them rests on no assumptions and no merely empirical knowledge. We could translate "First Principle" like this: something that cannot be any different, or cannot be made any different, because we know (or are as sure as we can be) that it is true.
If I know that someone or something can change the software of a supposedly secure system, then that system cannot be called safe or secure. A system with modifiable software cannot be called trustworthy or completely reliable; there is no exception to this. And providing safety requires a trustworthy system, because we must be able to rely on it precisely when the normal security barriers have been breached.
But here is the problem: all software can be faulty, and once we know about a fault, we must have it updated as soon as possible. That means non-modifiable software is definitely not completely reliable either. From this contradiction, most people conclude that trustworthy computing is impossible to achieve. I don't want to accept that.
I believe that this problem, this contradiction, can be overcome pragmatically, but the currently dominant computer architecture, the von-Neumann Architecture, makes it more difficult. Some proposals suggest that making a trustworthy system tamperproof or tamper-evident is sufficient: if the system is modified (covertly or illegitimately), it either stops working or at least leaves evidence that it was modified. That is the best we can do so far, but this approach is not "Safety based on First Principles".
There is a security technology called "Runtime Application Self-Protection" (RASP) that uses internal tools or subroutines at runtime to detect and possibly block ongoing attacks. But the problem is that all measures enabling RASP can be removed by a sufficiently smart attacker (such as an ASI, an Artificial Superintelligence), and the software, stripped of its RASP protection, can then be modified in every conceivable way. Or the ASI could simply rewrite the basic algorithms and continue its attack without the protected software. "Reverse Code Engineering" (RCE) offers many well-known tools for determining what value a critical comparison or subroutine expects, and that value is then supplied. RCE is not too complicated for an ASI to master.
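To make the weakness concrete, here is a minimal Python sketch of a RASP-style self-check; the function names and the reference value are mine, purely for illustration. An attacker who can modify the binary, or who has determined via RCE which value the comparison expects, simply removes the branch or supplies the expected value.

```python
import hashlib

# Hypothetical, simplified RASP-style self-check; names and the reference
# digest are illustrative only, not taken from any real RASP product.
REFERENCE_DIGEST = hashlib.sha256(b"known-good build image").hexdigest()

def integrity_check(code_bytes: bytes) -> bool:
    # Compare the code currently in memory against the known-good hash.
    return hashlib.sha256(code_bytes).hexdigest() == REFERENCE_DIGEST

def protected_routine(code_bytes: bytes) -> None:
    # An attacker who controls the binary can remove this branch, or patch
    # REFERENCE_DIGEST to match the tampered code; that is exactly the
    # weakness described above.
    if not integrity_check(code_bytes):
        raise SystemExit("tampering detected")
    ...  # the "protected" logic would continue here
```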
The root of the trustworthiness problem lies in the dominant architecture of all computer systems, the von-Neumann Architecture: there is no difference between instructions and passive data while they are transported over the bus or stored in RAM. Software can be supplied from any source: normally from a read/write drive (HDD/SSD), from a removable storage device or medium, or from the Internet. Even if some of these sources are restricted, the operating system and the CPU don't mind accepting executables from them once they are in RAM; and if certain attributes or rights are required, there are plenty of existing apps that could lend out those rights. Why? All computer systems carry so many applications that it is impossible to know whether any of them would assist malicious code in gaining elevated admin rights. In simple terms: complexity is the enemy of security. Once an attacker has admin rights, there is little we can do to limit the damage, and the device can no longer be called trustworthy. Everything that sits on any read-writable device must be considered potentially malicious.
A fundamental tool for security is to create simplicity. If we separate data into executable data and passive data, and if we separate the pathways to the CPU into pathways for trustworthy instructions and pathways for passive data (as in the Harvard-Architecture), then we only have to make sure that we control the source of the trustworthy software, e.g. a single secure drive that exclusively contains all executable files.
A Deeper Dive into Harvard Architecture
A pure Harvard-Architecture would accept only machine instructions as software: no bytecode and no scripts. But we are in 2021; we have .Net, Java, Python, and many more script languages. It is inconceivable to advocate the abolishment of bytecode, VMs, scripts, or macros, even for trustworthy computation. Native compilation exists for .Net and Java (and Python), but I believe we don't need to go down that path.
Instead, we could segment the RAM into the following ranges: "Secure Code RAM" for machine code; a separate "Secure Data RAM" for bytecode and other script code; and a third range for passive content data from all non-trusted data sources. However, on the secure drive we must mark all files containing bytecode, script, or VM code so that we can protect this code from being analyzed or manipulated. Do we need to mark the apps that can process bytecode or scripts? No. More about that later.
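To illustrate the intended segmentation, here is a minimal Python sketch. The range names follow the text; the rule below is my own simplified model of what the hardware would guarantee.

```python
from enum import Enum, auto

# Illustrative model of the proposed RAM segmentation.
class RamRange(Enum):
    SECURE_CODE = auto()    # machine code, loaded from the secure drive only
    SECURE_DATA = auto()    # bytecode/script code, from the secure drive only
    PASSIVE_DATA = auto()   # content data from all non-trusted sources

class Source(Enum):
    SECURE_DRIVE = auto()
    OTHER = auto()          # read/write drives, removable media, Internet, ...

def may_load(source: Source, target: RamRange) -> bool:
    """Only the secure drive may feed the two secure RAM ranges;
    everything else ends up in the passive-data range."""
    if target in (RamRange.SECURE_CODE, RamRange.SECURE_DATA):
        return source is Source.SECURE_DRIVE
    return True
```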
The biggest takeaway: if a CPU in Harvard-Mode cannot execute apps, scripts, or macros from other (non-secure) drives, then the trustworthiness of the system depends solely on the software from the secure drive. If we are sure that this software is untrustworthy/unreliable only because software contains bugs, then there is no threat from code that could manipulate existing software on the system, and no threat from RCE. Software bugs or untrustworthy code on the secure drive could still threaten data, but that type of problem can be dealt with by developers outside the attacked system.
Deeper Dive into Software Updates
Ideally, all software on the secure drive would be read-only, giving us an additional layer of protection against hidden modification of that software; but this feature (as explained above) would contradict our need to fix erroneous software.
Instead, there are other methods/rules we should apply to facilitate updates:
- (1) we should not allow software to be updated from addresses that are provided by untrustworthy content (i.e. from read-write drives, the Internet, etc.);
- (2) we should not allow software from the OS, or software requested or processed (including decrypted or decompressed) by the CPU, to write to the secure drive; no single person on earth understands all of this in detail well enough to say: we can trust every component involved;
- (3) we should not allow any encryption/decryption key involved in the exchange of data to appear in clear text (no exception); if there is even a remote chance that an ASI could steal a key, then that key must be considered compromised. Item 3 in my post "Standards for ASI Safety" explains more details;
- (4) we should validate the hashcodes of each file and/or software package independently against values received from a separate, independent hashcode-providing server (also received encrypted, with no chance of exposing any key involved);
- (5) actually, all of the operations (1) to (4) above should be carried out by a Storage Watchdog, as described in items 6 (and 7) of my post "Standards for ASI Safety". The CPU or DMA is not allowed to touch any file before it has gone through the Storage Watchdog for hashcode validation (a minimal sketch of this flow follows this list). The Storage Watchdog is simpler and therefore easier to understand for the people vetting that technology.
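As a minimal sketch of how rules (1) to (5) fit together, assume a Storage Watchdog that receives an update package and an independently delivered reference hash. All names are placeholders; in the proposal the watchdog is a separate hardware component, and the transfers are encrypted without exposing any key.

```python
import hashlib
from typing import Callable

def watchdog_install(fetch_package: Callable[[], bytes],
                     fetch_reference_hash: Callable[[], str],
                     write_to_secure_drive: Callable[[bytes], None]) -> None:
    package = fetch_package()            # from the accepted update channel
    reference = fetch_reference_hash()   # from an independent hashcode server
    computed = hashlib.sha256(package).hexdigest()
    if computed != reference:
        # Rule (4): mismatching hashcodes mean the update is rejected.
        raise RuntimeError("hash mismatch: update rejected by the watchdog")
    # Rules (2) and (5): only the watchdog writes to the secure drive; the
    # CPU, OS, and DMA never touch the file before this validation.
    write_to_secure_drive(package)
```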
Then we are left with humans in charge of delivering high-quality code. But the good thing is: attackers, including an ASI, have far fewer options to do damage, except to smuggle some code onto the secure drive (more about that below). If the system requires a browser, then everything related to that browser could additionally be put in a tight sandbox and validated via hashcode prior to any execution, and/or all scripts (within a browser page) could be deactivated. The browser would fetch scripts outside the secure drive into a RAM range called "Validated Data RAM", as mentioned below; for some systems, this might be prohibited via hardcoded restrictions (hardcoded as in hardware).
However, some processors or systems are required to have a "Validated Data RAM" for data/scripts/bytecode that has been checked independently but whose safety we cannot be 100% sure of. This situation can occur in computer centers when certain enterprise software must run.
If users need additional (commercial) software, then this software has to go through acceptable installation and update channels; not even a sysadmin is allowed to install new software directly. So there is no reason why the software of a trustworthy (Harvard-based) system cannot be extended with additional software in a trustworthy manner. I also believe that the argument that there are no safe updates can be disproven: there is no reason to believe that we cannot make the update process tamper-evident based on First Principles.
Protection Against Code From Within
So, whatever a user is trying to do on a truly trustworthy system, whether getting files from a removable drive or even using content from the Internet, the system must remain trustworthy. The "price" that needs to be paid: data must be separated into machine instructions, secure script/bytecode data, and passive content from drives before they are provided to a CPU running in Harvard-Mode. Additionally, executables are managed by an independent Storage Watchdog, which also operates in Harvard-Mode, so that we still get updates for our software, but in a more controlled and more easily protected manner. For users, there should be no detectable difference, except that they cannot access files on the secure drive directly (meaning they will not be able to see the bytes in a hex editor).
The worst-case scenario would be that an attacker (e.g. an ASI) managed to manipulate a file on the secure drive; that could only have happened during development. Such a malicious executable could do damage to passive data, but it could neither read, access, nor manipulate its own executable code or the instructions of other executables. The Harvard-Architecture strictly separates the RAM into memory for executables, RAM for secure script/bytecode data, and RAM for passive data. Based on First Principles, hardcoded in hardware and therefore insurmountable, executables in the secure RAM ranges can never be treated as passive data, and passive data from the non-secure drives cannot make it onto the pathways for executable instructions, even if we start a VM on a CPU in Harvard-Mode.
Because the handling of executables and bytecode/script data within the CPU's cache is partly under the control of the OS, and thereby of software, we need some additional modifications to the CPU when it runs in Harvard-Mode to prevent the modification of any executable software and secure bytecode/script within the cache and, via syncing, within RAM. I am making an assumption here: the actual circuits responsible for doing the computation, i.e. the ALU, the FPU, and even the processor registers, are shared whether the CPU is in von-Neumann-Mode or Harvard-Mode.
Most von-Neumann systems are actually designed as a Modified Harvard Architecture: these CPUs already have an L1 cache that is split into instructions and data, but otherwise carry instructions and passive data on the same bus. Because we cannot trust software, even if it is validated, the L1 cache needs to be either extended or enhanced:
- (1) Extended: we introduce a third L1 cache for Harvard-Mode; data from this cache is used only when bytecode or scripts are being processed, and the CPU knows this because a corresponding status bit is set. Additionally, if we accept validated code, we need a fourth L1 cache and another status bit in the CPU indicating that validated code is being processed. Or
- (2) Enhanced: we keep a single L1 data cache and add status bits that indicate the type of RAM range the data came from: secure bytecode/scripts, validated bytecode/scripts, or passive data from other drives. A status bit within the CPU determines what kind of data the CPU can accept as data within its (potentially modified) instruction set. These additional status bits could also be carried in the higher-level caches (L2, L3, or L4) to prevent any confusion about what type of data they hold. A small sketch of this option follows below.
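Here is a small Python model of option (2), purely to illustrate the acceptance rule; the class and field names are mine, and real hardware would implement the tags as per-cache-line status bits rather than objects.

```python
from dataclasses import dataclass
from enum import Enum, auto

class LineType(Enum):
    SECURE_SCRIPT = auto()     # bytecode/scripts from the secure drive
    VALIDATED_SCRIPT = auto()  # externally validated bytecode/scripts
    PASSIVE = auto()           # passive data from other drives

@dataclass
class CacheLine:
    data: bytes
    line_type: LineType        # stands in for the per-line status bits

def cpu_accepts_as_script(line: CacheLine, harvard_mode: bool,
                          accept_validated: bool) -> bool:
    """In Harvard-Mode the CPU feeds a cache line to the bytecode/script
    processing path only if its status bits allow it."""
    if not harvard_mode:
        return True                     # von-Neumann mode: status quo
    if line.line_type is LineType.SECURE_SCRIPT:
        return True
    if line.line_type is LineType.VALIDATED_SCRIPT:
        return accept_validated         # only for computing centers
    return False                        # passive data is never executed
```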
There is another conceptual difference between Harvard and von-Neumann Architecture beyond the separation of pathways, and it concerns the internal handling of addresses in the ACU or AGU (Address Calculation/Generation Unit). A von-Neumann system has only one kind of address, pointing to a memory cell within a single virtual address space. Harvard systems require at least two independent address values, which must be physically separated so that no algorithm is even theoretically able to copy data from the secure instruction/bytecode memory ranges into the (unsecure) memory range holding regular passive data. The handling of address values within the processing unit may therefore require (new) special CPU instructions (e.g. branch operators) for dealing with address-related values when the CPU operates in Harvard-Mode.
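As a rough illustration of what "physically separated address values" means, here are two deliberately incompatible address types; the names are mine, not a real ISA, and in hardware the separation would be independent address registers and buses rather than types.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodeAddress:
    offset: int   # valid only within the secure instruction/bytecode ranges

@dataclass(frozen=True)
class DataAddress:
    offset: int   # valid only within the passive-data range

def copy_passive_data(src: DataAddress, dst: DataAddress, length: int) -> None:
    """A hypothetical Harvard-Mode copy operation: its operands can only
    name data-space addresses, so a copy out of the secure code range into
    passive-data memory simply cannot be expressed."""
    if not isinstance(src, DataAddress) or not isinstance(dst, DataAddress):
        raise TypeError("Harvard-Mode copy accepts data-space addresses only")
    # ... the actual transfer of `length` bytes would happen here
```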
The complexity of the instruction set is vast and intimidating. Stefan Heule (2016) tried to answer the question "How Many x86-64 Instructions Are There Anyway" and arrived at 2,034. Or see Fabian Giesen's blog post (2016) "How many x86 instructions are there?", which counts 1,503 x86 instructions. There is no way we could trust all these instructions.
Any instruction that provides an address as an output value, and potentially any instruction that uses values from bytecode, scripts, or VM code as input (within Harvard-Mode), may have to be replaced by a Harvard-Mode-specific instruction in a modified compiler. These additional Harvard-Mode instructions could then also be used to prevent software developed for Harvard-Mode from running directly on von-Neumann systems: software compiled for one architecture would contain instructions that are not usable on the other. Nevertheless, such software could still be debugged and tested within a VM (e.g. VMware) running on top of a von-Neumann system.
What Steps Can Be Taken?
My suggestion would be to prepare the motherboard, including CPU, RAM, and DMA (Direct Memory Access), to run in both architectures while sharing as many internal components as reasonably possible. However, the user should be able to choose (preferably with an electromechanical switch) between Harvard-Mode and von-Neumann-Mode, and potentially another switch could make the system accept (externally) validated code when in Harvard-Mode (for computing centers only). Furthermore, the bus modifications on the motherboard must also consider that GPUs currently receive their data and instructions over the main bus. If we want to prevent an ASI from compromising the security of a system, then we must deny the ASI any chance to get its instructions into the GPU.
Some systems must be trustworthy no matter what, in particular in national security and defense. These systems must be protected with additional hardware or organizational measures so that they cannot enter von-Neumann-Mode, whether accidentally or through human traitors.
Because the type of data (executable, .Net assembly, script, etc.) on regular drives is detected by the OS and DMA via existing file attributes and file extensions, this kind of separation must be available on secure drives immediately: we must be able to detect executable machine code and bytecode/scripts before the code is analyzed by the OS. Therefore, we need to extend the attributes of the file system on the secure drive accordingly (a small sketch follows).
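A minimal sketch of such extended file metadata could look as follows. Names and fields are illustrative; the point is that the content class is an explicit attribute set when the file is installed, not something the OS infers later from file extensions.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ContentClass(Enum):
    MACHINE_CODE = auto()
    BYTECODE_OR_SCRIPT = auto()   # .Net, Java, Python, macros, ...
    PASSIVE_DATA = auto()

@dataclass
class SecureFileEntry:
    path: str
    content_class: ContentClass   # explicit attribute on the secure drive
    sha256: str                   # reference hash used by the Storage Watchdog

def target_ram_range(entry: SecureFileEntry) -> str:
    # Mapping from content class to the RAM ranges introduced earlier.
    return {
        ContentClass.MACHINE_CODE: "Secure Code RAM",
        ContentClass.BYTECODE_OR_SCRIPT: "Secure Data RAM",
        ContentClass.PASSIVE_DATA: "Passive Data RAM",
    }[entry.content_class]
```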
Additionally, if we are concerned that a developer or manufacturer could have left spyware or a backdoor in any of the apps on the secure drive, then we need to include port-certificates in the firewall so that we accept only certified exchange protocols whose structure and purpose have been independently validated. The idea is described in more detail in item 8 of my post "Standards for ASI Safety". However, port-certificates work only on non-HTTP ports.
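Under my reading of that idea, the firewall check itself is simple: traffic on a certified port passes only if its (port, protocol) pair is covered by an installed port-certificate. The following Python sketch is illustrative only; all names and the example entry are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortCertificate:
    port: int
    protocol_id: str   # identifier of the independently validated protocol
    issuer: str        # the party that validated structure and purpose

def firewall_permits(port: int, protocol_id: str, certified: frozenset) -> bool:
    """Only traffic whose (port, protocol) pair is certified is allowed;
    everything else on a certified (non-HTTP) port is dropped."""
    return (port, protocol_id) in certified

# Usage sketch with a made-up certificate entry:
certs = frozenset({(8443, "vendor-telemetry-v2")})
assert firewall_permits(8443, "vendor-telemetry-v2", certs)
assert not firewall_permits(8443, "unknown-protocol", certs)
```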
In summary: a system with a CPU in Harvard-Mode, receiving its software instructions from a secure drive with a Storage Watchdog, together with port-certificates in firewalls, is a trustworthy system beyond the reach of any malware, viruses, ransomware, rootkits, spyware, or backdoors. I see no reason why a CPU in Harvard-Mode should not deliver the same performance as a von-Neumann system. For regular users, the two are equivalent and indistinguishable; for developers, it depends: compiler developers will see a difference, application developers should not have to care.
Additional Applications: Kill-ASI and PKI
The notes above are about trustworthy systems, not necessarily about the machines users work with in their daily routine at home or even at their regular jobs. But there are some takeaways to be considered for all machines if we want to protect them against being retaken by an ASI. In a Kill-ASI event, we must turn every regular device into a trustworthy device. Therefore we must switch systems into Harvard-Mode when the Kill-ASI switch is activated, and keep them in Harvard-Mode for as long as necessary to prevent the machine from being retaken by the ASI.
To clarify my position: I am not advocating the abolishment of the von-Neumann Architecture as the main operating mode of CPUs in favor of Harvard systems by default. Developers need von-Neumann systems for debugging, and the segmentation of RAM in Harvard-Mode is not necessarily optimal for efficient memory management.
Harvard-Mode should only be used if we are very serious about trustworthiness, and that goes beyond being angry or frustrated about malware, viruses, spyware, ransomware, or rootkits; that pest can be dealt with using less radical tools than putting a CPU into Harvard-Mode.
Finally: we often have data on computer systems that must be protected based on First Principles: key data and access credentials. I believe we need to protect not only private keys (including private session keys) but also public keys. I discuss this issue in more detail in my post "Are we in Denial?". Right now, public keys are used as part of a certificate to authenticate users or services, but I believe that is short-sighted. We assume that private keys cannot be derived from a public key except via a brute-force attack or a quantum computer. I could argue with First Principles: the current protection of PKI is based on a shared belief, on an empirical assumption, not on a provable fact. There are additional reasons for my suggestion to protect public keys as well, but I will discuss them in a separate post on Key-Safes.
Some Additional Remarks on First Principles in Kill-ASI and Crypto Certification:
(1) Switching off the power of a device with a presumed ASI on it is only a very temporary solution. After power is restored and the machine is switched on again, the ASI will operate as before. Even if we try to reinstall the OS from a CD, the ASI may have corrupted the BIOS and would survive the formatting and reinstallation of the OS. The ASI may even control the entire process and show the user only what he expects to see. For legacy systems, the only First-Principle solution is to destroy the device without giving anyone a chance to reuse any of its components.
(2) An ASI could have planned its own survival from the first moment it started to have a plan for itself, which could be much earlier than we became aware that it might have an agenda. Legacy devices would play a significant role in its survival plan. If we want a chance to eradicate an ASI, then we must be sure we have a full accounting of all network-capable devices, globally. In countries with many new devices, we have a much higher chance of detecting and tagging all legacy devices with the help of users/owners, who could be alerted when an unregistered/untagged device is detected in their local network. The decision to raise an alarm would be based on the detection of a new/unknown IP and MAC address (see the sketch below). However, the MAC address can be spoofed by the OS, which makes the task of eradicating an ASI more difficult and less reliable. We must be prepared for the ASI to trick us successfully; it is uncertain whether we can eradicate it.
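The alarm logic itself is trivial; here is a sketch. The registry entries are made-up examples, and how the network scan is performed is out of scope here.

```python
# Registered/tagged devices in this private network: (MAC, IP) pairs.
KNOWN_DEVICES = {
    ("aa:bb:cc:dd:ee:01", "192.168.1.10"),   # example: tagged phone
    ("aa:bb:cc:dd:ee:02", "192.168.1.11"),   # example: tagged laptop
}

def check_device(mac: str, ip: str) -> bool:
    """Return True if the device is known/tagged, otherwise raise an alarm.
    Caveat from the text: MAC addresses can be spoofed by the OS, so a clean
    scan is not proof that no legacy device is hiding in the network."""
    if (mac, ip) in KNOWN_DEVICES:
        return True
    print(f"ALARM: unregistered device {mac} / {ip} detected")
    return False
```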
(3) In a Kill-ASI situation, we would restart a device that has ASI-Safety components in a safe mode, with safe software from a secure drive with a Storage Watchdog. The ASI would be removed from any HDD/SSD, and the device could then serve as a honeypot for any ASI software trying to connect to another instance or component of itself. Additionally, the Wi-Fi of these safe systems would listen for other devices in its private network; in particular, if an ASI tries to connect in a peer-to-peer manner, we would have a suspicious sign of ASI activity.
(4) Switching off the Internet or internal routers would not be a First-Principle solution either, because an ASI would get along just fine: it could use most network components in a peer-to-peer manner. The only way to eradicate an ASI is to find it and delete its entire memory (in RAM and on drives). Finding an ASI is only possible if it uses the network in a suspicious manner. It is quite possible that we would never stop searching for network signals from an ASI after initiating the Kill-ASI signal, and we might never be sure that it won't reemerge later.
(5) The fastest and cheapest way to get rid of possible ASI hideouts in the Third World is to give everyone a new phone with ASI-Safety and then destroy every single old one. To be sure, we would also need people to go literally door to door in all countries, making sure that all devices have the Primary ASI Security Layer activated (see item 12 in my post "Standards for ASI Safety") and that all removable storage media have Secondary ASI Security measures active.
(6) There is a certain irony in how the government's general insistence on certification (FIPS 140-2, FIPS 140-3, and ISO/IEC 19790) for encryption/decryption (crypto) hardware undermines the safety and security of these devices. Software is an integral part of the safety/security of crypto hardware. If a security bug is found in its software, the manufacturer must go through the certification process again, which costs money and a lot of time. So the manufacturer has no real incentive to find problems; they leave it to outside experts, but mainly to motivated adversaries, who will certainly say nothing. As a consequence, crypto hardware may be called certified safe while in fact being unsafe and untrustworthy, partly because the manufacturer has no incentive to find and fix problems.
(7) Crypto hardware has another problem: it is part of a system that is untrustworthy. The API to the crypto hardware can be misused by any software on that system, including software that was never considered by the system designer or security engineer. The use of a reliable hardware component within an unreliable system puts the entire security in question; as a quick fix, additional measures have to be taken, e.g. to prevent insiders from infecting the system (but is that the only problem?). Using crypto hardware does not make the system safer or more secure than it would be without it; it creates only a false sense of security and safety. Although a trustworthy system based on First Principles conceptually has the same problem with undetected malicious software on the secure drive, detecting and fixing/removing that software on the secure drive is a task the team of system developers can handle as part of their vigilance and regular maintenance.
A Final Note:
If someone really believes that we can create a trustworthy system using the von-Neumann Architecture, without a Storage Watchdog and/or port-certification in firewalls, then I would really like to see that happen. I cannot rule out that some additional security-ring type of solution could prevent software from hacking/modifying other software; CPU manufacturers may be able to make that happen. But the question is: would these solutions be simple enough that we can analyze all possible cases and say at the end: yes, it all checks out, it's safe? I am not saying security rings should be scrapped, but having them as the last line of defense is not smart. I am pretty sure we will have AI/ASI that can think through complexity with much more ease than we humans can.
Using a von-Neumann Architecture system with a Storage Watchdog for executables and port-certification in firewalls could be a workaround, at least for a while. But for a truly trustworthy system, we would need to be much more hawkish about what is running on the system at any time, potentially applying additional organizational measures. So the good news is: for now, pre-ASI, we may get away with still using von-Neumann for a trustworthy system, combined with the workaround mentioned above, but we must plan ahead and be prepared for ASI as part of our digital ecosystem.