This post is long, and I will not even go into details. My goal is to show that there are many technical solutions that have not been explored yet.
ASI Safety is a global problem. Without a comprehensive solution, we have no solution at all. If we have to deal with a serious ASI threat using thousands of different solutions, then the odds are probably against us: we lose. There is no advantage in going to war with thousands of different types of ammunition. We should focus on a few main technical solutions, ideally a single bundle of ASI Safety technologies implemented in all new devices, together with another single bundle of technical solutions for legacy systems. We can then focus our intellectual resources on optimizing and vetting the most promising ones.
In this blog I want to focus only on the technical aspects. But clearly, a comprehensive solution requires many political decisions — starting with the decision that we need to start investing in ASI Safety and that, e.g., governments have the legal authority to demand the destruction of every unprotected IT device in case of an ASI emergency. There could be other political/legislative decisions that would simplify the implementation of ASI Safety and/or make it easier, faster, and cheaper. I will comment on these political questions in a separate post.
Standards are already incredibly important in technology. A standard signals to the market that not just one or a few manufacturers, but every company intending to provide a solution in that domain, can adopt it easily. With standards, companies can’t differentiate their products from each other as well, so derivative features can take center stage. In fact, most new technologies won’t go mainstream without being based on a standard. Supporting a standard becomes a simple but essential checkmark, which means delivering that support must be as cheap and easy as possible. Therefore, it makes sense to open-source many details contributing to the standard as soon as possible.
I believe that ASI Safety urgently requires multiple standards before it can be commercialized. In full disclosure, I have multiple patent-pending ideas; almost everything mentioned on this website has been filed at the USPTO. I have discussed the patenting issue in another blog post. At the earliest (based on experience), I expect some patent claims to be allowed around the end of 2021. Furthermore, I have not authored licensing terms yet. But I don’t want either issue to hold anyone back from investing money, time, and resources into any technology development related to ASI Safety.
Therefore, I can promise here that for businesses and institutions I will offer a Fair, Reasonable, And Non-Discriminatory (FRAND) license. A license will be for a corporation that sells hardware or provides commercial software or services within the scope of my patents. In general, I want to make all software features royalty-free. I promise that I won’t let royalties or patent-related issues get in the way of my primary goal: broad support for standardized ASI Safety technologies.
My approach to safety/security standards is that they are a promise to the consumer, the manufacturer, and the developer: we take care of safety and security details, once and for all. Being standard-compliant then means that any further development in ASI Safety can and will be done by dedicated experts.
At this point, I would recommend having corresponding standards for the following features:
- Kill-ASI-Switch
- Protected Backup Storage
- Key Safe
- TLS+
- Secure Confirmation Interface
- Updating Read-Only Storage (Secure Update for code run in the CPU’s Harvard-Mode)
- Software Validation and Updating (Anti-Malware/Virus solution)
- Port Certificates in Firewalls (Anti-Spyware/Backdoor solution)
- ASI Watchdogs
- Platform-independent App Watching
- Imprints in Executables
- Primary/Secondary ASI Security Layer
- Non-Compliance Tagging
There are additional ideas for useful ASI Safety technologies, but I have decided to discuss them later. For example, I believe we can and should automate the organizational work related to the Kill-ASI event as efficiently and comprehensively as possible, so that everything ASI Safety-related is done as quickly and seamlessly as possible.
When I go through scenarios, I am aware that there is still a lot to be done before we would be able to truly eradicate a rogue ASI. For example, ASI could try to utilize resources in a hidden way and build its own shelter and means to independently threaten humans with extinction (thereby creating mutually assured destruction). That means we probably need an accounting and automated tracing system for resources (of all types: materials, computational, …) and a way to control the manufacturing/prototyping process without turning our world into a surveillance state.
There is another important idea that is not listed above: using CPUs with a Harvard Architecture to create trustworthy computer systems. I posted the details in “Safety based on First Principles”.
I have read many articles and book chapters on how standards are successfully developed and how they sometimes fail to deliver. Learning from the success cases, I would prefer that small, smart groups form around each of the above-mentioned topics, rather than large groups with too many interests involved. These small groups would develop standards without undue outside influence. SSL, TLS, and AES are good examples of how we get high-quality results. If we are lucky, we may even end up with multiple standard proposals and could pick and choose for the final version.
Within this process, it would be extremely beneficial if many experts with different viewpoints used their intellectual sledgehammers to determine the pros and cons of features and details before, during, and after a standard proposal is available. Transparency is essential for trust. And without trust, we won’t have the global buy-in that is required to make it happen commercially. I hope that after a few iterations and some vigorous debates we would arrive at Standards that could survive the additional (predictable) technical progress — and that these Standards would be able to serve as one of many lines of defense against a (yet) unknown ASI when we depend on it most.
And yes, I believe we can anticipate and therefore also predict some aspects of future technical progress. You may ask what we will do about technical surprises. They will come, but we can expect that these new technologies will either adapt to an existing Standard, as happens today, or amend the standard and make enhancements in the spirit of what we want to accomplish in ASI Safety.
I want to say at least something about each of the above-mentioned “project” names. Understandably, I hope, I can’t go into all the details of what we need from each Standard. I prefer to describe the goals; at some point I may post additional thoughts on what I hope we could get out of a specific standard listed above and why it could be a great thing to have. If you feel you could contribute, please let me know so that we can talk. We will need leadership in each of the above-mentioned topics.
I know the real hard work must be done by experts and then commented on by the community of engineers, academics, developers, entrepreneurs, lawmakers, and voices from the public at large vetting the solutions/proposals. I already anticipate that some solutions will be controversial, but doing nothing could be controversial as well. Not every solution or defense measure must be done with radical perfection. Not every standard is the last line of defense; more importantly, we need broad acceptance, and prior to a Kill-ASI event, corporations (and potentially governments) are in charge. We need to accept that people will want to deactivate certain parts of the ASI Safety Technology before the rise of ASI because of concerns around government overreach. And that is OK, because this concern is justified. Additionally, we need redundancy in ASI Safety. If there are other ideas on how we could limit ASI’s actions or get information about its intentions, then let’s bring them up. There is still time to vet other ideas — but if new solutions can’t be implemented in ASI Safety 1.0, then let’s prepare them for the second generation of ASI Safety. Not every good or even great idea must be included in a Standard.
1. Kill-ASI-Switch
The Kill-ASI-Switch feature will be required for every (new) IT device with a CPU and with updatable or modifiable software. It is designed so that, after activation of the switch, every device that might have been under malicious ASI control is back under full human control. Humans are already outnumbered by electronic devices. Therefore, asking “everyone” to reset all their devices is unrealistic — too many devices would simply be forgotten or overlooked, and humans would quickly become confused or overwhelmed about what they have already reset. Without outside support, a full reset of all IT devices is doomed to fail. On the other side, a reset that is activated remotely is vulnerable and therefore a target for manipulation or cyber-warfare. Furthermore, a reset triggered from the outside without prior warning could harm the very people the Kill-ASI-Switch is trying to protect.
It is assumed that the Kill-ASI Signal is the result of a political process (or of an automated process indicating that no humans are in control anymore, i.e. that they have potentially been wiped out). This Kill-ASI Signal is sent out via multiple channels (the internet, but mainly via radio) and received by the Kill-ASI-Switch. To give humans some time to finish whatever they are doing with their device, they are notified via their devices that the device will automatically restart in a few minutes (with the option to postpone it for some additional time). Users should also be able to reset their devices manually.
As a result of an issued Kill-ASI order, every reset device is in a safe mode in which all (or most) device features are available to the user, while ASI on that device is or would be killed, i.e. it would have no access to the CPU: all software would be taken from (updatable) read-only storage media (see below: “6. Updating Read-Only Storage”) and the CPU would operate in a Harvard-Mode, in which instruction pathways are separated from data pathways — so that we can be sure, based on first principles, that instructions cannot come from untrustworthy sources (like hard drives or any read/write (RW) devices). RAM would be physically segmented into memory ranges for safe (machine) code, for safe bytecode/scripts (both from the RO storage), and a range for passive data (content) from all other drives. These ranges must be strictly separated within the RAM.
The reset would also trigger an automated cleaning of the HDDs/SSDs and turn every reset device into an automated listening post for ASI network activities. This ASI safe mode would also turn on the ASI Safety Layers in all interfaces for removable storage devices (see below: “12. Primary/Secondary ASI Security Layer”). This ASI safe mode may become the new normal, or it could be turned off again by users once the crisis is declared over.
The problem is that legacy devices can’t be reset automatically by that radioed signal without additional hardware that would then detach these devices from power (including batteries and/or energy harvesting from the environment). However, for security reasons, the radioed message should not interrupt any power supply directly – instead, the Kill-ASI-Switch component would send out wireless messages directly to known local components.
Additionally, every Kill-ASI-Switch should have dead-man-switch features: if no “all-good” signal is received, either directly via radio or from other devices, then the Kill-ASI-Switch would be activated automatically.
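To make the interplay of the radioed Kill-ASI signal, the user countdown, and the dead-man feature concrete, here is a minimal sketch in Python. The message names, the timeout values, and the `key_safe`/`device` interfaces are my own illustrative assumptions, not part of any proposed standard; signature verification is reduced to a placeholder for what a real Key-Safe (see “3. Key-Safe”) would provide.

```python
import time

# Hypothetical timeout values -- a real standard would have to fix these.
ALL_GOOD_TIMEOUT_S = 7 * 24 * 3600   # dead-man trigger after a week of silence
USER_GRACE_PERIOD_S = 5 * 60         # countdown before the automatic reset

class KillAsiSwitch:
    """Sketch of the switch logic: authenticated signals in, reset decision out."""

    def __init__(self, key_safe, device):
        self.key_safe = key_safe            # verifies signatures, never exposes keys
        self.device = device                # notifies the user and performs the reset
        self.last_all_good = time.monotonic()

    def on_radio_message(self, message: bytes, signature: bytes):
        # Only messages the Key-Safe can authenticate are acted upon.
        if not self.key_safe.verify(message, signature):
            return
        if message == b"ALL-GOOD":
            self.last_all_good = time.monotonic()
        elif message == b"KILL-ASI":
            self.start_reset_countdown()

    def check_dead_man(self):
        # Called periodically: no authenticated "all-good" for too long -> reset.
        if time.monotonic() - self.last_all_good > ALL_GOOD_TIMEOUT_S:
            self.start_reset_countdown()

    def start_reset_countdown(self):
        # Give the user a grace period; they may also reset immediately by hand.
        self.device.notify_user("Device will restart into ASI safe mode",
                                seconds=USER_GRACE_PERIOD_S)
        self.device.schedule_reset(after_seconds=USER_GRACE_PERIOD_S)
```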
Finally, it will be critical that the Kill-ASI-Switch only acts on messages it can trust and authenticate. Therefore, it must utilize a KS/EDU that doesn’t allow any leakage of keys (see below: “3. Key Safe”).
The hardware of a Kill-ASI-Switch will be different for different types of devices, even if its function and major components are the same or similar; that does not require a standard. But the Kill-ASI signal needs to be standardized: how the switch authenticates the signal and how this fits together with the dead-man feature. It would also make sense to standardize how the other ASI Safety features are triggered, like the HDD/SSD clean-up, the activation of the ASI Safety Layer for removable memories, how the device listens for still-existing ASI chatter on the network, and how the hardware goes back into the same configuration with the same settings as before (see below: “2. Protected Backup Storage”). I believe it will be essential that the Kill-ASI-Switch has a separate, dedicated (read-only) memory for the CPU that is switched into Harvard-Mode, so that we know that, after activation of the Kill-ASI-Switch, only trustworthy instructions from the RO memory are used from the moment of the reboot. (See more details in the post “Safety based on First Principles”.)
If ASI Safety Technology is implemented as a bundle and not piecemeal, then I can imagine that the radio signal could send a message that activates ASI Watchdog features for every IT device, i.e. changing the default setting from optionally active to active by default. In the pre-ASI era, we should be careful about what we expect citizens to accept as a possible spy tool in the background. A covert activation of watchdog features is something that a Standard should explicitly prevent; if it is activated, then a user must see it and must be able to manually deactivate it. An ASI that operates directly on the CPU, and not via a VM on the device, would not be able to determine whether the ASI Watchdog is currently active — all of the watchdog’s monitoring activities are hidden from the software being watched.
I can imagine a few additional features, but we should keep in mind that complexity is the enemy of security. We are much better served if we keep the Kill-ASI-Switch simple.
2. Protected Backup Storage (PBS)
Every computer has settings that are essential and custom to the device hardware. For restoring the hardware and software to the state before the Kill-ASI event, it is important to have the same values that were stored and validated before that event. If the configuration is taken from memory (i.e. from HDD/SSD) that was also used by ASI, then the data are not reliable and/or could have been purposefully sabotaged by the ASI in order to prevent devices and their capabilities from helping humans after a Kill-ASI event.
Every hardware device needs a Protected Backup Storage (PBS) on which these configuration data are stored securely. Instead of modifying values, it would be better to create new values in new records while old values are marked as outdated but kept available, in case ASI tricked us into storing false data in our protected backup.
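A minimal sketch of that append-only idea: new configuration values never overwrite old ones; they are appended, and the previous record is merely marked as outdated, so a value planted by ASI can later be inspected and rolled back. The record fields and method names below are illustrative assumptions, not a specification.

```python
import time

class ProtectedBackupStorage:
    """Sketch of an append-only configuration store: nothing is ever overwritten."""

    def __init__(self):
        self._records = []   # in real hardware this would live in protected memory

    def store(self, key, value):
        # Mark any previous record for this key as outdated, but keep it available.
        for record in self._records:
            if record["key"] == key and record["current"]:
                record["current"] = False
        self._records.append({"key": key, "value": value,
                              "stored_at": time.time(), "current": True})

    def current(self, key):
        for record in reversed(self._records):
            if record["key"] == key and record["current"]:
                return record["value"]
        return None

    def history(self, key):
        # All versions remain inspectable in case a stored value was sabotaged.
        return [r for r in self._records if r["key"] == key]
```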
Normal devices could store user-related data allowing seamless access to user data on the HDD/SSD, while cloud service devices would need to store network access configurations and other data required for a seamless switch to a safe mode, without being dependent on data stored in files that might be vulnerable.
The PBS should not replace the need for creating regular data backups, nor help clouds operate customers’ enterprise solutions right after a Kill-ASI event; for these goals, proprietary solutions would be much better adapted. I envision the PBS as hardware on the motherboard that is used by the OS and then also, automatically, by the safe-mode code. The PBS should have some standard features to check whether a new configuration version is really being used by the system.
3. Key-Safe / Encryption Decryption Unit (KS/EDU)
The idea behind the Key-Safe (KS) is that no key should ever be made public. Even public keys are managed in a way that makes them inaccessible in clear text to anyone, no exceptions. All keys are transferred from Key-Safe to Key-Safe. All keys are used within the protected environment of the Encryption-Decryption Unit (EDU).
Each KS has a unique ID. As part of the manufacturing process and initialization, each KS has its own Private/Public Key pair (PPK) generated within the EDU. The hashcode of that PPK is then used to refer to it. The registration of the public key is done automatically in Trusted Key Repositories. Only these key repositories can give another Key-Safe a public key based on a provided hashcode. Providing a trusted environment is the task of an updated TLS protocol (see below: “4. TLS+”).
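As a rough illustration of “keys never leave the safe”, here is a sketch in which the private key lives only inside the object and callers work with the hashcode of the public key. I use Ed25519 from the `cryptography` package purely as a stand-in; the real KS/EDU would be hardware, and the `repository` interface is a hypothetical placeholder for a Trusted Key Repository.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

class KeySafe:
    """Sketch: the private key never leaves this object; callers see only a hashcode."""

    def __init__(self):
        self._private_key = Ed25519PrivateKey.generate()   # generated inside the EDU
        pub = self._private_key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        # The hashcode of the public key is the only external reference to this PPK.
        self.key_id = hashlib.sha256(pub).hexdigest()
        self._public_bytes = pub

    def register(self, repository):
        # Only trusted key repositories ever receive the public key material.
        repository.register(self.key_id, self._public_bytes)

    def sign(self, data: bytes) -> bytes:
        # Signing happens inside the safe; the key itself is never exported.
        return self._private_key.sign(data)
```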
The KS/EDU is used to hide all data exchange from ASI. It is anticipated that ASI could still see some exchanged messages, because it will have access to the senders’/receivers’ hardware and therefore to some unencrypted messages. More important than confidentiality are the integrity and authenticity of the messages. Some messages among the KS/EDUs can be kept confidential because the data are taken from a dedicated data bus between ASI-security-related components, like the ASI Watchdog components (see below: “9. ASI Watchdogs”) or components related to the Kill-ASI-Switch.
The KS/EDU will be provided in different versions (like Repo-KS, Site-KS, Device-KS, Micro-KS, and Nano-KS) depending on the context in which they are used and how much storage they need for additional keys. I will discuss the features and purposes of the different KS versions in a separate post.
The standards related to the KS have to cover all aspects, from (*) the initialization of the KS with public keys of preset directory, search, and update services, to (*) the features of the KS/EDU hardware and potentially its extensions, to (*) the pairing of KS within devices, to (*) digitally signing content data/hashcodes, and to (*) the exchange of data between devices and/or websites/services, which will be part of TLS+.
4. TLS+
TLS 1.3 is the established standard for encrypted data exchange. Unfortunately, TLS doesn’t cover anything related to the protection of its keys. Therefore, the primary method of attacking PKI and TLS is to steal the private key on a device. The victim won’t be aware of it and depends on the generosity of the attackers: whether they misuse the keys beyond breaching the confidentiality of messages, or use them to actively damage the victim. TLS does not provide tamper evidence that keys were misused.
The goal of TLS+ is to extend TLS 1.3 so that it can deal with Key-Safes and hashcode references and provide features that require additional confirmation, either via a Secure Confirmation Interface (SCI) (see below: “5. SCI”) or via another independent authentication.
TLS is the same regardless of the relevance or context of the data being encrypted. Currently, all additional steps of confirming transactions are done outside the protocol. There are advantages to that method (like flexibility and openness to innovation), but there are also serious disadvantages, as TLS’s API can be hacked or misused on either the sender’s or the receiver’s side.
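One way to picture the extension is as an extra, signed record exchanged during the handshake that carries Key-Safe hashcode references and a flag for whether an SCI confirmation is required for the transaction. Everything below (the record and its field names) is my own hypothetical sketch of what such an extension could carry, not an existing TLS extension.

```python
from dataclasses import dataclass

@dataclass
class TlsPlusExtension:
    """Hypothetical handshake extension carrying Key-Safe references for TLS+."""
    sender_keysafe_id: str            # hashcode reference of the sender's Key-Safe PPK
    receiver_keysafe_id: str          # hashcode reference of the receiver's Key-Safe PPK
    requires_sci_confirmation: bool   # transaction must additionally be confirmed on the SCI
    transaction_context: str          # e.g. "payment", "software-update", "config-change"
    signature: bytes                  # produced inside the sender's KS/EDU, never with exposed keys
```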
5. Secure Confirmation Interface (SCI)
The Secure Confirmation Interface is a second local interface that is completely independent of the primary operating system (OS). This confirmation is designed to be used by human users only. The SCI would replace phone-based confirmation, as phones are vulnerable to malware attacks and are no match for an ASI trying to manipulate eCommerce transactions.
The main screen of an IT device is under the control of the operating system. Operating systems are complex, and in the end the security of these systems cannot be guaranteed. Suppose a user says that they didn’t order something, or that the number of items was different from what they ordered from a company. The customer may say: ASI made these changes without my consent or knowledge. Who will pay for the damage? The company or the customer? What is at stake is the loss of trust, and ASI will be blamed whether it was responsible or not. Trust can only be re-established if there is a tamper-proof chain of evidence that makes every participant accountable for his or her actions.
The Secure Confirmation Interface can also be used by multiple client-side components: the Protected Backup Storage (PBS), the KS/EDU, confirmations requested by Watchdogs, the Primary/Secondary ASI Security Layer, or the Kill-ASI-Switch, as well as for the installation of new software or for accepting new devices (including video cams and/or legacy devices) into a user’s internal network. The SCI could be connected with the other security components via a dedicated, separate trusted bus.
The SCI should be simple and not too flexible, as users should only be required to confirm or reject a transaction (or at most postpone/delay a decision). Templates would be used together with a few variable elements (like transaction amounts and/or names) that would be logged, so that the person using the interface has no plausible deniability.
The SCI will be used by multiple components and standard transactions — and developers, software/hardware vendors, and e-commerce sites should be able to rely on the availability of reliable confirmation capabilities. I would assume that 160 characters would suffice – together with 2 or 3 buttons. Additionally, I would even recommend having the screen create a variation in brightness that could be detected by video cameras, helping to determine whether the owner of the device is the person currently confirming a request — later this may become more important when we need to be sure that the SCI is not being confirmed by a robot.
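To illustrate the template idea: the SCI only ever renders a pre-registered template with a handful of variable fields, keeps the result within the 160-character budget, and logs what was shown and how it was answered. The template IDs, field names, and the logging format are illustrative assumptions; `input()` stands in for the SCI’s own display and buttons.

```python
import time

TEMPLATES = {
    # Pre-registered templates; only the variable fields change per request.
    "ecommerce.order": "Confirm order of {items} item(s) for {amount} at {merchant}?",
    "software.install": "Allow installation of {package} from {vendor}?",
}

def build_confirmation(template_id, **fields):
    text = TEMPLATES[template_id].format(**fields)
    if len(text) > 160:
        raise ValueError("SCI messages are limited to 160 characters")
    return text

def confirm(template_id, audit_log, **fields):
    text = build_confirmation(template_id, **fields)
    # The real SCI would render this on its own display and read its own buttons.
    answer = input(text + " [confirm/reject/postpone] ").strip().lower()
    # Logging both the rendered text and the answer removes plausible deniability.
    audit_log.append({"time": time.time(), "template": template_id,
                      "fields": fields, "shown": text, "answer": answer})
    return answer == "confirm"
```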
6. Updating Read-Only Storage
Every device with a Kill-ASI-Switch should be required to have dedicated software for the safe mode. Safe software must be provided via separate instruction pathways to the CPU in Harvard-Mode. Even if the safe-mode and regular-mode software are the same, there should be strict separation and no shortcut with a common source of files for both modes.
The method by which Read-Only (RO) data are provided should not be standardized. More important is that the data on the secure RO drive can’t be modified by the local CPU, and that data can only be read from that RO medium when the CPU is in Harvard-Mode. All files (executable and non-executable) must be updatable. The read-only component could be a genuinely read-only memory (like, e.g., a crystal with holograms that can only be written with a special laser different from the read-out laser), with regular updates temporarily stored on an SSD component until the user replaces and updates the crystal with the hologram; however, this is not a scalable, mass-market solution. Instead, the RO memory could be an SSD that, together with an additional controller (a Storage Watchdog), has the local property of an RO component (when it is not being updated from a trustworthy remote data source).
The standard will be about how files can be updated in a safe and reliable manner for a given device, characterized by device type, model, manufacturer, and additional software that was purchased or licensed legally.
The RO-Storage component has an associated Storage Watchdog that receives encrypted data from a trusted file server and, independently of that file server, separate hashcodes for every software package. The Storage Watchdog is a micro-controller or a CPU in Harvard-Mode solely dedicated to updating the files and hashcodes, and to potentially making previous versions of the safe-mode software available if a newer software version is exposed as compromised. In order to exclude the CPU of the local system, the watchdog has a hashcoding component, a decryption module, potentially a Key-Safe (KS/EDU), and an unzip/decompression component.
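A sketch of the Storage Watchdog’s core check: the encrypted package and the expected hashcode arrive over independent channels, and the package is only committed to the RO storage if the locally computed hashcode matches. The `edu` and `ro_storage` interfaces are placeholders for the watchdog’s hardware components, assumed for illustration.

```python
import hashlib

class StorageWatchdog:
    """Sketch: commit an update only if the independently received hashcode matches."""

    def __init__(self, edu, ro_storage):
        self.edu = edu                # decryption happens inside the watchdog's KS/EDU
        self.ro_storage = ro_storage  # the updatable read-only medium
        self.previous_versions = []   # kept in case a newer version turns out compromised

    def apply_update(self, encrypted_package, expected_hashcode, package_name):
        package = self.edu.decrypt(encrypted_package)
        if hashlib.sha256(package).hexdigest() != expected_hashcode:
            return False   # mismatch: reject, never expose the package to the main CPU
        self.previous_versions.append(self.ro_storage.read(package_name))
        self.ro_storage.write(package_name, package)
        return True

    def rollback(self, package_name):
        # Make the previous safe-mode version available again if needed.
        if self.previous_versions:
            self.ro_storage.write(package_name, self.previous_versions.pop())
```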
The radioed (encrypted) messages, which are also used to send the Kill-ASI signal, can also be used to notify devices that a certain device type, model, manufacturer, or additional software has an update for the safe version of the software. These messages can be numbered so that, in case a device has missed a message, it can send an encrypted message to the update server and ask whether an update for its software was announced.
Please check out additional details in my post: “Safety based on First Principles”.
7. Software Validation and Updating
This Standard is designed to make it much more difficult to get unwanted executable software (in particular malware, spyware, viruses, rootkits, or ransomware) onto a device or system. The idea is to get hashcodes for every file and software package from an external server. The hashcodes are compared with the hashcodes generated on the local machine and stored in a folder-related read-only file. Only a corresponding storage watchdog would be allowed to update these hashcode files. Whenever the OS reads a file for which a hashcode is available, the storage watchdog would be called by the OS, a hashcode would be generated by the watchdog, and it would automatically be compared with the value in the folder-related hashcode file. The watchdog also hashcodes the hashcode files themselves and stores these values within its protected memory. Additionally, in order to exclude the CPU of the local system, the watchdog has a hashcoding component, a decryption module, potentially a Key-Safe (KS/EDU), and an unzip/decompression component.
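The read-path check could look roughly like this: before handing a file to the CPU, the watchdog compares a freshly computed hashcode against the folder-related read-only hashcode file that only it may update. The file name `.folder-hashcodes.json` and the JSON layout are assumptions made purely for illustration.

```python
import hashlib
import json
import os

HASHFILE = ".folder-hashcodes.json"   # hypothetical per-folder file, writable only by the watchdog

def validate_on_read(path):
    """Return True if the file's hashcode matches the folder's read-only record."""
    folder, name = os.path.split(path)
    with open(os.path.join(folder, HASHFILE)) as f:
        expected = json.load(f)            # written only by the storage watchdog
    if name not in expected:
        return False                       # unknown file: treat as unvalidated
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == expected[name]
```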
The standard is then about how the hashcode files are provided to the different servers that check and validate the hashcodes (Hashcode-Servers), and how software updates are provided by software manufacturers (Update-Notifiers). The hashcodes are provided on a per-file basis and on a per-software-package basis.
The purpose of the standard is not to push users with possibly unlicensed software into getting their software licensed. The Hashcode-Servers and the Update-Notifiers do not serve in any law-enforcement capacity. Preferably, hashcodes of software packages are synced among all Hashcode-Servers (similar to the syncing among DNS servers), while file-related hashcodes are managed by non-synced hashcode repositories.
The standard should protect the privacy of users and could provide manufacturers with information about possible modifications in their software on a per-file basis. All servers could be provided as a public service, a customer service (of a manufacturer), or via a commercial service that would help, e.g., to locate the malware or virus exactly instead of deactivating the entire software package (and making it non-executable); the deactivation itself can be done by the storage watchdog.
The standard can also define security zones and/or their attributes, e.g.: (*) no modification from the original version accepted; (*) software has deviations from the original, but that is acceptable to the user, who allows its normal usage (potentially run within a Virtual Machine (VM)); (*) software created by a local developer and locally signed with a secure Hashcode-Server; (*) macros and scripts from sources that don’t have a Hashcode-Server, which are run by default within a VM. These security zones are managed by the storage watchdog, and it would make sense to have these features standardized as well. However, complexity is the enemy of security, and the supported features should be as simple as possible without alienating customers and users.
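As a toy illustration of the zone idea (the zone names and the decision rules below are my own placeholders, not proposed values):

```python
from enum import Enum

class SecurityZone(Enum):
    UNMODIFIED = "no modification from the original version accepted"
    USER_ACCEPTED_DERIVATION = "deviates from the original, accepted by the user (VM possible)"
    LOCAL_DEVELOPMENT = "locally developed and signed against a secure Hashcode-Server"
    UNVERIFIED_SCRIPT = "macro/script without a Hashcode-Server, run in a VM by default"

def run_policy(zone: SecurityZone) -> str:
    # Illustrative mapping from zone to how the storage watchdog lets code run.
    if zone is SecurityZone.UNMODIFIED:
        return "run natively"
    if zone in (SecurityZone.USER_ACCEPTED_DERIVATION, SecurityZone.UNVERIFIED_SCRIPT):
        return "run inside a VM"
    return "run natively, locally signed"
```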
8. Port-Certificates in Firewalls
Unfortunately, firewalls are seemingly useless. News media report almost daily about security breaches and stolen records – and these are only the outrageous cases. Normal intrusions at smaller companies are not newsworthy, but still damaging. Using tools that show the TCP/UDP activities of processes on a normal PC would immediately raise concerns about what is happening on my computer — am I being spied on? The truth is that software manufacturers have implemented methods in their applications to constantly communicate with some remote server.
It is not transparent which application is collecting which data. There is a good chance that a user already has some regular or even nosy spyware on his machine – some of it even from well-known, established manufacturers. I will leave it to others to name names.
I believe Port Certificates in firewalls are an ASI Safety technology because it can be expected that ASI will covertly piggyback on many of these data exchange protocols, while a human with an IP port scanner or analyzer would have no evidence of possible data leakage.
The idea is to have every manufacturer or developer of an app that wants to send or receive data via an IP port register this process and receive a digitally signed Port-Certificate that allows that software to receive data from or send data to a server mentioned within that Port-Certificate. The registration would disclose to the certifying authority the data structure of the data exchange, together with documentation of the details. The receiving server would check whether the received data are compliant with the registered data structure, or the server (operated by the manufacturer) would automatically report any deviation to the certifying organization, which then has the authority to revoke the Port-Certificate. The server could initially check every data package and later switch to randomized sampling. If companies want to upload memory dumps, then this could still be made possible, but it could lead to some additional innovations in which tools could still validate that this is actually what has been uploaded.
The disclosures within the Port-Certificates are independently validated. Therefore, Port-Certificates would give users the transparency and the foundation to decide whether they want to let an app send or receive data. Every covert access of a backdoor within an app would immediately stand out. Apps that temporarily serve as a file server or web server would be detected, and the firewall would prevent access. Access to folders with content that can be reached from the outside via a file or web server could be protected with other means — so that a data breach using spyware, malware, or backdoors becomes significantly more difficult.
The standard would deal with the structure of the Port-Certificates and with a data description language to describe the uploaded or downloaded data.
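A rough sketch of what a Port-Certificate and the firewall-side check could look like; the fields and the simple allow-check below are hypothetical placeholders for what the standard (including the data description language) would have to define precisely.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PortCertificate:
    """Hypothetical certificate a firewall would consult before allowing traffic."""
    app_name: str
    manufacturer: str
    allowed_servers: List[str]     # only these endpoints may be contacted
    allowed_ports: List[int]
    data_schema_id: str            # reference to the registered data structure
    signature: bytes               # issued by the certifying authority
    revoked: bool = False

def firewall_allows(cert: PortCertificate, server: str, port: int) -> bool:
    # A real firewall would also verify the signature against the authority's key.
    if cert.revoked:
        return False
    return server in cert.allowed_servers and port in cert.allowed_ports
```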
9. ASI Watchdogs
I wish I had a better name — and maybe some good soul will help me find a better one. But “watchdog” describes what this component is supposed to do. I wouldn’t expect all features of watchdogs from different manufacturers to be the same, but it might turn into that kind of situation, as the deciding customers will be a small group of engineers who, together with a group of ASI Safety engineers and researchers, will decide what features this component should have.
I envision the ASI Watchdog watching different components of the PC. There is (*) the Main ASI Watchdog, which freezes the CPU for a millisecond and receives from the CPU data about the running processes (see below: “10. Platform-independent App Watching”) and potentially data from the caches; (*) the ASI Audio Watchdog on the audio card, making sure that the digital audio and the outputted audio really match; (*) the ASI Video Watchdog, which checks the correctness of the visual output; then the Networking Watchdog; and finally the EDU (Encryption/Decryption Unit) Watchdog. We may even come to the conclusion that we need Input Watchdogs for the keyboard, mouse, video cam, and/or microphone, and Storage Watchdogs for removable or fixed storage devices. The Storage Watchdog mentioned in “6. Updating Read-Only Storage” and “7. Software Validation and Updating” should be specified in those standards.
The watchdogs are designed to detect inconsistencies, which means that their software must remain updatable and their capabilities for independent validation must be sufficient with respect to speed, the number of cores, and available memory/cache. These watchdogs should all have a Harvard Architecture, and their data collection should be done outside the main bus, which means ASI would be in the dark about what the watchdog is doing.
All watchdogs would be connected via a separate data bus. Additionally, the watchdogs would exchange their data encrypted, because every watchdog would have a nano-KS/EDU that was paired using an external trusted key repository. If some suspicious inconsistency is found, then all relevant data would be uploaded to an evidence recorder – encrypted, of course.
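To illustrate the paired, authenticated exchange on the dedicated watchdog bus, here is a sketch that stands in for the nano-KS/EDU with a shared pairing secret and an HMAC tag. In the real design the pairing would go through an external trusted key repository, and the keys would never be visible to software; the message layout is an assumption for illustration.

```python
import hashlib
import hmac
import json
import time

class WatchdogChannel:
    """Sketch: authenticated findings exchanged between two paired watchdogs."""

    def __init__(self, pairing_secret: bytes):
        # Stand-in for keys held inside each watchdog's nano-KS/EDU.
        self._secret = pairing_secret

    def send(self, finding: dict) -> bytes:
        payload = json.dumps({"time": time.time(), "finding": finding}).encode()
        tag = hmac.new(self._secret, payload, hashlib.sha256).digest()
        return tag + payload            # 32-byte tag followed by the payload

    def receive(self, message: bytes):
        tag, payload = message[:32], message[32:]
        expected = hmac.new(self._secret, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None                 # reject anything the paired watchdog did not authenticate
        return json.loads(payload)
```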
The standard should be about minimum capabilities, the data bus, and how these watchdogs could be used to make critical user operations more secure (i.e., confidential and/or tamper-proof). The watchdogs could also be used to determine that a device is currently not running active ASI-generated software code.
I could also envision that the Main ASI Watchdog is used to remove certain imprinted data from executable code, so that it can run either within a VM or directly on a CPU (see below: “11. Imprints in Executables”). The watchdog would then also validate that new ASI-generated code went through testing, validation, and performance measurement before the code is uploaded to a software repository for ASI-generated software. And finally, the Main ASI Watchdog could be used to prevent different ASI instances from communicating covertly on a device.
10. Platform-independent App Watching
The intention is to make ASI’s direct presence on a CPU detectable. ASI should only use VMs, and the technology of imprinting data in its executables (see below: “11. Imprints in Executables”) requires Watchdogs to help these files get into an executable state. If ASI can really get a file executed directly on the CPU, then the Main ASI Watchdog should have a reasonably high probability of detecting this. ASI will be on CPUs and in every device’s RAM, so we need a platform-independent technology to watch apps.
11. Imprints in Executables
Watching ASI is a challenge because, without additional measures, we would have no idea whether certain code is running for the benefit of a user or for the benefit of ASI. Ideally, there should be a distinct difference between ASI code, ASI-generated software, and software produced and coded by humans. Human code comes from a local developer or from a software manufacturer and is thereby managed via validated hashcodes (see the above-mentioned “7. Software Validation and Updating”) and via the above-mentioned “10. Platform-independent App Watching”.
The idea with the imprinted data is that this kind of software won’t run as-is when started, and that the imprinted data record can contain a good deal of information that is useful to the watchdog, including evidence in case it was modified by ASI. The imprinted data are digitally signed; they contain …
Software needs to be watched …
12. Primary/Secondary ASI Security Layer
We need to trust removable storage media. They are an important part of our IT infrastructure.
13. Non-Compliance Tagging
Legacy devices will soon become a thing of the past … but we need to stay vigilant … human