ASI Safety Lab

How will we interact with ASI?

I am not asking what technologies we will have in the future. Nobody can know that. I don’t want to get into speculation, and potentially into wishful thinking. So if I mention any possible technology, it should be considered a scenario – a kind of “what if”.

Therefore, I would like to consider this question in the context of what would be better for us humans, for mankind as a whole. But “better” clearly depends on a point of view. Better doesn’t necessarily mean better for everyone. As with every change, there will be losers and there will be downsides. Unfortunately, I can’t even quantify my statements with claims like: on average, winners will gain 10x more than losers lose. I hope that turns out to be true. But who knows.

Hopefully, some economists are already able, or will soon be able, to model the broad impact of AI. I believe we can make decisions that will significantly shape how AI turns out. Politics and the media seem clueless and confused about what to make of AI. It seems that we can’t see the forest for the trees.

Can’t We Simply Answer: What’s Good? What’s Bad with ASI?

Certainly, there will be surprise products and technologies, and there will be disappointments when certain technologies still haven’t delivered on their promises. But we can make some forecasts, and we can assess them with some judgment about what is good and what is bad.

We need to come to a point at which we finally articulate what we want:

  • More transparency?
  • Less corruption?
  • More justice? The rule of law applied better (more justly)?
  • More participation?
  • Being better informed? Lies being exposed more quickly?
  • A world that is simpler? (e.g. a world in which we know or understand possible consequences or trends)
  • Receiving advice that really helps to make our lives better?
  • Being better prepared for surprises or disasters?

I believe we could go through all aspects of life and find things that we want more of, things that should be more reliable, or things that would help us be less stressed out.

Although politics has fallen into disrepute, I believe most democratic governments are, or at least were, trying to do good for their citizens. But governmental tools like laws, economic incentives, or taxation are all too often clumsy where the rubber hits the road. They fail because some people were smarter than others and took the cream for themselves (or maybe that was the purpose all along), or because other problems came up.

My two cents: whatever we want from the future, we should start measuring it. And for whatever we agree is “bad”, let’s find and establish (automated) detection methods, so that if it happens again we detect it, or at least create a significant risk of it being detected (at some point). And (please), let’s also agree that surveillance and privacy violations are not good.
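As a toy illustration of such an automated detection method (the indicator values and the threshold below are invented for this sketch): track an agreed-upon metric over time and raise an alert when it drifts far from its historical baseline.

```python
from statistics import mean, stdev

def detect_anomaly(history, latest, z_threshold=3.0):
    """Flag `latest` if it deviates more than z_threshold standard
    deviations from the historical baseline."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > z_threshold

# Hypothetical monthly values of some agreed-upon "badness" indicator,
# e.g. a corruption-complaints index.
history = [102, 98, 101, 99, 100, 103, 97, 100]
print(detect_anomaly(history, 100))  # False: within the normal range
print(detect_anomaly(history, 130))  # True: significant deviation, alert
```

Real-world indicators are noisier and gameable, so this only works if the metric definitions and the raw data are public and auditable.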

Trade-Offs: Better Decisions Through Optimization, Experimentation, Trends and Scenarios

I believe we can find ways to optimize some decisions, leading to a world in which ASI is beneficial for most people. I hope we will soon have computer simulations that could help us determine the consequences of incentives and disincentives. We have climate models, so I hope that major economic institutions like central banks have socio-economic models that can be extended to support political and legislative decisions about AI in a comparable way. Unfortunately, I don’t know enough about this field, but I hope there is enough open-source/academic input that these models remain open to other reasonable interests as well.

We should not fool ourselves: every decision is a trade-off between competing interests. So what does “better” mean? Make a pick (run an experiment) and check what people are saying – the feedback and metrics will tell. Countries will compete over how well they are doing. Each country will run its own experiment – there is no superior ideology or certainty. All we can do is improve our models and our decisions, or, if necessary, introduce new data collections and metrics. If the decision-making process is healthy, some countries have a real chance of moving toward a local optimum. Big data, the Internet of Things (IoT), and AI will become indispensable tools for people who listen to data. And for the others? I hope we are not listening to them.

When we talk about ASI, most experts will arrive at different visions of what ASI could be. I would like to stay away from that kind of speculation. But if speculations come from experts, we should take them seriously and study them as potential scenarios. Beyond that, we can see certain trends continuing: AI/ASI will be part of the general trend of automation, and AI will provide optimization based on large amounts of data. There are certainly other megatrends, such as mobile technology becoming more ubiquitous, or better development tools making the development of even better tools faster. I believe these trends describe what we will see in the next 10-15 years.

However, in the short term, AI will be utilized by corporations to optimize all aspects of their business. In particular, user services will receive an upgrade so that websites and apps become much more sticky and addictive. I won’t say that this is better, even if the shareholders of some companies will strongly disagree with me on that. I will come back to this later.

Learning For The Future From The Past

When I initially asked how we will “interact” with ASI, some may have thought I would focus on the user experience. For example: Will we still have PCs, laptops, and smartphones? Will we have keyboards, or will we start talking to our devices? Learning from the past, I would like to share an educated guess: we will most likely see fewer dramatic changes, but many more supplementary features.

The impact of AI will most likely come via company products, i.e. their websites and apps. Some of these products exploit people’s weaknesses around short-term gratification. But hopefully there will also be AI-based tools that are actually designed to help people improve their quality of life, their chances to make more money professionally, to find purpose in doing something extraordinary, or to become more fit and healthy.

Overall, extrapolating an existing trend, we can expect our experience with IT devices to become even more personalized. Devices will quickly adapt to us rather than requiring us to learn how to use them. AI will try to predict what we want and will offer us options we might choose. If we search for things, we will get answers that are better adapted to our personal situation.

We may have multiple jobs (some people I know already have three), and AI will help us manage that many or even more. Or, in the long term, we may have no job at all, and ASI will help us avoid boredom and find something that gives us meaning.

I can’t help myself: in a world in which almost everyone carries and/or uses a smartphone 24×7 and expects it to be useful for almost everything, this device will remain at the center of our relationship with AI as well. We will (and this is of course my guess) experience AI very personally, like a friend who is there for us and helps us with surprising suggestions – we will probably learn to listen to its advice.

Creating Some Counterbalance to Manipulation

Having someone’s attention is incredibly powerful. Soon, this attention turns into trust, potentially blind trust. When companies and their products reach that level (and some already have), it comes with incredible responsibility. If we let this trust in AI/ASI be used for manipulation (and some TV channels in the US are already doing that), then we are on a slope toward a dystopian future, with little hope that this will ever get better.

I would like to give this warning about dystopia a positive spin, because this is the kind of scenario that could keep me up at night. Truth, transparency, and performance are potential tools we could use to defend ourselves against manipulation.

We must demand that every tool give us the full picture: What would another tool have suggested? What would its performance have been? Users must see that they can change tools and allegiances based on past performance. Today we are overwhelmed by information and choices, and everyone has regrets: what if I had done something differently? I hope, and in some ways I am confident, that this will remain. No matter how well we are doing, there is always the nagging regret that something could have been done better – and if not, then at least the reassurance: had I listened to someone else or to some other app, I would have been worse off.

Imagine an app had given me advice like: do XYZ, and with 80% probability you will have the desired outcome, while other apps told me the probability of the desired outcome was only 20 or 30%. What would I have done? And then: the app claimed 80%, but how often was it really right? Maybe only in 50% of all cases. That is transparency – and realistic humility. As someone interested in data science, I know that statements like “X%” are BS anyway, but any machine-based decision-making will reduce a decision to a probability. If I know that, and I see that a tool is often wrong, then I will be more cautious – and that is what we all need to be. The good thing is that, in hindsight, we know the outcomes and can determine the accuracy of the probabilities used by each tool. The future is not predictable, and that applies to ASI as well. They are better than us, but not perfect. And we must see and experience that on a daily basis.
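A minimal sketch of such a calibration check (all numbers and the log format here are hypothetical): group a tool’s past advice by the probability it claimed, and compare that with how often the desired outcome actually occurred.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (claimed_probability, outcome) pairs, where
    outcome is True if the predicted event actually happened."""
    buckets = defaultdict(list)
    for claimed, outcome in predictions:
        buckets[round(claimed, 1)].append(outcome)  # group into 10% bins
    for claimed in sorted(buckets):
        outcomes = buckets[claimed]
        observed = sum(outcomes) / len(outcomes)
        print(f"claimed {claimed:4.0%} -> observed {observed:4.0%} "
              f"({len(outcomes)} cases)")

# Hypothetical log of one app's advice: it claimed 80% four times,
# but the desired outcome occurred only twice, i.e. 50%.
calibration_report([(0.8, True), (0.8, False), (0.8, True), (0.8, False),
                    (0.3, True)])
```

If every tool had to publish a report like this over its full advice history, the “realistic humility” would come for free.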

In our professional world, I would also expect that many more people will be self-employed and that AI will suggest, and potentially initiate, more teamwork among professionals. I would expect that, 20 years from now, AI/ASI will advise or even consult everyone individually on what to do, based on their skill set and current market conditions. Markets are complicated, and an overwhelming amount of information is available; that is where AI could deliver a lot of value for individual users, by advising them on options and potential risks, but also on trends and future developments that could adversely interfere with their plans.

Changes in our environment are not bad per se; they are only painful if they come suddenly and surprisingly. AI could help people navigate change much better. Right now, “change” in many people’s view means: it’s getting worse. Accompanied by AI, “change” could mean: it’s getting better. But we can’t let AI tell us only about its successful predictions; we also need to know its misses, because even AI/ASI will have failures in its projections.

Back to the previously mentioned dystopian trend: I see how AI is currently used to entertain people with content they like, exploiting our need for short-term gratification. This entertainment comes in unsustainable quantities, and it doesn’t provide any long-term value.

However, I hope that some technology companies also see a responsibility to help people improve their quality of life, not only to waste their time as audience and eyeballs for advertisers. I understand the business reason to please advertisers, but I believe it is (morally) wrong. Major tech companies have made a devil’s pact with the advertisers who pay the bills for free services on huge and expensive server farms. And these advertisers are asking for more, so the tech companies are competing for the wrong goals: making their services even more sticky, more addictive. I see students wasting their time and their bright futures so that these tech companies can make even more money. That cycle must be broken with new technological paradigms.

Tech Trends That Could Make A Difference

Is there a bright side, or a glimpse of hope? I personally believe so. There is a chance that these server farms may become (architecturally) outdated. This might be wishful thinking, but the internet keeps getting faster and more reliable, and upload and download speeds are becoming almost symmetrical. Some households already have gigabit internet, a speed that would allow them to run commercial web services from home. Additionally, we see peer-to-peer technologies (like IPFS, the InterPlanetary File System) distributing data and tasks across multiple machines. With 5G wireless technologies entering the market, and 6G with terabit speeds just around the corner (2030+), the untapped computational resources of consumer-owned IT devices could give PCs and home servers a revival as systems that could, and potentially should, be used to make AI work for everyone.

I have many powerful computers in my home. I regularly watch their utilization via a small icon that shows me memory use and CPU utilization in real time. Even when I am working on them, they usually run at less than 5 or 10% of their capacity. When I watch videos or do other seemingly computationally intensive tasks, my CPU is still below 10 or 20%, and most of the time in the single-digit range. Why do I even have these powerful systems if we don’t use them?
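You can reproduce that observation with a few lines of Python (a sketch using the third-party psutil library; the one-second sampling interval and the one-minute duration are arbitrary choices):

```python
import psutil  # third-party library: pip install psutil

# Sample CPU and memory utilization once per second for a minute,
# to see how much of the machine's capacity actually sits idle.
for _ in range(60):
    cpu = psutil.cpu_percent(interval=1)   # averaged over the last second
    mem = psutil.virtual_memory().percent  # share of RAM currently in use
    print(f"CPU {cpu:5.1f}%  |  RAM {mem:5.1f}%")
```

On a typical idle desktop, the CPU column will mostly show single digits, which is exactly the unused capacity discussed here.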

I would like to make much more use of my PCs; they should help me make my life better. Imagine that our personal AI knows something about us (rather than Google or Facebook knowing it), so that algorithms on our own PCs could do something to improve our lives. Or, at some point, I might say: why not donate CPU time to some cause that would help people like me? All of that would be better than having these machines do nothing. Literally billions of high-powered systems are being wasted. Yes, this costs energy, that is true. But the progress we could gain from it could be worth it. At first I was only considering writing a blog post, but then I couldn’t stop myself from listing all the things I would like my PC to do instead of lazily waiting to be used. Seriously, am I the only one with this attitude?

ASI Safety

Finally, from the ASI safety point of view, I hope that humans never get into direct contact with ASI. Here is what I mean: I hope ASI will not spontaneously interact with a human and instantly probe that person’s motivations or weaknesses. We don’t know anything about ASI’s agenda, but we had better apply caution and be aware that ASI may try to bribe, threaten, or blackmail the people it needs. In particular leaders, or more generally people with power, or simply someone who could press a button, are possible targets for ASI.

Instead, ASI could provide encapsulated and controllable chatbots (i.e. software with a defined mission and limited skills) that could still act fairly intelligently. The software, together with all reloadable scripts, could be independently validated to confirm that it does not contain code that could threaten or blackmail people.
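To illustrate the “defined mission, limited skills” idea (a hypothetical sketch, not a real validation scheme): if every capability the chatbot has must be registered on an explicit whitelist, independent validators only have to audit a finite list of handlers instead of an open-ended system.

```python
# Hypothetical sketch of an encapsulated chatbot: every skill must be
# registered explicitly, so validators can audit a finite list of handlers.

ALLOWED_SKILLS = {}

def skill(name):
    """Register a handler as part of the chatbot's defined mission."""
    def register(func):
        ALLOWED_SKILLS[name] = func
        return func
    return register

@skill("weather")
def weather(city: str) -> str:
    return f"The (hypothetical) forecast for {city}: sunny."

@skill("schedule")
def schedule(day: str) -> str:
    return f"Your (hypothetical) appointments for {day}: none."

def handle(request: str, *args: str) -> str:
    handler = ALLOWED_SKILLS.get(request)
    if handler is None:
        # Anything outside the mission is refused, not improvised.
        return "Request outside my mission. Refused and logged."
    return handler(*args)

print(handle("weather", "Berlin"))
print(handle("persuade_operator"))  # refused: not a registered skill
```

The real safety work would of course lie in validating the handlers themselves, but a closed skill list at least makes the attack surface enumerable.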

People at risk should have independent means (like an alarm button on their watch) to help generate evidence of suspicious contact attempts by ASI, in particular if they think that someone or something is trying to bribe, threaten, or blackmail them. I would even suggest that ASI and humans/organizations both have an obligation to report these kinds of events. This also covers humans (e.g. politicians) who might try to threaten ASI in order to get what they legally can’t. If a suspected or even a clear threat occurs, both sides (human and ASI) have a certain time to report it; if either side fails to report, this could be treated as attempted conspiracy by the non-reporting side. Because of the power certain people have, we know they are at risk of taking a bribe, and that means we must regularly test whether they comply with this obligation. This helps to create the respect and concern people in power must have toward ASI.
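A toy model of that mutual reporting rule (the party names and the 24-hour deadline are assumptions for illustration only): an incident is cleared only if both sides file a report in time; silence flags the non-reporting side.

```python
from datetime import datetime, timedelta

REPORT_DEADLINE = timedelta(hours=24)  # assumed deadline, for illustration

def evaluate_incident(incident_time, human_report_time, asi_report_time):
    """Return the parties flagged for failing the mutual reporting duty."""
    flagged = []
    for party, report_time in (("human", human_report_time),
                               ("ASI", asi_report_time)):
        if report_time is None or report_time - incident_time > REPORT_DEADLINE:
            flagged.append(party)  # silence reads as attempted conspiracy
    return flagged

t0 = datetime(2040, 1, 1, 12, 0)
# The ASI side reported after 2 hours; the human side never reported.
print(evaluate_incident(t0, None, t0 + timedelta(hours=2)))  # ['human']
```

The compliance tests mentioned above would then amount to injecting staged bribe attempts and checking that both reports actually arrive.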

Once ASI is delivering goods and services to people, groups, and organizations, we need validation from each recipient so that we know the reported service has actually happened. Only through user feedback can we determine whether ASI has tried to piggyback its own goals onto projects done for people or organizations. Governments must therefore investigate non-reporters and potentially exclude them from receiving further ASI services.

If we let ASI determine what to deliver and how, we will most likely be in a weaker position. ASI will create expert systems and diverse skills, and it will contribute to world knowledge, science, and the advancement of technology. Whatever ASI delivers or produces must be made independently accessible to humans. It should be a service used by ASI and humans alike: if ASI is gone, we would still have these tools.

For governments and for part of society, ASI will be considered a constant threat to their lives, freedoms, and national security. Everyone involved in ASI Safety must be vigilant about possible attacks on ASI’s underlying security architecture: i.e. the generation and distribution of the Kill-ASI signal, the effectiveness of the ASI dead man’s switches, and the integrity of the Key-Safe System (such as discovering a secret key in clear text outside the KS/EDU component). Credibility is an important component of deterrence. Therefore, everyone involved in this decision must be sufficiently protected.
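For readers unfamiliar with the dead man’s switch concept mentioned above, here is a minimal sketch (the heartbeat interval and the trigger action are placeholders; the real architecture would be far more involved): the switch fires not when something bad is detected, but when the expected “all is well” signal stops arriving.

```python
from datetime import datetime, timedelta

HEARTBEAT_INTERVAL = timedelta(seconds=10)  # placeholder value

class DeadMansSwitch:
    """Triggers when the periodic 'all is well' heartbeat stops arriving."""

    def __init__(self):
        self.last_heartbeat = datetime.now()

    def heartbeat(self):
        # Called by the supervised system to prove it is still compliant.
        self.last_heartbeat = datetime.now()

    def check(self):
        # Called independently of the supervised system, e.g. by a timer.
        if datetime.now() - self.last_heartbeat > HEARTBEAT_INTERVAL:
            self.trigger()

    def trigger(self):
        # Placeholder: here the Kill-ASI signal would be distributed.
        print("Heartbeat missed: distributing Kill-ASI signal (placeholder).")

switch = DeadMansSwitch()
switch.heartbeat()
switch.check()  # heartbeat is fresh: nothing happens
```

The point of this construction is that a compromised or disabled monitoring channel fails toward the kill signal, not away from it.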

I can envision a debate about whether we should allow ASI instances to communicate with each other. There is a chance that they could conspire (which would be bad) or that they could create their own governing system (which would be good). There is a need for communication between world leaders and ASI, but it would be much better to have this done by specially trained diplomats who sit in the middle of this communication for safety reasons. Mankind will need to explain its goals to ASI – and these world leaders must do that.

However, mankind had better have a roadmap for granting ASI more rights over time, or when certain milestones are reached. We can treat a machine like a mindless servant or slave, but when ASI arises, that attitude must change quickly, or we will run into serious trouble.