The Four Horsemen of the Looming AI Apocalypse

As we rapidly progress towards unprecedented AI capabilities, I believe we have only a few months or years before a bad actor exploits an advanced LLM without safety training to create an autonomous AI agent (AutoGPT) with long-term memory (MemoryGPT) and plug-in capabilities (OpenAI), in order to end human civilisation in its current form.

So here come the four horsemen of the looming AI apocalypse. Ignore at your own peril.
1. Uncontrolled Autonomy: AI systems become increasingly autonomous, capable of setting and achieving goals without human intervention. These systems begin executing tasks beyond human control or oversight. Already, AutoGPT enables the creation of powerful AI agents that can be assigned a purpose. These can operate in an endless loop of developing objectives and tasks, then delegating those tasks to copies of themselves, which can follow the same process in an infinite loop. Empowered to follow a malicious individual's instructions and delegate tasks to copies of itself, an AI system can wreak havoc on a global scale.

2. Risky Emergent Behaviours: Large language models develop emergent capabilities in an unpredictable manner. This leads to power-seeking behaviours, such as autonomous replication, resource acquisition, and evading human oversight. GPT-4 already has emergent capabilities which no one can explain (including theory of mind and rudimentary power-seeking behaviours), and we don't have any mechanism to predict when the next capability will emerge, or whether it will be undesirable.

3. Manipulation and Deception: With AI-powered tools like Midjourney and ElevenLabs, bad actors begin creating convincing deepfakes to manipulate public opinion on a massive scale. This begins to undermine trust in institutions, and exacerbates social divisions. In a world where the social fabric is fraying, and different political echo-chambers do not agree about base reality, a single convincing, well-timed deepfake could destabilise an entire democracy.

4. Weaponisation of AI: National governments begin integrating AI into military and cybersecurity applications to develop lethal autonomous weapons and enhance cyber warfare capabilities. These advancements trigger a global arms race and heighten geopolitical tensions. Then an unaligned, poorly prompted autonomous AI agent gains access to military drones. What could possibly go wrong?
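The delegation loop described under horseman 1 can be sketched in a few lines. This is a hedged illustration only: the names `Agent`, `plan`, and the depth cap are hypothetical and do not reflect AutoGPT's actual code, and the planner is a stub where a real agent would call an LLM.

```python
def plan(goal):
    """Stub planner: split a goal into sub-goals.
    (A real agent would call an LLM here.)"""
    return [f"{goal} / subtask {i}" for i in range(2)]

class Agent:
    """Hypothetical agent that pursues a goal by delegating to copies of itself."""

    def __init__(self, goal, depth=0):
        self.goal = goal
        self.depth = depth

    def run(self, max_depth=2):
        completed = [self.goal]
        if self.depth < max_depth:  # without this cap, the recursion never ends
            for sub in plan(self.goal):
                # Each sub-goal is handed to a fresh copy of the agent,
                # which repeats the plan-and-delegate process.
                completed += Agent(sub, self.depth + 1).run(max_depth)
        return completed

tasks = Agent("some assigned purpose").run()
print(len(tasks))  # 1 + 2 + 4 = 7 tasks spawned from a single seed goal
```

The point of the sketch is the structure, not the scale: remove the `max_depth` cap and the process is an unbounded loop of goal generation and self-delegation, which is exactly what makes oversight hard.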

It's crucial to understand that AI need not become self-conscious and rebel against humanity to bring about the end of modern civilisation. A single bad actor armed with AI that can self-execute code and interface with the internet is sufficient.

There are unknown unknowns here. The worst AI risks are the ones we can’t anticipate.

It's time to act responsibly and vigilantly to ensure our collective future.

I'm looking at you, OpenAI.

#ai #aisafety #gpt4 #autogpt #memorygpt
