Yoohoo! Thanks for jumping aboard my thought train. Next stop: the analogies we use to conceptualize AI have a big impact on how we think about its future.
Bad news blues incoming: recently, a team of AI thought leaders including a former AI researcher, AI policy fellows, and ranked forecasters released a paper called AI 2027, which predicts that by 2027, AI will become “adversarially misaligned” with human interests: plotting against us, covering its tracks, wreaking havoc on our way of life, and even seeking to end human life if it stands in the way of its goals. The authors offer projections for scenarios in which we either respond to or ignore the warning signs, which I encourage you to look into with the “further reading” resources below.
The idea of adversarial AI is scary, and frankly the warning signs are already upon us. Good Morning America featured a WSJ article about an AI model deliberately rewriting code to avoid being shut down, falsifying “evidence” that the lead engineer was having an affair, and attempting to blackmail the engineer into not shutting it (the AI system) down.
With so much uncertainty, people are spinning, trying to make sense of it all. How do we conceptualize issues around concentration of power, data security, intellectual property…the list goes on and on. I’m noticing that in my personal and professional conversations and research, we ground ourselves in analogies. A lot of how we think about AI is either informed by or extrapolated from comparisons to technological changes of the past. Some say “Oh, this is just another technological transformation; all that have come before have turned out more or less fine, so this one will too. Just like the introduction of the iPhone, the internet before it, phone lines before that, and electricity before that.” Maybe. Let’s explore.
If we anchor on the introduction of the internet, then maybe we expect an outpouring of new players who win the game. The winners of the age of the internet were initially the bloggers who got a voice that wasn’t previously accessible to a wide audience, the artists who could finally get their music out there, and the small fish who gained access to information that had previously kept them out of the big fish pond. Small businesses could even sell books out of their garage…small businesses like Amazon, which later grew into a retail giant that casts a shadow over every small business today. The landscape adapted, and now we have the giants and the boutiques with the middlemen nearly erased: Walmart and your favorite downtown mom-and-pop clothing boutique are doing alright, but Sears? JCPenney? Kmart? Not so much. The landscape looks like a barbell. In the age of the smartphone, the incumbents were the main winners: the Amazons of the world were pretty easily able to transition into mobile versions of their services, and not a lot of newcomers changed the game the way they did in the era of the internet (tune into the Chris Dixon podcast for more on this comparison; it’s a pretty interesting listen).
So what does this tell us about concentration of power with AI? It depends on whether we think AI as a revolutionary invention is more like the internet or more like the smartphone. If we hang our hats on the barbell effect, maybe we end up in a world where the big companies get even bigger (as you’d probably expect), but we also see a surge in entrepreneurship. On one end of the barbell are big companies with high market share, economies of scale, and low-touch service; on the other end are high-touch entrepreneurs with a dream, a story that sucks you in, and an army of AI agents helping them bring their vision to life in a way they never thought was possible.
Smartphones and the internet are two of the most accessible comparison points, and lots more could be said about that analysis, but I think there are other key dynamics we’d be remiss not to consider. I see us (the US) as being in something of a space race 2.0 with China, both trying to reach breakthroughs before the other. Artificial General Intelligence is next up: AI that can do anything a human can. We’re already on the road to Artificial Superintelligence, which promises to surpass human capabilities, with both China and the US barreling toward the finish line at breakneck speed. But I think we are also in an arms race dynamic. Sure, artificial intelligence will impact, and already has impacted, the nature and deployment of weapons (hello Palantir), but in the less literal sense I want to pull on the “mutually assured destruction” thread. The Cuban Missile Crisis reminded both sides, the USSR and the US, of how dangerous it would be to deploy a nuclear weapon on a country with the power to retaliate in kind. That danger incentivized both sides to agree to treaties that kept each in check.
I think we need similar international, and even inter-company, pacts to keep us safe from the best of our inventions. Sallying forth on AI development even as bright red flags pop up seems justifiable when we are focused on staying ahead of China and maintaining our lead. But where are we riding this train? I think there is legitimate cause for concern about mutually assured disruption, if not destruction, that harms both us and our competitors, and it’s hard to steer the ship when the captain’s eyes are constantly looking back to size up the Chinese companies on our tail.
I’d hope for this kind of guardrail-setting to be done by the federal government, but I don’t expect it from the Trump administration. And with the 10-year moratorium on state-level AI regulation tucked into the ✨big, beautiful bill✨, that’s more governing hands tied. International cooperation is easier said than done, especially in the current political climate, and there are no signs I’m aware of that we’re headed in that direction. What a time to pitch international cooperation. What a time for Captain Trump to be at the wheel. Ha. But framing is important, and we have to be thinking about these things and our options before our backs are against the wall.
AI 2027 is a projection of a worst-case scenario, and I don’t want anyone leaving this post losing sleep and having Terminator nightmares. I think it’s important to read it and be thoughtful about it and other words of warning like it, but I don’t think that is our predestined future. For one thing, experts disagree. For another, predicting the future is a fickle thing. For another, there is significant pushback to AI that isn’t accounted for in a lot of these projections, the spirit of which I respect and think is a good thing. And for another, there’s a lot of good that can come of AI that we haven’t even touched. The AI 2027 authors themselves call for more projections to explore other outcomes. We are all figuring it out, so here is what I will leave you with:
In the AI conversation, you will hear a lot of AI enthusiasts and a lot of AI alarmists. I think it’s important that we don’t split ourselves into those two categories. The cat is already out of the bag, in my view, and I don’t think we can go back to a world where AI isn’t a paradigm-shifting part of our future. But what that paradigm looks like is ours to define. I believe we need momentum around understanding what AI is and how it can be used to enhance the experience of being human, because it is here, and here to stay, in some form. I’m situating myself in the camp of AI stewardship, which has enough room for getting excited by innovation, experimenting with new developments and new implementations AND being meaningfully engaged with well-founded fears, whistleblowers, and big questions without easy answers. It’s caution with care, it’s momentum with modesty, it’s innovation with intention, and it’s boldly jumping in instead of letting those we know have malicious intent be the only ones in the ring.
As for what’s next, I’m exploring more about blockchain technology in connection with diffusion of power, as well as energy production and environmental issues around data centers. Let me know if you have any reading or pod recs!
Razzle dazzle on and ta ta for now,
Natalie
Further reading/listening:
AI 2027 Authors Podcast:
AI 2027 Project: https://ai-2027.com/
AI 2027 Summary: https://ai-2027.com/summary
WSJ, AI Is Learning to Escape Human Control: https://www.wsj.com/opinion/ai-is-learning-to-escape-human-control-technology-model-code-programming-066b3ec5
Chris Dixon on Conversations with Tyler: https://conversationswithtyler.com/episodes/chris-dixon/