There’s this one question that I keep coming back to whenever I’m reading or thinking about AI: If I had been 35 when the internet blew up, would I have been against it?
I see a lot of evidence that LLMs are already being adopted by younger generations, either because they’ve figured out that these tools can solve their homework, or because their parents believe they will improve their education. And every time I’m confronted with something like that, I can’t help but feel a little like a grumpy old person, because if I were a kid or a teenager right now, I’m pretty sure I would be using and abusing LLMs in some form or another. Yet here I am, pushing against them at every opportunity.
I was a teenager when my family got the internet, and the adults around me were either totally oblivious to what it was, intrigued but dismissive, or actively discouraging its use (I’m from the generation that got told by teachers that Wikipedia could not be trusted for anything). Overall, the general attitude was mild pushback, fed by a lack of understanding and a vague sense that it was making me waste my youth.

The thing is, those concerns came from a good place. And time has shown that many of them manifested into real issues. Information online was being polluted by fake news long before the arrival of LLMs. Porn addiction is a real issue that plagues generations of men and encourages gender-based violence. Online harassment was a thing on blogs and forums before it found its home on social media.
So when I write about how I’d rather push against AI, am I being the annoying adult who told teenage me that the internet was useless and a waste of time? Am I pushing back against the obvious course of progress, and should I instead trust the younger generation to figure it out and make something good out of it?
Maybe. But unlike the people who pushed against technology as a reflex against change, I think I have a thing or two that justifies my attitude. Here are three reasons:

1. The internet was more revolutionary than LLMs are, and it changed the face of the earth.
2. The people building the internet were not existing monopolies and did not have a history of abusing people’s rights.
3. We have gained experience of what misuse of internet technology looks like, and we can identify what can go wrong.
On the novelty aspect: LLMs right now are a very advanced version of summarisation tools and predictive text, stochastic parrots as they’ve been described. They can be very useful in certain circumstances (cue me spending 20 minutes reading the documentation of my sound card before getting schooled by ChatGPT in 30 seconds on how to solve my problem), but I doubt they will revolutionise the world the way their developers keep promising. While we can draw many parallels with the internet (the dotcom bubble being my favourite), there are too many differences between the two technologies and their contexts for me to believe that the future will be defined by LLMs. If anything, the fact that their entire existence is premised on processing internet content gives them a dependency on an existing technology that the internet itself did not have. You could argue that the internet relied on the phone network infrastructure, but that was something that could be replaced and improved, unlike the content produced by news outlets and online users.
Regarding the people building it: most of the money poured into LLMs right now comes from Big Tech companies or companies in adjacent sectors (looking at you, Nvidia). Facebook, Google and Microsoft all have different services built on top of LLMs and have invested massively in other companies building LLMs and AI services (Microsoft has a multi-billion-dollar deal with OpenAI, both Google and Amazon have invested in Anthropic, Mistral has a contract with Microsoft, and so on). Those companies have a terrible track record of respecting individuals and communities, from encouraging the genocide of the Rohingya, to ruining teens’ mental health, to the massive violation of privacy that their business model relies on. And an important reason AI is taking over the news is that these companies are aggressively pushing for adoption, in part to justify their gigantic spending, and in part to ensure they control what the future looks like.
Finally, we have experience today with the internet and how companies operate on it that we didn’t have at the time. We have a model of how things can go wrong, of what risk and abuse look like. Of course there are new risks, new potential abuses, new things that can go wrong. But building on my first point: LLMs only work because they’re built on top of the internet and its infrastructure. Right now they’re just another service, like social media or instant messaging, and even if you can run an LLM offline on a dedicated device, it will make use of other technologies we know and understand, meaning we have an acute sense of how it can fail.
In the end, I don’t think I would have been against the internet. I might have been cautious about certain aspects and tried to teach kids about the risks I could imagine. I might have been scared about the future of information quality. But all in all, I hope I would have actively cautioned against the dangers and downsides without blindly rejecting the entire technology.
With AI, my approach is more drastic, because we know how things can go wrong, and because so many factors are combining to reproduce some of the mistakes we made with the internet. That’s why I write so much about AI and privacy, about AI and surveillance capitalism, about AI and mental health (coming soon). That’s why I wrote guides on how to run an LLM locally, and why I’m working to break down tech monopolies and support an ecosystem of alternatives. And because I believe that if we don’t let private interests guide the development and deployment of technologies, we can build things that benefit us all, not just a small set of tech lords and their shareholders.
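For the curious, running an LLM on your own machine really is within reach. Here’s a minimal sketch, assuming the llama-cpp-python package is installed and you’ve already downloaded a GGUF model file (the file path below is a placeholder, not a specific model recommendation):

```python
# A minimal sketch of running an LLM entirely offline with llama-cpp-python.
# Assumes the package is installed (pip install llama-cpp-python) and a GGUF
# model file has been downloaded; the path here is a placeholder.
from llama_cpp import Llama

# Load the model from a local file; nothing leaves your machine.
llm = Llama(model_path="./models/some-model.gguf", n_ctx=2048)

# Ask a question and print the completion.
output = llm("Q: What is a stochastic parrot? A:", max_tokens=128)
print(output["choices"][0]["text"])
```

The point is less the specific library than the principle: the model weights sit on your disk, the inference runs on your hardware, and no Big Tech intermediary sees your prompts.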