From the first wave of hype in 2022 and the scramble among Big Tech companies to catch up, AI felt like BS to me. It was the latest child of a tech industry trying to justify its existence and attract investment. Blockchain, VR and the metaverse didn’t catch on, so AI would have to do.
It also felt like yet another blow to privacy and security, a boon waiting to be weaponised by tech companies and governments alike at the first opportunity. I don’t trust Big Tech with my data, so why would I give an AI a deeper insight into my life by sharing personal details with it, whether in a conversation or programmatically?
Yet within the first year, trying to disentangle the marketing hype from the actual potential of the tech, I saw interesting use cases and smart people applying it in creative ways. Today, I find myself using and testing it more and more: coding, when I don’t want to invest the time in learning something new but would rather focus on building. Searching for the right term for a concept or object. Testing ideas and forcing myself to explore different views. Could it be that it’s not all BS after all? Maybe this technology can actually be useful, save time and enable certain things that were previously impossible? Unsure, but like any decent technology, it seems that while it doesn’t solve everything, it may be able to make life a bit easier for certain things.
Yet when I have to voice an opinion, it is still one of concern, pushing against deployment and adoption. With the shape of digital markets today, AI-powered products entrench the dominant position of Big Tech companies that already have far too much power. It normalises invasive (and copyright-infringing) data collection and incentivises people to share intimate details about their lives that can be used against them, as seen with other technologies. It consumes insane amounts of energy that only worsen the climate crisis that tech oligarchs love to say they will solve. It exploits low-wage workers to annotate data and make the technology feel like magic. LLMs basically have an infinite attack surface. And the list goes on. So no, I don’t think you should use AI. Not if you haven’t carefully thought about who will access your data, how much energy will be consumed when you use it, how the model was trained, and how trustworthy the content it generates will be.
The reason I’m writing this is to reflect on our attitude towards innovation, particularly tech innovation. One of the core arguments of many in the tech sector is that “regulation kills innovation”. Having researched and uncovered how technology can dramatically impact individuals and societies, I am convinced that a cautious approach is mandatory, particularly with potentially paradigm-changing technologies like AI. This is not about trust in the industry; it’s about caring for others and the negative impact the technology could have on them, rather than focusing only on its potential benefits, financial and societal. Yes, I want progress, innovation, and cool tech. No, I don’t want it to come at the cost of my privacy and security, other people’s health, or insane energy consumption.
So even if AI goes on to become an absolutely revolutionary technology that solves the environmental crisis, enables us to talk with dolphins and allows everybody on the planet to stop working and pursue their passion, I won’t be ashamed to have pushed against it. Because at one point in its development, people were subjected to terrible working conditions when labelling datasets. Individuals were given false information, putting them at risk. And the threat to our security and privacy was real.