Boundaries and AI hype
Here I am writing another ranty draft about AI. Perhaps this time I’ll make it public.
After a tense and frustrating meeting today to discuss the AI/agentic coding future of the engineering department, I’m again thinking about my boundaries as a professional.
During this meeting, the focus of the proposed document was almost entirely on productivity. I can understand, to some extent, that the business has to focus on productivity to stay afloat and competitive. But when I asked how engineers’ preferences, their careers, and the development of their skills (for example software design, a skill you’re unlikely to develop by only reading and reviewing generated code) fit into this proposal, there was little meaningful consideration. At least in the way the proposal was phrased, and I suspect this mirrors the real expectations, there didn’t seem to be room for engineers to make this decision themselves without opposing the intended organisational culture. “AI-first.”
Over the last year or so, I’ve observed an intense thirst from leaders in organisations to ‘see the light’; their networks are full of people in similar roles who are baffled that some engineers aren’t as excited about the future of software development, or are at least approaching it more cautiously. I can’t shake the feeling that whatever is driving this thirst is at least somewhat disconnected from reality, and it means anyone who doesn’t share that view has the pleasure of being gaslit on a daily basis.
Meanwhile, I’ve also observed engineers using and experimenting with these products, genuinely excited about them and considering how to fit them into their practices. Others have experimented and concluded that they aren’t useful to them right now, or only fill a certain niche. Both of these responses seem a lot more rational to me, even though I don’t necessarily hold that same excitement.
So. Boundaries.
Since this hype cycle began, I’ve been trying to make sense of it both as a professional who at times has to be pragmatic to align with organisational goals, and as a person who has their own values and ideals.
As my professional self, I see plenty of ways in which this can change how we write software, both at the more abstract level of implementing automations using natural language, and at the practical level of code generation and manipulation. That’s fairly uncontroversial. I also see a bunch of failure modes due to the way LLMs inherently work: they generate plausible text based on an input; there is no reasoning or understanding, which makes them prone to subtle and not-so-subtle errors in their output. This is not something that can be ‘fixed’; it’s inherent in the technology. Depending on how it’s deployed and what problem it’s intended to solve, that may also be a totally acceptable tradeoff.
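To make that concrete, here’s a deliberately toy sketch of the generation loop (in Python, with a made-up six-word vocabulary and stand-in scores; no real model looks like this). The point is only the shape of the loop: the next token is sampled from a probability distribution over likely continuations, and nothing anywhere checks whether the result is true.

    # A toy sketch of autoregressive generation. The "model" here is a
    # stand-in returning made-up scores; a real LLM learns its scores
    # from training data, but the sampling loop has the same shape.
    import math
    import random

    vocab = ["the", "cat", "sat", "on", "a", "mat"]

    def toy_logits(context):
        # Hypothetical scores for each vocabulary token given the context.
        return [random.uniform(-1.0, 1.0) for _ in vocab]

    def sample_next(context):
        logits = toy_logits(context)
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        weights = [e / total for e in exps]  # softmax: scores -> probabilities
        # Plausibility is all that's measured; truth never enters the loop.
        return random.choices(vocab, weights=weights)[0]

    context = ["the", "cat"]
    for _ in range(4):
        context.append(sample_next(context))
    print(" ".join(context))

Swap the stand-in scores for a trained network and that’s the essence of it: fluent, statistically grounded output, with correctness only ever a side effect.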
As my normal, messy, every-day self, I see different things.
Environmental: these models are energy intensive to train and operate, requiring specialised hardware and the associated infrastructure to run it (often including potable water for cooling). There now seems to be a data centre gold rush of sorts, and these data centres often seem to land near marginalised communities, or wherever land is cheap even when potable water is already scarce. The companies operating these products (OpenAI/Microsoft, Anthropic, Google, etc.) do not have a good track record when it comes to transparency, but the consumption does not appear to be negligible. Given that we are in a climate crisis and are not already running on 100% carbon-neutral energy, we hardly need their data to conclude that this makes the problem worse.
Moral: the way these models were trained, by consuming all of the content they could get their hands on (legal or otherwise, because fuck copyright laws apparently), and by assigning reinforcement tasks to underpaid and under-supported labourers in developing countries to improve model performance and moderate content, to the point of psychological damage.
Social: AI-induced psychosis is a thing now. People have taken their own lives after ChatGPT obediently helped them write a plan. Children’s toys are telling them how to start fires. Education systems are struggling in the wake of lobbying by tech companies to digitalise the classroom, despite resistance from educators (this started long before AI; it’s just the newest layer).
There are a lot of incredibly important issues tangled up in this AI push, and from what I’ve observed, tech leaders either aren’t aware of them, shrug them off with “it’ll get cheaper/more efficient”, or simply don’t care.
Personally and professionally, I can’t justify participating in the ways my industry is using LLMs at all. It is simply not worth the cost to the planet and our people.
The really frustrating part of this is that it’s solvable; it just takes work. Work to construct datasets that respect people’s rights and culture. Work (and money) to train the models using well-supported, well-compensated labour. Here’s a sublime example of people doing that work.
Unfortunately, it’s work that the majority of the loudest voices in the room seem unwilling to do, for the sake of profit.