I have struggled with writing this for a while. It has the potential to be controversial: some people have very strong opinions on this, while my own opinion is more … nuanced.
I live in two different worlds regarding AI.
During the day, I work at a big tech company that is betting big on AI. AI is everywhere, and (at least some) people are excited about the possibilities.
Outside of work, when I log in to Mastodon, the typical poster there thinks that anyone using AI for anything should be ashamed, and that AI is unequivocally bad.
Going with the times
In Tech, things change all the time. If there is some new development that you don’t like, you might decide to “boycott” it – however, you risk looking very silly down the line when the thing you despise is just normal, while you made disagreeing with it your whole personality. For example:
- In the late 90s, some people were very upset about computers no longer having a floppy drive, or only supporting USB for peripherals, so they swore that they would never buy that new-fangled stuff. Try finding a laptop with a floppy drive today.
- Also in the 90s, you might have decided to ignore that new-fangled “internet” and “WWW” thing. Anyone can edit Wikipedia, right? That clearly makes the printed encyclopedia on the shelf much more useful!! You would have missed a lot of exciting stuff in tech in the following decades.
- In the 2010s, you might have decided that “There is no cloud, only other people’s computers”, but you would have missed out on a number of cool innovations. (That’s another post for the future.)
- On the other hand, in the early 2020s, you might have decided early on that NFTs are stupid, and you would have been right.
So what about AI? I feel the jury is still out, overall. There are aspects of the hype that are clearly stupid. There are also AI-related things that are useful. I feel that dismissing everything containing AI means depriving myself of some useful tools. Not using the best tools is going to mean that I cannot do my best work.
Maybe that’s the FOMO argument. But it is also the “lifelong learning” argument, particularly in a professional context.
Example: Coding assistants
Coding assistants can be a giant productivity boost. At Google and Microsoft, statistics claim that up to half of coding keystrokes are “autocompleted” from an AI assistant. I can believe it! Never mind that you sometimes end up deleting most of what the assistant has written because it has misunderstood what you wanted to do.
Some situations in which a coding assistant is helpful for me:
- Framework boilerplate. For instance, I recently ported a Go command-line tool to Cobra. The assistant could spit out `Command` structs and Cobra registration functions based on the other code in the file. I would imagine that JS frameworks work really well for this too.
- The boring parts of the code. “Create an argument parser supporting the `-f`, `-d` and `-h` options”. This, too, is kinda boilerplate.
- You can narrate your code by using comments like subheadings. You write something like `// Connect to the server.` and let the assistant propose a dozen lines.
- You can ask the chatbot for ideas on how to approach something, like “How do I make the JS on this page communicate back and forth with the server?”, let it suggest a websocket and make a skeleton implementation of the two sides.
What these things have in common: I want to understand the code that the assistant generates. And I have the last word and proofread everything. Typically, there are some bugs in what is generated. I don’t like the Vibe Coding approach, where I just let the machine make random edits until the errors go away.
If – and only if – you are the one that stays in control, an AI assistant can be like “a bicycle for the mind”.
Don’t buy the hype
While all of the above is pretty positive, there are many aspects of the AI hype that I really do not like. Consider “slop” – the endless stream of low-quality, uninspiring content created with LLMs, image generators, and more. Grifters seem to really like it.
Then there is the societal argument about the consequences of large-scale deployment of AI. There is clearly an effort by some to replace knowledge workers with unaccountable machines.
- You apply for a credit card, the AI declines your application, and there is no way to appeal.
- Translators and linguists are laid off because “AI does the job well enough” (cf. Duolingo).
Personally, I find the endless boasting about GenAI in particular just boring. Half the posts on my LinkedIn feed are people announcing they are leaving their jobs (which is to be expected there), while the other half are discussing generative AI. Are there no other topics?? But look at me writing this, I am part of that problem now :/
Conclusion
As I said above, I think the jury is still out. It is probable that the AI bubble will burst at some point, with painful consequences for the entire tech industry. It is also probable that GenAI of some sort is here to stay. So in my opinion, the best option is to use the tools that are genuinely useful without buying into all the hype.