Authenticity Guarantee: No AI was used to write this. These ramblings are my own.
Early Days
Something changed – updates from my manager became more loquacious, impeccably formatted, decorated with emojis, littered with the now infamous em dashes.
At first, I thought it was an uptick in quality, perhaps leading by example. My optimism was rapidly dashed when I found out that these frou-frou decorations are the hallmarks of putting your ramblings (or at least the basics of your idea and the points you want to emphasize) into an LLM. The well-organized output can be lengthier than required, and the pattern has only worsened with time.
Now our managers can 10x (maybe even 100x) their announcement output to the team, expecting us to magically have the bandwidth to read, parse, mull over, and respond to these extensive and frequent missives. I am guilty of seeing a wall of text and copying it into ChatGPT to ask for a summary. I’ve even asked it to suggest pointed questions.
I figure: If managers are using AI to do their job, why should I work differently?
This has led me to an awkward impasse: trying to find any meaning in the administrative aspects of my job. Like, what’s the point, if we’re all just meat bags pointing AI agents at each other?
I’m not here to just complain without offering solutions, though! Ideally, any AI-generated announcement that spans more than a few paragraphs should come with a short-form version: bullet points, or a condensed paragraph. Humans could quickly get the gist, then read the long-form post if they need more detail.
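This part is trivial to automate. Here’s a minimal sketch of the idea, assuming a local Ollama server on its default port with a model already pulled; the model name, prompt wording, and announcement.txt file are my own placeholders, not anyone’s actual tooling:

```python
# Sketch: auto-generate the short-form companion for a long announcement.
# Assumes `ollama serve` is running locally and a model has been pulled.
import requests

def tldr(announcement: str, model: str = "llama3") -> str:
    """Condense a long announcement into a few bullet points."""
    prompt = (
        "Summarize the following announcement as 3-5 bullet points. "
        "Keep only decisions and action items:\n\n" + announcement
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(tldr(open("announcement.txt").read()))
```

If the long version was generated by an LLM anyway, asking for the condensed version at the same time costs the sender nothing.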
The Jig is Up
It didn’t take long for my manager to admit this was exactly what he was doing, and in doing so, he gave the approach an informal blessing for the rest of the team.
What has followed is an unfortunate series of events.
- AI-generated docs and READMEs.
- A coworker 100% vibe coding an AI-enhanced observability stack, expecting the rest of the team to just “use AI” as well on this one.
- Additional R&D into AI tools on top of my other work responsibilities.
- Lots of talk about AI governance, with no action taken.
Some of these impact me more than others.
AI-Generated Codebases
I don’t have a problem with vibe coding.
I have a problem with a teammate vibe coding an entire codebase, including docs and READMEs, and presenting it to the team saying “just read the docs, you need curiosity and a willingness to explore and try this out”.
The sheer cheek of someone asking you to read docs written by AI to understand a codebase written by AI rubbed me the wrong way. Where’s the ownership, the care, the personal investment? And yet I can’t put my finger on a single reason it upset me…
The lack of skill required to produce the end product. That in turn leads to a lack of understanding when describing the end product, which leads to poor build instructions, which AI will happily churn out unless you explicitly ask it to check your work.
The idea of being totally dependent on an LLM to understand, build, and maintain a codebase. If you take Kernighan’s Law (debugging is twice as hard as writing the code in the first place) as tech gospel, then a human or a team of humans would have to be twice as smart as an LLM to debug a codebase written by that LLM. This disturbs me, as I firmly believe no such human exists. Perhaps codebases will eventually become so sprawling that, by necessity, they will be encoded in a denser language understood by AI but indecipherable to humans. That would be the final nail in the coffin of the professional developer, relegating human-driven development to the hobbyist tier.
A DevOps team lead with no formal coding experience, a 100% vibe coder. A sign of the times, perhaps? Definitely a major bummer to me.
Or perhaps it’s as simple as “What’s the point in doing this ourselves if AI is just going to do it instead?” That’s discouraging: why invest in growing skills that may be made completely irrelevant by AI? I’m not letting the discouragement stop me, though. I’m focusing on learning more about what interests me: game development, containerization, monitoring, and so on.
AI Docs – better than no docs?
I don’t think I’ve ever worked at a business that was satisfied with the level of documentation in codebases.
At the code shop I worked for, writing extensive documentation cost the customer time (and therefore more money), and it also gave the customer an easier offramp. It was against the company’s interest to document clients’ code (or so it was explained to me). Worse yet, the refrain “the code is self-documenting” came up frequently, and let me tell you, it wasn’t.
At the recruiting firm I worked for, documentation was non-existent, which came back to bite us in the ass. I remember my manager deleting critical EC2 instances and databases in a late-night, caffeine-fueled (and, I assume, other-substance-fueled) haze to reduce AWS spend. “Just restore from backups” was the reply. There were no backups.
At my current job, documentation is better than at previous employers, but is still wanting.
Enter: the solution you’ve all been waiting for – make AI write the docs.
I’ve taken a stab at this myself, with mixed results.
On one hand, it does work well – runbooks and playbooks from thin air, perfectly formatted and ready to paste into Confluence.
On the other hand, it can be over the top. Things are moving quickly, and since we’re a little understaffed, document updates aren’t a priority, so doc drift happens. Without building documentation into the product workflows, anything we generate immediately becomes manual tech debt. Using AI to create docs once kicks off a snowball of sprawling, outdated, or irrelevant docs that no one has yet cared to go back and clean up.
Confluence spaces are flooded with crud, or so it feels.
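Some of that drift is at least detectable automatically. Here’s a minimal sketch of the kind of freshness check I mean, not our actual setup; the docs/ folder of Markdown files and the 90-day threshold are assumptions:

```python
# Sketch: flag docs that haven't been touched in 90 days so the
# cleanup list builds itself instead of living in someone's head.
import time
from pathlib import Path

STALE_AFTER_DAYS = 90

def stale_docs(root: str = "docs") -> list[Path]:
    """Return docs whose last modification is older than the cutoff."""
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    return [p for p in Path(root).rglob("*.md") if p.stat().st_mtime < cutoff]

for doc in stale_docs():
    print(f"stale: {doc}")
```

Run something like that in CI and the drift is at least visible, even if nobody gets around to fixing it.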
Again, back to “what’s the point of having docs, if they’re outdated or not useful? Is it to have a historical record? Is it to check a box saying ‘we have documentation’?”
I don’t know.
Tooling, Governance, R&D, Clueless Vendor Reps
Everyone and their brother is bragging about how their platform is “AI Driven”, “AI First”, “AI Enhanced”, etc. Pick a buzzword, prepend it with “AI”, and you’ll find no shortage of Google results.
My experiences with these new AI plugins have been mostly trash, with one notable and outstanding exception – what I would consider “AI done right”.
Pulumi: Turns out, when AI has full access to your cloud state, it’s really good at helping troubleshoot cloud issues. Pulumi has set the bar high, having helped me troubleshoot both cloud networking and storage configuration issues in the past few months.
Other than that, Azure Copilot, GitHub Copilot, Rovo (Atlassian), and a few other vendors I will not call out for legal reasons have been a huge disappointment. Slow response times, inaccurate or completely irrelevant results, and “I’m sorry Dave, I can’t do that” responses have been massive turnoffs for me.
Let’s take Teams and Copilot. The only use case I’ve wanted so far is “Hey, {coworker} sent me a URL a while back, but it was in one of a dozen active group chats and I can’t remember which. Could you find it for me?” That is the holy grail of AI in a chat client for me.
Teams Copilot: “I don’t have access to chats or channels, but I can search OneDrive or SharePoint.” U S E L E S S
So, again, what’s the point?
Energy Consumption & Hardware Hoarding
I don’t have the most in-depth understanding of this facet of AI, but trending headlines suggest:
- We don’t have sufficient electric capacity in many places
- We can’t manufacture chips fast enough (GPUs and memory)
- Training AI models takes a lot of power (this may be an understatement)
It seems to me that a whole lot of time, effort, energy, and money has gone into producing mediocre tools (so far). It’s still early.
If you’re a hobbyist like me, you’ve grabbed Ollama, turned up a few models, and seen amazing response times. If LLMs run this well on an old gaming desktop pressed into service as a Linux server, surely AI can’t be using all that much electricity? I’m of course neglecting all the power that went into banks of GPUs running for months straight to train that model, not to mention the power required to host the servers.
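For the curious, the hobbyist benchmark I’m describing is about this involved; a minimal sketch assuming `ollama serve` is running and a model has been pulled (the model name and prompt are placeholders):

```python
# Sketch: time a single prompt against a local Ollama server to see
# how an old desktop holds up as an inference box.
import time
import requests

start = time.perf_counter()
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3",
          "prompt": "Explain evaporative cooling in one sentence.",
          "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(f"{time.perf_counter() - start:.1f}s: {resp.json()['response']}")
```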
Even deeper into the energy rabbit hole, there are municipalities denying construction permits for datacenters because they lack the energy capacity to support them. TerraPower, among others, is building Natrium reactors: nuclear power plants that use molten sodium as the heat-transfer medium to the steam generators. This additional separation of concerns (reactor -> sodium exchange -> steam turbines) should provide a far higher standard of safety and a far lower risk of a Three Mile Island or Chernobyl-like event.
Let’s not get sidetracked.
In addition to power, these servers need to be cooled, which can require an enormous amount of fresh water for evaporative cooling systems. An EESI article claims the following:
“Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people.”
Now, you can’t just destroy matter: the water is evaporated into the air and (eventually) reclaimed by nature. But unless you’re building your own condensers to reclaim that water, it’s a massive draw. I can’t help but wonder whether these tools are really worth it.
Why the dead sprint towards more data centers? What is so important that requires this insanely resource heavy approach? From what I’ve seen so far… nothing.
Hardware – Nvidia had to be mentioned at some point here. The world leader in AI chips and US stock market sweetheart, beloved by AI bros and gamers alike, Nvidia is hoarding silicon like there’s no tomorrow. Originally, the AI boom caused Nvidia GPU prices to spike, giving AMD and Intel a golden opportunity to offer consumers “almost as good” performance for less than the staggeringly expensive Nvidia cards. If only that were the end of it.
As time went on, RAM prices climbed as a result of a silicon shortage, a shortage driven heavily by Nvidia ramping up production of its chips. Building a new computer with latest-gen hardware is so expensive now that even Linus from LTT has suggested “channeling your inner scrapyard warrior” and building new machines partly from parts sourced on the used market, namely eBay and online marketplaces.
The point is, consumers are now being squeezed by the AI madness as well – again, why?
Where do we go from here?
I don’t know what the future holds.
- Maybe there’s a breakthrough in LLM tech that really does launch us into an era of an agentic workforce.
- Maybe there’s a huge breakthrough in AGI (Artificial General Intelligence, which is different from an LLM, in case you’re wondering) that renders the idea of human leadership and planning obsolete.
- Maybe there’s a huge breakthrough in robotics that actually helps us realize a future like “The Jetsons”. Maybe it will be more Asimov-ian than that.
The above list is not exhaustive, just the three big possibilities on my mind. If any one of these breakthroughs happens, it will turn the modern world on its ear. If both the AGI and robotics breakthroughs happen, there’s no telling what Pandora’s box we’ll open, but given the breakneck pace the industry has been maintaining, I doubt anyone will stop to think it over.
In short, LLMs are amusing, General AI is fascinating, and the field of robotics has never been more exciting than today. That said, I think we’ve jumped the gun a little on how impactful it really is, even when properly implemented.
For me, I use it where and when I need it. I don’t need it to write all my code, just the boring stuff. I don’t need it to create images, but I will use it to source recipes (and it hasn’t let me down yet). Most importantly, it’s not a necessity – and I think we should all make a point of keeping it that way.
Notes
- There’s so much more I could talk about, but not authoritatively.
- AI Audio and Video as they pertain to deepfakes, and how this may end up being the end of any trust in online media.
- Bots – seriously, possibly the end of genuine social media networks.
- War – AI soldiers, ground and air vehicles, even nukes.
- Art – AI-generated music, movies, pictures, etc. If an AI-controlled robot paints a masterpiece, who can argue its worth (either way)?
- Religion – An amusing scenario where an AI is tasked with creating a grassroots religion through the web. Less amusing if the religion can grow to scale.
- Politics – AI summaries of lengthy bills, AI used to create campaign platforms that will “play well” but perhaps the candidate doesn’t care about.
- Lagging legislation and legal frameworks.
- I don’t hold productive or experimental AI usage against anyone.
- AI will be used for terrible things, and I’m not sure we’re prepared to hear about it, prosecute it, or stop it.
- Artificial Intelligence (AI) is a massive catchall for distinct applications:
- Machine Learning: Learning patterns from data without explicit programming. Think of playing a video game by just mashing buttons and writing down the presses and results, then picking the presses that gave you the best results and trying to improve from there (see the toy sketch after these notes).
- Natural Language Processing: Think ChatGPT
- Computer Vision: Facial recognition, image analysis, the systems in self-driving cars (and some not-so-self-driving cars)
- Robotics: Reasoning for movement and interaction.
- Generative AI: Creating movies, music, etc.
- The holy grail of these endeavors is Artificial General Intelligence: a competent system that can learn and retrain itself in real time, perhaps even one that is self-aware or conscious, insofar as we understand those concepts.
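And for the button-mashing analogy in the Machine Learning note above, here’s a toy epsilon-greedy sketch. The payoff odds are invented for illustration and this isn’t any real system, but it is the trial-and-error loop in miniature:

```python
# Toy "button masher": explore a random button 10% of the time,
# otherwise press whichever button has the best average payoff so far.
import random

true_payoff = [0.2, 0.5, 0.8, 0.3]   # hidden "game" odds per button (made up)
totals = [0.0] * 4                   # reward collected per button
presses = [0] * 4                    # times each button was pressed

for _ in range(10_000):
    if random.random() < 0.1:        # explore: mash a random button
        b = random.randrange(4)
    else:                            # exploit: best average so far
        b = max(range(4),
                key=lambda i: totals[i] / presses[i] if presses[i] else 0.0)
    reward = 1 if random.random() < true_payoff[b] else 0
    totals[b] += reward
    presses[b] += 1

print("press counts:", presses)      # button 2 (0.8 odds) should dominate
```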
