Engineering in the Age of AI


We are told Artificial Intelligence is a productivity miracle. In the world of systems engineering, the promise is enticing. The narrative of the moment is “the death of Terraform” and the rise of “autonomous GitOps.” They say we can finally move beyond the friction of manual infrastructure and let agents manage our fleets with automated precision.

But the more I think about it, the more I realize this tool is distorting the things that make being a professional actually meaningful.

The Hotdog Economy

As an engineering manager, I see the pressure to chase velocity at the expense of our professional souls every day. I face the constant internal conflict of defending the slow, deliberate path to directors who only see the bottom line of a dashboard.

The logic behind a declarative model is solid. It makes sense to want a fleet that follows a set of rules without having to babysit every single update. 
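The appeal is easy to see in miniature. A minimal sketch of the declarative idea, assuming hypothetical host names and package specs, looks something like this: you describe the state you want, and a reconciler computes only the changes needed to get there.

```python
# A toy reconciler illustrating the declarative model.
# The hosts and package specs below are illustrative, not real infrastructure.

desired = {
    "web-01": {"pkg": "nginx", "version": "1.25"},
    "web-02": {"pkg": "nginx", "version": "1.25"},
}

actual = {
    "web-01": {"pkg": "nginx", "version": "1.25"},
    "web-02": {"pkg": "nginx", "version": "1.18"},
}

def plan(desired, actual):
    """Return the list of changes needed to make `actual` match `desired`."""
    changes = []
    for host, spec in desired.items():
        if actual.get(host) != spec:
            changes.append((host, spec))
    return changes

# Only the drifted host shows up in the plan; compliant hosts are untouched.
print(plan(desired, actual))
```

That diff-and-apply loop is the whole pitch: no babysitting individual machines, just a rule set and a reconciliation step. The trouble this essay describes starts when an agent writes the rule set for you.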

When you use an agent to “one-shot” your infrastructure, you might end up with a functional environment, but to use a popular analogy, it is a hotdog. In the same way a hotdog is mechanically separated meat that contains disgusting fillers, AI is mechanically separated logic that may contain any number of bugs and inefficiencies.

It has calories. It fits in the bun. It fills your stomach. It works. But you have no idea what is inside it. You don’t see the fillers, the stabilizers, the brittle logic until the system fails under pressure.

When you strip the friction out of a job, you also lose the satisfaction of actually solving a problem. There is a specific kind of growth that only happens when you wrestle with a difficult configuration or hunt down why a machine isn’t behaving. If we hand those struggles over to an algorithm, we are bypassing the part of the work that forces us to learn and adapt. Our progress as engineers stops, we end up with a result we didn’t earn, and our own skills start to fade because we simply stopped using them.

The Regression Toward the Mean

Another problem is that an LLM is not a tool for original thinking or strategic genius. It is a presentation product that acts as a regression toward the mean. It is a blender containing the entirety of the internet’s output. By nature, an LLM cannot be an outlier, yet all great engineering and strategy happen at the edges.

These models provide generic, clustered answers that lack the conviction of human judgment. Because the technology is eloquent and confident, it can make a mediocre idea sound like brilliance. We risk entering a feedback loop of mediocrity where the AI trains on its own average output and the average itself begins to degrade because we have stopped feeding the system original, outlier, human thought.

An agent can translate a script from Python to Swift in fifteen minutes, a task that might take a human hours. On paper, that is a massive gain. But this speed creates a hidden tax. If you do not spend the time to refactor and document that code, you are just inheriting cognitive debt.

When you earn a solution through manual effort, you understand the why behind every line. You care about the thing you built, and you want it to be great. Bug fixes are an act of care and passion because you want to build the best possible thing you can.

When an agent generates it, you are left with a tower you didn’t build. You might save time in the morning only to spend twice as much in the afternoon trying to understand why the agent hallucinated a command that doesn’t exist. You lose the desire to support your systems with care and passion because you did not live the moments that built them.

The Responsibility Vacuum

There is also a serious issue with accountability when an algorithm starts making the calls. It becomes too easy to blame the system when things go wrong. If an AI decides to lock out a user or wipe a drive based on a pattern it thinks it sees, it isn’t the one that has to look that person in the eye and explain why.

In these scenarios, the human engineer becomes the moral crumple zone. We are kept in the loop just enough to take the blame when the automated system fails, but we lack the deep context required to prevent the failure in the first place. It acts as a shield that lets us distance ourselves from the consequences of our own infrastructure.

AI doesn’t actually exist in the physical world. It lives in a space made of math and probability where it never has to deal with the messy reality of a broken screen, a bad connection, or a frustrated human being. It doesn’t get tired or feel the weight of a mistake because it doesn’t have a life to live. Humans need to stay at the center of this process because we are the ones who have to live with the results.

Choosing the Hard Way

The danger is not just that our skills will atrophy. The real threat is that we will simply stop caring. By outsourcing the struggle, we lose our connection to the work. Engineering used to be a source of pride, a craft where you could lose yourself in a problem for hours and emerge with something elegant. You cannot achieve a flow state by watching an AI generate a code block. You cannot feel the satisfaction of a breakthrough if you never experienced the tension of being stuck.

We can build guardrails, and we probably should, but history shows that the human desire for the path of least resistance eventually clears those obstacles away. 

The only real defense is individual awareness. Authenticity is forged in friction.

When we prioritize the frictionless, the work becomes purely transactional. It stops being fun or engaging. We can crap out whatever feature or infrastructure update is requested, but there is no nutrition in that victory. If we lose the passion for building the right thing the right way at the right time, we are left with a hollow shell of a career.

Outcomes cannot be all that matter. We must realize that the value is in the journey, what you learned along the way, and who you learned it with. We have to be willing to choose the hard way. The manual, sometimes inefficient way is where the meaning lives. 

I’m not saying renounce AI entirely. I’m just begging for some caution. My advice is to use it carefully and responsibly. Maintain individual awareness. Use AI to understand a problem, but don’t use it to solve the problem. Don’t use it to build the solution. Don’t use it to remove the friction. Don’t expect it to have novel ideas or to create unique workflows.

Do not voluntarily hand over the keys to your creativity and expertise for the sake of convenience. If we do, we risk losing both our jobs and our meaning. We will end up with a world of broken systems, built by machines that don’t understand the goal and maintained by people who no longer remember why they wanted to be engineers in the first place.
