Mohamed Ghazi
2026-03-23
14 min read

The Era of Coders Without Code?

Mohamed Ghazi, Technical Lead
I've been leading engineering teams for a while, and I'll tell you something: I'm tired of the AI conversation going the same way every time for the past two years. Someone brings it up in a meeting: half the room gets defensive, the other half goes quiet — not because they disagree, but because they don't know yet what it means for them, and it keeps shifting even as the conversation is happening.

I was in that second group for longer than I'd like to admit. This is me finally saying something.

It moved faster than I expected

A couple of years ago, I showed my team GitHub Copilot. The reaction was that particular kind of impressed-but-uncomfortable you get when something works better than you wanted it to. It finished functions. Fine. A trick.

Then things got serious, fast — and I don't think most people clocked how fast until it had already happened.

The tools we use today don't just complete lines. They generate full modules, write tests, draft documentation. In some workflows they pass outputs between each other like a little autonomous team. It's embedded in VS Code, JetBrains, your terminal. It's not a beta feature you have to opt into. It's just there, on a Tuesday, and you're using it.

The Real Encounter

The first time you really watch it work — not demo-watch, but actually sit with it solving something real — a specific fear kicks in. I've seen it on the faces of senior engineers I deeply respect. The thought is: if it can generate, review, and fix code, what exactly is left for me?

I want to give that question a real answer, not a reassuring one.

What these tools actually are (and aren't)

Everything your team is using right now — Copilot, Cursor, ChatGPT, Claude — is a pattern-matching system. A spectacular one. But pattern-matching is not understanding, and I think that distinction matters more than people give it credit for.

There are no intentions behind the output, no awareness of what the product does for actual users, no judgment about whether the architecture will make sense in eighteen months. It doesn't know your codebase has a weird legacy reason for doing things the way it does. It doesn't know your team has tried this approach before and it caused three outages.

Where the AI Shines

What it is good at — genuinely, impressively good at — is anything structured, repetitive, and well-documented. Which, honestly, describes a lot of what fills our days.

Prompt:
create a TypeScript function that validates an email, throws on invalid input, returns normalized value

And it's done in ten seconds. Correct, clean, handles edge cases. What used to eat fifteen minutes of focused attention is now something you don't have to think about.
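To make that concrete, here is a minimal sketch of the kind of function such a prompt tends to produce. The regex, the error class, and the names (`normalizeEmail`, `InvalidEmailError`) are illustrative assumptions, not a canonical answer:

```typescript
// Hypothetical output for the prompt above. The validation regex is a
// deliberately simple "has local part, @, domain, dot" check, not RFC 5322.
class InvalidEmailError extends Error {
  constructor(input: string) {
    super(`Invalid email: ${input}`);
    this.name = "InvalidEmailError";
  }
}

const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

// Throws on invalid input, returns the normalized (trimmed, lowercased) value.
function normalizeEmail(input: string): string {
  const trimmed = input.trim().toLowerCase();
  if (!EMAIL_RE.test(trimmed)) {
    throw new InvalidEmailError(input);
  }
  return trimmed;
}
```

Note what the tool did not decide: whether normalization means lowercasing, whether to throw or return a result type, how strict the regex should be. Those were all choices baked into the prompt or left to a default.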

But notice what that prompt required: someone who already knew what they needed, why they needed it, and how to describe it precisely. The tool executed. A person decided.

The Confidently Wrong Trap

The tool is also confidently wrong, sometimes in ways that are hard to spot. I watched a senior engineer on my team spend forty minutes debugging a service method the AI had generated. The code was clean. It looked right. It was logically coherent. It also completely misunderstood the domain problem it was supposed to solve — because it was pattern-matching against similar-looking problems, not reasoning about ours. No error. No warning. Just a plausible-looking answer to the wrong question.

That's the thing about a system that predicts what comes next: it doesn't know when the question itself was off. A developer does.

Where developer attention actually goes now

This is the shift I keep trying to articulate to people on my team, especially the ones who are anxious about it.

  • A year ago the job was: how do I write this function?
  • Today the job is: what should this function do, how does it fit the system, and what constraints must it obey?

Those are not easier questions. They're actually the harder ones — the ones that used to get skipped because we were too busy writing the function.

Now the prompt becomes:

Prompt:
generate a service method following repository pattern, async, paginated, consistent with the existing project structure

The tool writes the code. But who defined the pattern? Who decided on pagination? Who knows the codebase well enough to ask for consistency in the first place?

A developer did. One who understands the system, not just the syntax.
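As a sketch of what that prompt presupposes: every name below (`User`, `UserRepository`, `listUsers`) is invented for illustration, but the decisions in it are the point — one-based pages, a capped page size, an offset calculation. None of those come from the tool:

```typescript
// Illustrative repository-pattern sketch; entities and interfaces are
// assumptions, not any real project's structure.
interface User {
  id: number;
  email: string;
}

interface Page<T> {
  items: T[];
  total: number;
  page: number;
  pageSize: number;
}

interface UserRepository {
  findPage(offset: number, limit: number): Promise<{ rows: User[]; total: number }>;
}

// Async, paginated service method. The clamping rules are exactly the kind
// of judgment call a prompt smuggles in: someone decided pages are 1-based
// and that no caller gets more than 100 rows at once.
async function listUsers(
  repo: UserRepository,
  page: number = 1,
  pageSize: number = 20
): Promise<Page<User>> {
  const size = Math.min(Math.max(pageSize, 1), 100); // cap to protect the store
  const current = Math.max(page, 1);
  const { rows, total } = await repo.findPage((current - 1) * size, size);
  return { items: rows, total, page: current, pageSize: size };
}
```

The generator can emit this in seconds. Deciding that a cap belongs there at all, and what number it should be, is the part that still requires knowing the system.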

That's the thing I keep landing on: AI is a threat to the mechanical parts of the role. If your value was always in understanding systems, making architecture calls, pushing back on bad requirements — none of that is in danger, as of now.

We've seen this panic before

A teammate made a comparison a few days ago that I haven't been able to shake. He pointed out that this is almost identical to when drag-and-drop builders and low-code platforms first showed up. The reaction then was the same: anyone can build an app now, so why do we need developers?

And technically — they could. Simple things got genuinely easier. Some categories of simple work did disappear. But the systems didn't get simpler. They got larger, more integrated, more dependent on people who understood performance, security, maintainability, and how everything connects under the surface. The easy work got absorbed by tools and the hard work started mattering more. I think the same thing is happening again, and I think we'll look back in five years and see it clearly. Right now we're too close.

The part I won't paper over

I don't want to do the thing where I acknowledge concern and then immediately dismiss it with optimism. Some of this is real.

When one developer with the right AI setup can do what used to take two or three people, the headcount math changes — though I'd add an honest caveat: the gains are real but uneven, and they show up a lot less clearly when you remember that writing code was never more than a third of the job to begin with. I've had that conversation with leadership and it wasn't abstract. Junior tasks that were once entry points into the profession — they're shrinking. Manual testing, repetitive integration work — shrinking. That affects real careers, particularly early-career developers who are trying to build foundational skills in an environment where those foundations are increasingly abstracted away before they get a chance to pour them.

The Leadership Debt

There's a longer-term version of this problem that nobody in leadership wants to sit with. If junior roles keep shrinking — if we collectively decide that AI handles the entry-level work so we don't need to hire for it — we're quietly dismantling the pipeline that produces senior engineers. The person who will be your lead architect in eight years is, right now, supposed to be doing exactly the work we're automating away. I don't know what the industry looks like when that debt comes due. I don't think anyone does.

If you manage a team, the junior hiring conversation is going to come to you — and the framing will be about efficiency. Push back on that framing. Not sentimentally, but practically: the senior engineers you'll need in five years are the juniors you hire and develop today. That pipeline doesn't rebuild quickly once it's gone. Hire them. Structure their growth differently than you used to — more pairing, more deliberate code review, more explicit teaching of the judgment that AI can't model. But hire them.

And if you're further up — if you're the one having the headcount conversation — ask a different question than the one finance is asking. Don't ask how many developers AI replaces. Ask whether your team can still explain, own, and fix what the AI produces under pressure. If the answer is getting shakier, the productivity number is a vanity metric. The real cost shows up later, in an incident at 2am, when the person on call doesn't understand the codebase they're supposed to fix.

I don't have a clean answer to any of this. I think anyone who does is selling something.

The risk nobody talks about enough

Here's the thing I actually lose sleep over — more than job displacement headlines.

When someone starts accepting generated output without reading it, stops debugging, or stops building intuition about why something works, the tool quietly becomes a ceiling instead of a floor. The skill decays and the developer becomes a prompt dispatcher.

I saw this in the most embarrassing way possible. During a routine PR review I came across a comment sitting right at the top of a freshly committed file. The AI had written it. The developer had not noticed it. It read:

"Got it, (Prompt Dispatcher Name)! Here's the updated custom hook with all the edge cases covered, but remember to remove this comment before pushing it."

(Committed to the repository, visible to everyone.)

I didn't know whether to laugh or schedule a one-on-one. I did both. It became a running joke on the team, but what's underneath it isn't funny at all: if you're copying output without reading it, you're not using a tool. You're being operated by one.

The committed comment is embarrassing. The version that keeps me up is the one where the mistake isn't visible. Because the tool doesn't just occasionally write clumsy code — it writes plausible-looking code with security holes baked in. Hardcoded credentials. Auth checks on the wrong side of the stack. Clean, confident, and wrong in ways that won't show up in a PR review if the reviewer isn't specifically looking for them. I've started treating AI-generated code the way I'd treat code from a contractor I've never worked with before: trust nothing until it's proven, and verify the things that matter before they go anywhere near production.
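Here is an invented, deliberately small illustration of that failure mode — not code from any real review, just the shape of "auth check on the wrong side" the paragraph above describes:

```typescript
// Hypothetical example. Both functions are invented for illustration.

// The plausible-but-wrong version: clean, typed, and it trusts an
// authorization flag that the client controls. Nothing here looks broken
// in a quick PR skim.
function deleteAccountUnsafe(req: { body: { isAdmin: boolean; userId: string } }): string {
  if (req.body.isAdmin) {
    // auth decision based on client-supplied input
    return `deleted ${req.body.userId}`;
  }
  return "forbidden";
}

// The version a reviewer has to insist on: authorization derived from a
// server-verified session, never from the request body.
function deleteAccountSafe(
  session: { role: "admin" | "user" },
  userId: string
): string {
  if (session.role !== "admin") return "forbidden";
  return `deleted ${userId}`;
}
```

The unsafe version compiles, passes a happy-path test, and reads fine. Only a reviewer asking "where does `isAdmin` actually come from?" catches it.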

I've started having direct conversations about this. Use it — genuinely, please use it, it's a real multiplier, but own what it produces, understand it, be able to explain it, modify it, and defend the decision behind it.

The engineers I trust most are the ones who use AI to move faster and then immediately ask: but does this actually make sense here? That pause is not something any model is doing for you.

What keeps you valuable

Use it to generate, to kill boilerplate, or to explore ten approaches in the time it used to take to think through one. But design the architecture yourself, understand the logic, and know how the system behaves when the tool isn't in the room.

One thing I've started doing deliberately: I keep a part of my workflow AI-free. Not out of principle — out of maintenance. The instinct for why something is wrong, the feel for when an architecture is about to cause pain two quarters from now — that only stays sharp if you're still doing some of the work with your own hands.

"Use the tool. But don't let it take all the repetition, because some of that repetition is how you stay calibrated."

The developers who do well in the next few years won't be the ones who resist AI to prove a point. They also won't be the ones who outsource their thinking to it entirely and wonder why they feel hollowed out after six months. They'll be the ones who get genuinely fluent in using it — while holding onto the judgment, the taste, the systems thinking that no model has come close to replicating.

So — is this the era of coders without code?

Not in the way the headline implies. But it's the era where writing code stopped being the central skill that defined a good developer. That's a real change, and it's worth taking seriously.

The software isn't getting simpler. The responsibility isn't shrinking. If anything, the demand for people who can think clearly about what should be built — and why, and how it should behave under pressure — has only grown.

"The code may write itself. The thinking still has to be yours."

Focus Keywords

#Generative AI, #AI Coding Tools, #Future of Software Engineering, #Engineering Leadership, #Developer Career, #Systems Thinking, #Developer Productivity, #GitHub Copilot, #Software Architecture, #Junior Developers, #Claude, #Cursor, #Automation, #n8n, #AI revolution
