I want to be honest about something: most takes on this topic are either panic (“AI will take all the jobs”) or reassurance (“nothing will change, developers will always be needed”). Both miss what’s actually happening. The honest version: AI agents are systematically eliminating certain types of IT work while creating high demand for others. The shift is already happening. And it’s more nuanced than the headlines suggest.

What AI Agents Actually Do Well (That IT People Used to Do)

Before talking about what changes in IT careers, it’s worth being precise about what agents are getting good at. Not theoretically — practically, in production, right now.

Tier 1 support and triage. The questions that used to fill helpdesk queues: password resets, access requests, “how do I connect to the VPN,” software installation guidance. Agents handle these. Not perfectly, but well enough that teams are shrinking their Level 1 headcount.

Log analysis and anomaly detection. Scanning logs for errors, correlating incidents, identifying patterns in infrastructure behaviour. An agent can process a week of logs in seconds and surface anomalies that would take an engineer hours to find manually.

Routine change requests. Environment resets, dependency updates, access provisioning for known user types. The work that fills tickets on Friday afternoon.

Documentation and runbook generation. Given a system or process, agents can generate first-draft runbooks, update existing docs, and keep wikis current. Not flawlessly — but fast enough to change what “maintaining documentation” means.

Code review at scale. Catching style violations, security anti-patterns, missing error handling, test coverage gaps. Not replacing human review judgment, but reducing the volume of obvious issues that reach reviewers.
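To make the log-analysis point concrete, here is a minimal sketch of frequency-based anomaly surfacing over error logs: group errors into coarse signatures, then flag signatures that are new or spiking versus a baseline window. The signature masking and the spike threshold are simplifications invented for this illustration, not a description of how any particular agent works.

```python
import re
from collections import Counter

def error_signatures(log_lines):
    """Normalise ERROR lines into coarse signatures so repeats group together.
    Numbers are masked so 'timeout after 30s' and 'timeout after 31s' match.
    This is a deliberately crude heuristic for illustration."""
    sigs = []
    for line in log_lines:
        if "ERROR" not in line:
            continue
        tail = line.split("ERROR", 1)[1].strip()
        sigs.append(re.sub(r"\d+", "<N>", tail))
    return Counter(sigs)

def anomalies(baseline, current, spike_factor=3.0):
    """Flag signatures that are brand new, or whose frequency jumped
    by spike_factor versus the baseline window."""
    flagged = []
    for sig, count in current.items():
        base = baseline.get(sig, 0)
        if base == 0 or count >= spike_factor * base:
            flagged.append((sig, base, count))
    return flagged
```

The point of the sketch is the shape of the work: an engineer who used to read logs manually now curates the normalisation rules and the thresholds, and reviews what gets flagged.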

What This Means for Specific Roles

Support Engineers

The job used to be: receive ticket → diagnose → resolve → close. That middle section — for a large fraction of tickets — is now automatable. What survives and grows: playbook authorship and agent training. Someone has to define how the agent should handle an edge case. Someone has to review where it’s failing and why. Someone has to maintain the escalation paths that kick in when automation isn’t enough. The support engineer who thrives is the one who stops thinking “I resolve tickets” and starts thinking “I build the system that resolves tickets.”

SREs and Platform Engineers

These roles are in a fascinating position. The work that’s most automatable — incident response runbooks, routine scaling, alert acknowledgement — is also the work that SREs were already trying to automate out of existence. AI just accelerates that by years. What grows: reliability product thinking. Defining the SLOs that agents watch. Designing the automated remediation paths. Building the feedback loops that improve agent behaviour over time. This is higher-leverage work than manual incident response, and it’s where senior SREs are already gravitating. The risk: junior SREs who never learned incident response deeply because agents handled it before they could develop the intuition. That’s a real gap emerging in some teams.
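The “SLOs that agents watch” idea can be made concrete with a standard error-budget burn-rate calculation, plus a mapping from burn rate to an escalation path. The specific thresholds and the page/ticket/observe split below are illustrative assumptions, not a prescription.

```python
def error_budget_burn(slo_target, good, total):
    """Fraction of the error budget being consumed in a window.
    slo_target: e.g. 0.999; good/total: request counts in the window.
    A burn of 1.0 means the budget is being spent exactly on schedule."""
    if total == 0:
        return 0.0
    error_rate = 1 - good / total
    budget = 1 - slo_target
    return error_rate / budget

def remediation_action(burn, page_threshold=10.0, ticket_threshold=2.0):
    """Map burn rate to an escalation path (thresholds are illustrative):
    fast burn pages a human, slow burn files a ticket, otherwise
    automated remediation keeps watching."""
    if burn >= page_threshold:
        return "page"
    if burn >= ticket_threshold:
        return "ticket"
    return "observe"
```

Designing these thresholds and remediation paths, rather than acknowledging the alerts they produce, is exactly the higher-leverage work described above.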

Security Analysts

False positive management — the exhausting work of triaging hundreds of alerts to find the two genuine threats — is increasingly agent-handled. Routine compliance checks, access audit reports, vulnerability scanning summaries: agents are good at this. What grows: adversarial thinking and policy design. Red-teaming the agents themselves (prompt injection against your own security agent is a real attack vector). Designing access policies that agents can interpret correctly. Threat hunting — the creative, hypothesis-driven work of finding things that don’t match any known pattern. Security is one domain where human judgment is genuinely hard to replace because attackers adapt faster than training data can capture.

Software Developers

The coding itself is becoming cheaper. Writing boilerplate, scaffolding features, generating tests — agents handle this increasingly well. The question isn’t “will AI replace developers?” It’s “which parts of development are left when implementation is cheap?” The durable parts: architecture decisions, product thinking, and review. Deciding what to build, how it should compose with the rest of the system, and whether the AI’s output actually achieves the intent — these require the kind of judgment that comes from shipping real systems and learning from how they fail. The developer who treats AI as “faster autocomplete” will be surpassed by the developer who redesigns their entire workflow around AI as a first-class collaborator.

New Specialties That Are Emerging

These roles barely existed three years ago. They’re becoming critical infrastructure.

Agent Ops. Owning the production quality of AI agents: prompt versioning, evaluation datasets, incident response when an agent misbehaves, monitoring dashboards for agent behaviour. This is a software engineering role with a domain specialisation in AI systems.

AI Evaluators. Building the datasets and harnesses that measure whether agents are performing correctly. Designing rubrics for “correct behaviour” in ambiguous domains. Running red-team exercises. This sits at the intersection of QA, domain expertise, and ML understanding.

Data Stewards. Agents are only as good as the data they have access to. Keeping source-of-truth systems accurate, well-structured, and appropriately permissioned is increasingly critical infrastructure. This isn’t a glamorous role — it’s a load-bearing one.

Human-in-the-Loop Architects. Deciding, system by system, where human approval is required and where it can be safely automated. Designing the interfaces for human oversight. Measuring the cost of over-automation (errors) against the cost of under-automation (friction). This is a governance and design role that doesn’t fit neatly into existing job titles.
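To make the Agent Ops responsibilities tangible, here is a toy sketch of prompt versioning with an evaluation gate: every prompt version is content-addressed, and promotion to “live” is refused unless an evaluation score is attached and passes a bar. `PromptRegistry` and its promotion rule are hypothetical, invented for this illustration.

```python
import hashlib

def prompt_id(text):
    """Content-address a prompt so production traffic can be tied
    to an exact version of its instructions."""
    return hashlib.sha256(text.encode()).hexdigest()[:12]

class PromptRegistry:
    """Minimal registry sketch: store prompt versions, track which one
    is live, and refuse promotion without a passing eval score."""
    def __init__(self):
        self.versions = {}
        self.live = None

    def register(self, text, eval_score=None):
        pid = prompt_id(text)
        self.versions[pid] = {"text": text, "eval_score": eval_score}
        return pid

    def promote(self, pid, min_score=0.9):
        score = self.versions[pid]["eval_score"]
        if score is None or score < min_score:
            raise ValueError("refusing to promote an unevaluated or low-scoring prompt")
        self.live = pid
```

The design choice worth noticing is the gate: prompts are treated like deployable artifacts with provenance and quality checks, not like text someone edits in place.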

Skills to Invest In (Concrete and Specific)

Systems Thinking Over Task Execution

The most valuable engineers are the ones who understand how the pieces connect — why a change in the authentication layer affects the audit log, why a schema migration breaks three unrelated services. Agents are weak at this. They see files, not systems. The engineer who holds the system model in their head is the one directing the agent effectively.

Evaluation and Observability

If you can’t tell whether an agent is performing correctly, you can’t trust it. The ability to design evaluation datasets, define what “correct” means for a specific domain, and build the monitoring infrastructure to detect drift is becoming a core engineering skill.
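A minimal version of such an evaluation harness, with a crude drift check over historical scores, might look like this. The exact-match grading and the moving-average drift heuristic are deliberately simple assumptions, not a recommended production design.

```python
def run_eval(agent_fn, dataset, grader):
    """Score an agent against a labelled dataset.
    agent_fn: callable input -> output
    grader: callable (output, expected) -> bool"""
    results = [grader(agent_fn(case["input"]), case["expected"])
               for case in dataset]
    return sum(results) / len(results)

def drift_alert(history, window=5, drop=0.1):
    """Flag when the recent average score falls notably below the
    long-run average. Needs at least two full windows of history."""
    if len(history) < 2 * window:
        return False
    recent = sum(history[-window:]) / window
    longrun = sum(history[:-window]) / len(history[:-window])
    return longrun - recent > drop
```

The hard part in practice is not this scaffolding but what the sketch leaves as parameters: building a dataset that represents real traffic, and defining a grader for domains where “correct” is ambiguous.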

Prompt and Policy Authorship

Expressing constraints in machine-readable form — whether that’s a prompt, a CLAUDE.md, a rule set, or an access policy — is the new “writing code for the system.” The people who are good at this have precise, unambiguous communication skills applied to technical domains.
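As one way to picture “constraints in machine-readable form,” here is a toy first-match-wins rule set for approval decisions. The rule schema, the wildcard convention, and the fail-closed default are all assumptions made for this illustration, not any real policy language.

```python
# Hypothetical rule set: evaluated top to bottom, first match wins.
RULES = [
    {"action": "deploy", "env": "prod", "requires_approval": True},
    {"action": "deploy", "env": "staging", "requires_approval": False},
    {"action": "*", "env": "*", "requires_approval": True},  # catch-all
]

def needs_approval(action, env, rules=RULES):
    """Evaluate a request against the rule set; first matching rule wins.
    '*' matches any value. Fails closed if nothing matches."""
    for rule in rules:
        if rule["action"] in (action, "*") and rule["env"] in (env, "*"):
            return rule["requires_approval"]
    return True
```

Notice that the skill being exercised is exactly the one described above: the rules only work if the author states them precisely and orders them unambiguously.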

Domain Depth

This is the counterintuitive one. With AI handling more of the implementation, domain expertise becomes more valuable, not less. An agent doesn’t know that the “cascading deletion” in your data model is politically sensitive because of a compliance requirement from 18 months ago. You do. That context is hard to acquire and impossible to automate.

How to Think About Your Career Trajectory

A useful frame: the work closest to understanding intent and designing systems is durable. The work closest to executing known procedures is automatable. This means:

Move toward ownership, not execution. Own a system or a domain, not a task. Agents execute tasks. Humans own outcomes.

Document the “why” relentlessly. The context that makes your decisions defensible — the tradeoffs you’ve considered, the constraints you’re working within, the reasoning behind an architectural choice — is the hardest thing for AI to reconstruct. Write it down. It’s valuable to your future self, future teammates, and the agents that will work alongside you.

Volunteer for the experiments. When your team is piloting an agent for some workflow, be the one who’s closest to it. The people who understand what agents can and can’t do — from real experience, not from headlines — will shape how your organisation adopts them. That’s leverage.

Don’t abandon depth. The temptation is to become a generalist AI orchestrator who touches everything shallowly. The more durable path is deep expertise in something real (distributed systems, security, ML, frontend performance, data modelling) with the ability to use AI to accelerate in adjacent areas. Depth is what makes you the person who catches the agent’s mistakes — the person who knows the right answer when the agent is confidently wrong.

The Honest Summary

| Work type | Trajectory |
| --- | --- |
| Routine ticket resolution, Level 1 support | Automating quickly |
| Log scanning, anomaly detection | Automating quickly |
| Architecture and system design | Growing in value |
| Evaluation, quality, red-teaming | Growing in value |
| Agent ops, prompt engineering | New, high demand |
| Domain expertise | Growing in value |
| Documentation (first draft) | Automating |
| Documentation (context and reasoning) | Growing in value |
| Code implementation (boilerplate) | Automating |
| Code review and architectural oversight | Growing in value |
The IT professionals who will thrive aren’t the ones who resist this shift — they’re the ones who move deliberately toward the parts that are growing. Not because AI is forcing them to, but because those parts were always the most interesting work.