By 2026, “AI-proficient” on an engineering CV is noise.
It used to signal something — an early adopter, someone paying attention to tools. By now, it’s a default line. Nearly every engineer submitting a CV in 2024 or later has added it. Most have used ChatGPT or Claude for something. Few have structured the practice enough to reliably ship better work with it.
The problem for hiring managers is immediate: you need to know which engineer will use AI as a force multiplier on delivery and which will use it to hide information gaps.
Across six years of assessments of 500+ engineering candidates in Southeast Asia, three observable behavioral tiers emerge. They’re not about which tools someone uses. They’re about how someone validates their output.
Tier 1: Output-Dependent
These engineers treat AI as a finishing tool. They use it to write boilerplate, generate test cases, or draft comments. When the output doesn’t work, they re-prompt. They debug by asking the AI to fix it. When the AI’s fix doesn’t work, they get stuck.
Under pressure — a deadline, a production bug, a complex refactor — they don’t degrade gracefully. They get faster at re-prompting. They ship output they haven’t fully validated. They’re less likely to catch edge cases.
CV signals:
- Heavy emphasis on tool names (ChatGPT, GitHub Copilot, Claude)
- Vague language: “leveraged AI,” “AI-augmented workflow,” “used AI to accelerate development”
- No specific examples of what they built or how AI was involved
- Brief employment stints (4–6 months)
- Job transitions right after large projects ship
Technical screen probe: Give them a code snippet with a subtle bug. Ask them to review it and explain what’s wrong. Tier 1 engineers will often miss it on first read. When you point it out, they’ll say “Oh, the AI should have caught that” or “Yeah, I would have noticed if I was more careful.” They don’t take ownership of the gap.
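The snippet you hand them doesn’t need to be elaborate. A hypothetical example of the kind of probe that works well, in Python: the function looks correct, passes a casual read, and works on the first call, but it carries a classic subtle bug (a mutable default argument shared across calls).

```python
def add_tag(tag, tags=[]):
    """Append a tag and return the tag list.

    Subtle bug: the default [] is evaluated once, at definition
    time, so every call that omits `tags` mutates the same list.
    """
    tags.append(tag)
    return tags

first = add_tag("urgent")
second = add_tag("billing")
# Expected ["billing"]; actually ["urgent", "billing"] — and
# `first` is the very same list object, so it has mutated too.
```

A Tier 1 engineer reads the happy path and signs off. A Tier 3 engineer asks what happens on the second call before running anything.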
First-month deliverable risk: High. They’ll ship code faster. It will work in the happy path. Edge cases and error handling will be inconsistent. They’ll need more review cycles.
Tier 2: Output-Aware
These engineers use AI as a drafting tool. They generate something, they read it, they think about whether it makes sense. They ask themselves: “Is this doing what I actually want?” If the answer is no, they revise the prompt or rewrite it themselves.
When their AI-generated code doesn’t work, they can usually reason about why without re-prompting. They understand the gap between the spec they gave the AI and what they actually needed. They iterate, but they iterate with intent.
CV signals:
- Mention of AI in context of specific projects or problems
- Mix of AI tooling and non-AI work described with similar detail
- Descriptions of iteration or refinement
- Consistent employment stints (12+ months)
- Examples of shipped work, not just tools used
Technical screen probe: Same code snippet with a subtle bug. Tier 2 engineers usually catch it on first read. If they miss it, they’ll reason about why when you point it out.
First-month deliverable risk: Low to moderate. They’ll ship code you can review with confidence. Gaps will come from mismatched assumptions about the spec, not from a difference between what they said they’d do and what they shipped.
Tier 3: Output-Governing
These engineers treat AI as a first draft with known failure modes. They use it to move faster on things they already understand. They never use it on things they don’t understand. They’re suspicious of AI output on novel problems, unfamiliar codebases, or security-critical code.
They can articulate what AI is good for in their workflow and what it’s a liability for. They know which categories of problems AI output is reliable for (data transformation, API scaffolding, test generation on known specs) and which categories it’s not (novel algorithms, security reviews, design decisions).
Under pressure, they don’t speed up significantly. They don’t take risks they haven’t already evaluated. They’re actually slower than Tier 2 engineers on short timelines, but they have fewer rework cycles.
CV signals:
- Specific examples of where AI was useful and where they didn’t use it
- Language that shows judgment (“Used AI for boilerplate, but reviewed all business logic by hand”)
- History of shipping without rework cycles
- Longer employment stints (18+ months)
- Evidence of learning across projects, not just tool mastery
Technical screen probe: Same code snippet. Tier 3 engineers almost always catch the bug. When you ask about their process, they’ll describe it: “I would have reasoned about this case before writing it.” They own the validation process, not just the outcome.
First-month deliverable risk: Very low. They’ll ship code that needs minimal revision. They won’t be the fastest engineer. They’ll be the most reliable.
Reading the Signals
Not every CV will be explicit enough to place someone into a tier. The technical screen is where you get clarity.
Ask candidates to walk you through their AI workflow on a real recent project. Listen for:
- How they validated the output
- What categories of work they use AI for
- What they don’t use it for
- How they debug when AI-assisted code doesn’t work
- How they handle feedback on their code
Tier 1 engineers will focus on speed and tool features. Tier 2 will focus on intention and iteration. Tier 3 will focus on judgment and risk.
Pay attention to what they volunteer about their validation process. Tier 3 engineers usually offer it unprompted. They think about it. Tier 1 engineers usually don’t mention it. They assume the tool is supposed to be the validator.
What a Reliable First Month Looks Like
Hire a Tier 3 engineer and you’ll see: thoughtful PRs, good questions about the spec, edge cases caught before review, on-time delivery with minimal rework.
Hire a Tier 2 engineer and you’ll see: faster PRs, more review cycles, some rework, but overall solid delivery.
Hire a Tier 1 engineer and you’ll see: very fast PRs, lots of review feedback, significant rework, pressure to accept and move on.
The difference compounds. By month three, a Tier 1 engineer will have cost you more calendar time in review and rework than they saved by drafting fast.
The Real Problem With “AI-Proficient”
The phrase doesn’t distinguish. It lumps together engineers who’ve added a tool into their workflow with engineers who’ve changed how they think about validation.
By 2026, every engineer knows how to use AI. What matters is whether they know how not to.
The ones who do are the ones who’ll stay longer, need fewer review cycles, and catch their own bugs before you do. They’re not the fastest shippers. They’re the most reliable ones.
When you’re reviewing CVs this week and you see “AI-proficient” for the hundredth time, that’s your signal to probe. Not on the tool. On the process. On what they validate. On what they don’t trust.
Based on 500+ engineering assessments across Southeast Asia, 2019–2025.