How to hire the right people in the AI era
We are moving from an economy of creation to an economy of selection, where operational judgment is the main skill that the machine cannot fake.
Welcome to issue #97 of FutureBrief. Every week I share practical insights on AI & automation trends, tools, and tutorials for business leaders. If you need support on your technological journey, join our community and get access to group chat, Q&As, workshops, and templates.
Ninjabot delivers ready-to-deploy sales AI and automation tools that allow business operators to stop doing busywork and start managing leverage.
I see many founders rushing to hire AI experts and prompt engineers.
They see a resume listing ChatGPT proficiency, agentic workflow architecture, or RAG implementation experience and assume this person is the key to automating their business.
They imagine a new hire who will come in, wave a magic wand over their operations, and make the manual work disappear.
But in reality, hiring a junior employee with powerful AI tools often creates more work for the founder, not less.
The reason is simple. AI allows inexperienced people to generate mediocre work at infinite speed.
If you hire someone who lacks judgment, they will flood your inbox with hallucinations, bad code, and generic copy. They don’t just make mistakes; they scale them. And you spend your weekends cleaning up the mess.
We need to stop hiring for ChatGPT experience. We need to start hiring for the one thing AI cannot fake: operational judgment.
The economy of selection
Gartner forecasts that by the end of 2026, half of all organizations will require AI-free skills assessments. They are correct to do so. We are moving from an economy of creation to an economy of selection.
We used to hire a writer based on how well they could draft a blog post from scratch. We used to hire a junior developer based on their ability to write syntax. Now, the LLM drafts the post or writes the function in four seconds.
The human’s job has shifted entirely. The new job is not to create. It’s to verify, edit, and connect. It is to ask: Is this actually true? Does this align with our brand voice? What happens to our data privacy if we paste this customer log into this window?
Recent data backs this up. The Human Capital Premium study from earlier this month showed that workers who augment AI (fixing and guiding it) command a 56% wage premium over those who are simply replaced by it.
Your goal is to hire operators, not task doers.
A task doer takes a prompt and gives you the result. An operator asks, “Why are we running this prompt at all?” and “What breaks if the API fails?”
What to look for in the era of AI
If AI experience is a vanity metric, what are the actual signals of a high-value operator in 2026? Based on my experience, there are three critical traits you must interview for.
Radical skepticism
We know that AI has a persistent problem with accuracy. A forensic audit of AI research papers found a 17% phantom citation rate (references that look real but don’t exist). This is a massive liability if you are using AI to generate precise outputs like client reports or legal documents.
The most valuable employee is the one who treats AI output as guilty until proven innocent. You are looking for the person who checks the source, verifies the math, and acts as the firewall between the AI’s hallucinations and your company’s reputation.
Cognitive diversity
This is the most overlooked competitive advantage in the AI era. While everyone is trying to hire the same standardized efficiency, the data suggests you should look elsewhere. The Human Capital Premium study found that neurodiverse team composition is a stronger predictor of innovation output than raw AI infrastructure spending.
Why? Because AI is the ultimate normalizer. It produces the average of the internet. To break out of that average, you need brains that work differently.
People with ADHD, for example, often excel at the hyper-focus and pattern recognition required to debug complex agent workflows. In a world where AI handles the routine, the weird thinkers become the asset.
Systems thinking
You don’t want someone who can just run a prompt. You want someone who understands the consequences. AI models update constantly, and agent behaviors drift. A good hire anticipates this. They don’t just build the automation. They ask, “What happens to this data in six months? What if the API stops working for a day?”
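To make that failure-aware mindset concrete, here is a minimal Python sketch of the pattern an operator reaches for: rather than assuming the API is always up, the automation retries briefly and parks failed records in a dead-letter list for manual replay, so a one-day outage doesn’t silently lose data. The function names (`push_with_retry`, `process_leads`) are hypothetical illustrations, not any specific vendor’s integration.

```python
import time

def push_with_retry(push, lead, retries=3, backoff=0.5):
    """Attempt a push to the external system a few times; True on success."""
    for attempt in range(retries):
        try:
            push(lead)
            return True
        except ConnectionError:
            # Back off before retrying; the API may only be briefly flaky.
            time.sleep(backoff * attempt)
    return False

def process_leads(leads, push, dead_letter):
    """Push each lead; park failures for manual replay instead of losing them."""
    for lead in leads:
        if not push_with_retry(push, lead):
            dead_letter.append(lead)
    return dead_letter
```

The point is not the specific code; it’s that the dead-letter list exists at all. The task doer’s version simply crashes (or worse, drops the lead) the day the API goes down.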
My current interview playbook
To find these operators, I’ve stopped using standard interview questions. Asking someone “Tell me about a time you overcame a challenge” invites a rehearsed script. Here is how I test for the skills above.
The Hallucination Trap
In the interview, I give the candidate a short report that I know was generated by AI and contains a subtle factual error: perhaps a hallucinated statistic or a nonexistent competitor. I tell them, “Here is a research brief generated by our internal AI. Please review it and prepare it for the client. You have 15 minutes.”
If they fix the formatting, smooth out the grammar, and say “It looks good,” they fail. They are trusting the machine too much.
I want the candidate who stops and says, “I checked this source, and I can’t find it. The AI made this up.”
The Black Box test
Most candidates are great at answering clear questions. But AI is also great at answering clear questions. To find an operator, you must give them an unclear situation.
I give them a task that looks something like this: “We have a client who wants to automate their lead flow. They use a CRM I won’t name, and they get leads via email. Propose a solution.”
I have deliberately left out critical information: the volume of leads, the specific CRM, the budget, and data privacy constraints.
The AI expert uses ChatGPT to write a generic proposal for HubSpot and Zapier. They guess.
The operator replies with questions. “Is it 10 leads or 10,000? Is the data GDPR sensitive? What is the budget?”
AI guesses. Operators verify.
The shadow day
Finally, I test their real skills with a paid shadow day, because anyone can sound smart on a Zoom call. I need to see them handle friction. So I hire them for a day, give them a messy, real problem, like a folder of disorganized invoices, and watch their workflow.
Do they spend four hours trying to code a complex Python script with ChatGPT that never actually runs? That’s the complexity trap.
Do they manually type it all out? That’s the inefficiency trap.
Or do they build a simple, boring automation and manually check the outliers? That’s the operator approach.
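To show what “simple, boring automation plus manual outlier checks” might look like in practice, here is a minimal Python sketch: auto-process invoices whose amounts look typical, and route anything unusual to a human. The function name and the median-based threshold are illustrative assumptions, not a production rule.

```python
from statistics import median

def triage_invoices(invoices, tolerance=3.0):
    """Split invoices into auto-processed and flagged-for-human-review.

    invoices: list of dicts like {"vendor": ..., "amount": ...}.
    Anything whose amount exceeds `tolerance` times the median amount
    goes to a human instead of straight into the ledger.
    """
    mid = median(inv["amount"] for inv in invoices)
    auto, review = [], []
    for inv in invoices:
        if mid > 0 and inv["amount"] <= mid * tolerance:
            auto.append(inv)      # routine: let the automation handle it
        else:
            review.append(inv)    # outlier: a person checks it
    return auto, review
```

Twenty lines, no agents, no frameworks. The leverage comes from the split itself: the machine handles the 95% that is boring, and human judgment is reserved for the 5% where mistakes are expensive.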
The pilot and the plane
Once you hire them, the relationship with AI needs to be codified. In my offer letters, I now include an augmentation agreement.
It establishes that the human is 100% responsible for the output. “The AI made a mistake” is not a valid excuse.
It also sets strict boundaries on data privacy. No proprietary data goes into public models.
This frames the relationship correctly. You are the pilot; the AI is the autopilot. If the plane crashes, the pilot is at fault, not the software.
The most valuable skill in 2026 isn’t coding. It’s the ability to look at a perfect-looking AI output and say, “Wait, that doesn’t make sense.”
Hire for the skeptics. They will save you a fortune.
— Yuri
🛠️ Builder’s vault
If you are a Premium member, you can find this entire process (the Black Box email templates, the Hallucination Trap scoring system, and the Augmentation Agreement) in the Builder’s Vault.
Forward this to a colleague who’s wrestling with manual processes. They’ll thank you.
What’s your take on today’s topic? Did you like it, or is there something I missed?
Building modern tech for SMBs? Reach 20,000+ decision-makers who are actively implementing AI, automation, and no-code tools. Become a sponsor.