The Warning HR Leaders Need to Hear
Joseph Fuller, Professor of Management Practice at Harvard Business School and co-lead of the Managing the Future of Work project, doesn't mince words:
"Historically in HR functions, the correct stance is to be cautious. These are people and their lives we're dealing with. But given the productivity unlock that AI will create, the greater risk will be going slower than your competitor."
In a recent episode of The TechWolf Podcast, Fuller laid out a case that should matter to every Talent Acquisition leader evaluating AI tools: the industry's instinct for risk avoidance, long considered a virtue, has become a liability.
For those of us building AI for hiring, Fuller's research validates something we've observed directly: companies waiting to see how AI plays out are falling behind the ones learning by doing.
AI as Fundamental Infrastructure
Fuller compares AI to the arrival of alternating current in the late 19th century—not an incremental improvement, but a transformation of how work gets done.
That analogy is worth sitting with. Alternating current didn't just make factories slightly more efficient. It enabled entirely new industries, new ways of organizing work, new categories of jobs that didn't exist before.
Fuller's argument is that AI will do the same. The question isn't whether HR functions will adopt AI. The question is whether they'll adopt it fast enough to capture the productivity gains before competitors do.
"They with the most data that use it fastest win."
This isn't abstract strategy. It's a statement about compounding advantages. Organizations that experiment with AI in talent management now will learn faster, iterate faster, and build capabilities that late adopters will struggle to replicate.
The Paradox of HR Caution
Fuller identifies a paradox at the heart of HR's current moment.
For decades, HR has been wired for caution. And for good reason: decisions about people's livelihoods deserve care. Moving slowly, validating thoroughly, avoiding mistakes—these instincts protected employees and organizations alike.
But Fuller argues that the calculus has changed. In an environment where AI can unlock 40%+ productivity gains across hundreds of occupations, the conservative approach is no longer conservative. It's risky.
"Caution is comfort dressed up as competence."
This framing—from the TechWolf team summarizing Fuller's argument—captures something important. The instinct to wait for more data, more validation, more proof before adopting AI feels responsible. But if your competitors are learning while you're waiting, caution becomes its own form of competitive disadvantage.
What This Means for Hiring AI
Fuller's research has direct implications for how talent acquisition teams should think about AI adoption.
The Case for Speed Over Perfection
Traditional enterprise software adoption follows a familiar pattern: extensive evaluation, pilot programs, phased rollouts, change management. This approach made sense when the cost of a bad decision was high and the cost of waiting was low.
AI changes that equation. The technology improves rapidly. The competitive landscape shifts quickly. The organizations that learn fastest gain advantages that compound over time.
This doesn't mean abandoning diligence. But it does mean recognizing that the cost of waiting has increased dramatically. A six-month evaluation process isn't thorough—it's expensive.
The Role of Human-in-the-Loop
Fuller's argument might seem to favor fully automated AI systems. Move fast, remove friction, let the algorithms work.
But that's not what the research actually suggests.
Fuller emphasizes that AI should augment human decision-making, not replace it. The productivity gains he describes come from humans working more effectively with AI assistance—not from removing humans from the loop entirely.
This aligns with what we've built at Virvell from day one. Our platform is designed around a simple principle: AI should collect information and surface patterns. Humans should make decisions.
We don't score candidates. We don't auto-reject. We don't make hiring recommendations. Those constraints aren't limitations—they're design choices that reflect the actual capabilities of current AI systems.
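To illustrate the shape of that constraint, here is a hypothetical sketch of a screening result object. The names are illustrative only, not Virvell's actual data model: the point is that the structure carries collected evidence and surfaced findings, and deliberately has nowhere to put a score, a ranking, or a reject decision.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SurfacedFinding:
    """One observation for a human reviewer to investigate."""
    source: str        # e.g. "pre-screen interview", "reference call"
    summary: str       # what the AI noticed, in plain language
    needs_follow_up: bool = True

@dataclass
class ScreeningResult:
    """Everything the system returns for one candidate.

    There is intentionally no score, ranking, or accept/reject field:
    the AI surfaces evidence, the recruiter makes the decision.
    """
    candidate_id: str
    findings: List[SurfacedFinding] = field(default_factory=list)
```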
Fuller's research suggests that organizations building around human-AI collaboration will outperform both those who resist AI entirely and those who delegate decisions to algorithms they don't fully understand.
Cross-Verification Over Single Scores
One of Fuller's key points is that the value of AI isn't in making singular judgments. It's in processing information across multiple sources faster than humans can.
This has direct implications for hiring workflows.
Traditional screening tools—whether manual or AI-powered—tend to generate single outputs: a score, a ranking, a pass/fail determination. The implicit promise is that the AI "knows" something meaningful about candidate quality.
Fuller's framework suggests a different approach: use AI to detect patterns and discrepancies across multiple data points, then let humans interpret what those patterns mean.
Consider what becomes possible when AI compares information across a pre-screen interview, reference conversations, and background verification:
- Does the candidate's self-reported experience match what their references describe?
- Are there inconsistencies in stated tenure or responsibilities?
- Do the candidate's claims about their role align with how their former manager describes it?
These are questions where AI adds genuine value—not by making judgments, but by surfacing signals that humans should investigate. The interpretation of those signals belongs with people who can ask follow-up questions and exercise contextual judgment.
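To make that concrete, here is a minimal sketch of how those three comparisons might be expressed in code. The field names and the three-month tolerance are illustrative assumptions, not any vendor's actual API; what matters is that the output is a list of plain-language flags for a recruiter to follow up on, never a score or a rejection.

```python
def cross_verify(interview: dict, reference: dict, background: dict) -> list[str]:
    """Compare the same claims across sources and flag discrepancies.

    Returns plain-language flags for a recruiter to investigate;
    it never scores, ranks, or rejects the candidate.
    """
    flags = []

    # Tenure: does the candidate's stated tenure match the background check?
    if abs(interview.get("tenure_months", 0) - background.get("tenure_months", 0)) > 3:
        flags.append("Stated tenure differs from verified tenure by more than 3 months.")

    # Role title: does the former manager describe the role the same way?
    if interview.get("job_title") and reference.get("job_title"):
        if interview["job_title"].lower() != reference["job_title"].lower():
            flags.append(
                f"Candidate said '{interview['job_title']}', "
                f"reference said '{reference['job_title']}'."
            )

    # Responsibilities: claims the reference did not mention are worth a follow-up question.
    unconfirmed = set(interview.get("responsibilities", [])) - set(reference.get("responsibilities", []))
    for item in unconfirmed:
        flags.append(f"Responsibility '{item}' was not mentioned by the reference.")

    return flags
```

Each flag maps directly onto a follow-up question a recruiter can ask, which is where the contextual judgment stays human.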
The Implications for Vendor Evaluation
Fuller's research suggests several questions worth asking when evaluating AI hiring tools:
Does the vendor position speed as a feature or a risk? Vendors that emphasize extensive pilot programs and phased rollouts may be optimizing for their own comfort rather than your competitive position.
Is the AI designed to replace human judgment or augment it? Systems that generate scores and auto-reject candidates are betting that AI can make reliable hiring decisions. Current AI capabilities don't support that bet.
Does the tool enable learning? Fuller's argument is that the organizations that learn fastest win. AI hiring tools should generate insights that make your team smarter over time—not function as black boxes that obscure how decisions get made.
Can it detect patterns across multiple data sources? AI's strength is processing information across touchpoints that humans would struggle to compare manually. Tools designed around single data sources miss this capability entirely.
The Road Ahead
Fuller's message is urgent but not alarmist. He's not predicting mass displacement or recommending reckless adoption. He's making a more nuanced argument: the organizations that approach AI strategically—moving fast, learning continuously, keeping humans in the loop—will outperform those who either resist or over-automate.
For talent acquisition specifically, this means:
- Start experimenting now. The cost of waiting exceeds the cost of imperfect early attempts.
- Keep humans in the decision-making loop. AI augments judgment; it doesn't replace it.
- Use AI for pattern detection, not scoring. Play to the technology's actual strengths.
- Build learning into your process. The goal isn't just efficiency—it's getting smarter over time.
Fuller's research validates what we've observed building Virvell: the future of hiring AI isn't about replacing recruiters with algorithms. It's about giving talent teams superpowers—the ability to process more information, catch more discrepancies, and make better decisions faster than they could alone.
The organizations that figure this out first will have advantages that compound. The question is whether you'll be learning alongside them or trying to catch up later.
Ready to see what human-in-the-loop AI screening looks like?
Virvell bundles AI pre-screen interviews, voice AI reference checks, and background verification into one platform—with cross-module intelligence that catches discrepancies, and zero automated scoring or candidate rejection.
Book a Demo

Referenced: Joseph Fuller on The TechWolf Podcast, "Why Skills-Based HR Is Still Stuck." Fuller is Professor of Management Practice at Harvard Business School and co-leads the Managing the Future of Work project.