Meta Is Going to Let Job Candidates Use AI During Coding Tests

42 points by geox · 46 comments · 7/29/2025, 1:43:02 PM · wired.com

Comments (46)

dekhn · 52m ago
Interesting. I interviewed at Meta a decade ago and was put off by the process (inexperienced devs who memorized leetcode medium problems). But I still get daily requests from their recruiters to try again (I am a senior leader in ML engineering). I would need to spend a fair amount of time thinking about what it would be like to do an interview with LLMs.

But honestly, I'd rather spend that time figuring out how to use LLMs to interview people better (for example, I already had an LLM write a collaborative web editor with a built-in code runner, so I don't need to license CoderPad). I could see updating my prompt to have the coding agent add a text box for entering prompts during the interview. Either way, I still expect candidates to be able to explain what a hash table is in their own words.
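
Something like this minimal sketch is all it takes (an illustrative TypeScript sketch, not the actual tool I had generated; the /llm endpoint here is a hypothetical proxy to whatever model backs it):

    // Minimal interview pad: an editor, a "Run" button that executes the
    // candidate's JavaScript, and a prompt box wired to a hypothetical
    // /llm proxy endpoint (so the interviewer sees every prompt sent).
    // Real-time collaboration is omitted for brevity.
    const editor = document.createElement("textarea");
    const output = document.createElement("pre");
    const runBtn = document.createElement("button");
    runBtn.textContent = "Run";

    runBtn.onclick = () => {
      try {
        // Naive runner: eval is fine for a throwaway interview pad.
        output.textContent = String(eval(editor.value));
      } catch (err) {
        output.textContent = String(err);
      }
    };

    const promptBox = document.createElement("input");
    promptBox.placeholder = "Ask the LLM...";
    promptBox.onkeydown = async (e) => {
      if (e.key !== "Enter") return;
      // Send both the prompt and the current code as context.
      const res = await fetch("/llm", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt: promptBox.value, code: editor.value }),
      });
      output.textContent = await res.text();
    };

    document.body.append(editor, runBtn, promptBox, output);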

SkyPuncher · 1h ago
I do interviews for my company. We allow AI (and even encourage it). Use or non-use of AI has essentially zero correlation with candidate performance; if anything, I'd say there's a slight negative trend among those who use AI.

We have candidates build a _very_ simple fullstack app. Think something like a TODO app, but only like 2 core functions. This is basic CRUD 101 stuff.
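
For scale, here's roughly what I mean (an illustrative TypeScript/Express sketch, not our actual exercise; all names and endpoints are made up):

    // A TODO service with just two core functions: create and list.
    // Assumes Node with Express installed.
    import express from "express";

    type Todo = { id: number; text: string };

    const app = express();
    app.use(express.json());

    const todos: Todo[] = [];
    let nextId = 1;

    // Core function 1: create a TODO.
    app.post("/todos", (req, res) => {
      const todo: Todo = { id: nextId++, text: String(req.body.text ?? "") };
      todos.push(todo);
      res.status(201).json(todo);
    });

    // Core function 2: list all TODOs.
    app.get("/todos", (_req, res) => {
      res.json(todos);
    });

    app.listen(3000, () => console.log("listening on :3000"));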

I’ve seen a boatload of candidates use AI only to flame out after the first round of prompting. They literally get stuck and can’t move forward.

The good candidates clearly know their fundamentals. They’re making intentional decisions, telling the AI scoped actions to take, and reviewing pretty much everything the AI generates. They reject code they don’t like. They explain why code is good or bad. They debug easily and quickly.

LLMs are very, very talented, but context still matters. They can't build a proper understanding of the non-technical components. They can't solve things in the simplest way possible. They can't decide to tell the interviewer, "If performance is a concern, I'd do X, but it's a time-bound interview so I'm going to do Y. Happy to do X if you need it."

MichaelNolan · 9h ago
I'm curious what they will come up with. For smaller companies, leetcode-style tests probably aren't the best, but for large companies that hire tens of thousands of devs, they have a lot of good qualities. People who criticize leetcode usually prefer take-home projects, pair programming with a current employee, "just a conversation", or something else, but these all have serious drawbacks at scale. Despite its flaws, leetcode has a lot of benefits:

* Objective and clear grading criteria.

* Unlimited supply of questions.

* Easily time bound to 30 min or 1 hr.

* Legal - i.e., no worries about disparate impact.

* Programming language/framework agnostic.

sp527 · 7h ago
I can't even begin to imagine what sort of mind could observe the quality of Big Tech's software output and conclude that there's nothing wrong with their hiring process.
triceratops · 7h ago
Big Tech's software is faster and less buggy than the median software product.
artyom · 7h ago
Source?

Big tech software is successful and runs at scale.

I've got anecdotal experience in both worlds, and no: Big tech software isn't faster (what they have is usually way more compute resources), and the claim about "less buggy" gives me goosebumps.

triceratops · 7h ago
> Source?

All the software I use. Netflix works perfectly every time. HBO Max is garbage. Amazon's website and app are pretty good, although the actual goods sold are trash. Costco is exactly the other way around.

qwerpy · 7h ago
My problem with Big Tech software isn’t code quality, it’s the deliberate user-hostile decisions they make. Leetcode-style interviews appear to be doing fine at getting people who can write code.
paxys · 7h ago
Now look at their market cap.
saubeidl · 7h ago
It's almost as if capitalism wasn't a good system for deciding... well, anything, really.
thr0w · 8h ago
This makes sense in principle, but then how do you do technical evaluation? I'm generally most interested in hearing the candidate think out loud about the problem, and explain their approach. LLMs do the "thinking", and also explain the approach.
ndriscoll · 8h ago
As always, it requires a knowledgeable interviewer. Is the candidate using it as a slightly advanced autocomplete? That's fine. Are they relying on it to design things for them/can't articulate out loud what they're hoping it will produce for them/can't identify and articulate when and how it does something stupid or wrong or differs from how they were thinking of doing it? Then they don't know what they're doing.

I pair-program with people at work and we have Copilot on. We'll discuss what we're trying to accomplish and make little comments out loud along the lines of "sure, almost" or "not quite, Copilot" when it generates something. It's obvious whether someone is working in a way where they know what they want the tool to do.

tracker1 · 7h ago
It's funny, but I've been turning GitHub Copilot off more often than on... it's been absolutely fantastic with boilerplate SQL and other tedious things, but any time I've tried doing complex things it seems to just get in the way more than help.

As for interviews, I'd be happier if whatever tool they're using actually had working API code sense... so many of the tools I've had to use in interviews just had broken LSP results. I've also had issues when a given question requires specific domain knowledge of a third-party library.

marssaxman · 6h ago
For sure - I don't care about the code, I care about watching the applicant work, so I can decide whether I would want to work with them - that being the whole purpose of a job interview. If the "work" consists of delegating everything to a robot, I learn nothing and the interview is a waste of time.
add-sub-mul-div · 8h ago
It feels like a slow generational evolution beyond considering one employee different from another. They're biding their time until the AI can babysit itself.
bwfan123 · 9h ago
Meta interviews in the past have been straight regurgitation of leetcode. They want to measure your flow and typing speed (sarcasm).

So, the interview can now be 2 leetcode hards in 20 min. Earlier, it was typing solution code from rote memory. Now it is furious vibe-coding copy-pasta from one browser window to another.

More seriously, what will the new questions look like? In the age of LLMs, how does one measure ability objectively? Is it with extreme underspecification of the problem?

dangus · 9h ago
It’s going to be just like the Google/StackOverflow era where your value as an engineer is based on your resourcefulness.

The value of an employee who says "I don't know how to do that" or "I'll need to ask my coworkers for help" versus one who says "I am sure I can figure out just about anything by googling" is night and day, and I think the same is true with AI.

Half of the battle is going to be knowing what to ask for.

Lastly I’d like to point out that it makes general sense to test people on the real tools they’ll be using to get their work done. E.g., you wouldn’t get much value testing a bus driver on using a manual transmission if all your buses are automatic. Most corporate leaders are expecting and even demanding that their employees use AI tools to get their work done.

thefaux · 6h ago
"I don't know how to do that" is a great answer that I wish people would give more often.

> Most corporate leaders are expecting and even demanding that their employees use AI tools to get their work done.

Imagine for a second that I am an aspiring monopolist. Wouldn't a great way to achieve this be to make people believe that I am their agent when I am really their adversary? Would I really offer them, in good faith, a tool that they could use to compete against me? Or would I try to get them to accept a Trojan horse that destroys their profits from within? Once I have siphoned enough profit to significantly damage the business, I can come in, borrow money to buy the company I basically destroyed, and then sell off the parts to an ignorant buyer who doesn't realize how badly I have wounded this once-good business, or just write off the loss for tax purposes.

paxys · 7h ago
This is born out of necessity. I have seen some of the coding interview "cheating" tools in the market today and it is ridiculous just how good they are at helping a candidate fake not just the answer but their entire thought process. Leetcode style interviewing is basically dead, at least if done remotely.

The only way to fight this from the employer side is to embrace these tools and change your evaluation criteria.

supportengineer · 7h ago
This escalating war will only result in them hiring those most skillful at cheating.
msgodel · 7h ago
If you're good at interviewing I really don't think AI tools should make a difference tbh.
zamalek · 9h ago
Makes sense. It's like we were prohibited from using calculators in school when I was growing up. If you're testing for on-the-job performance then you should have on-the-job tools.
tracker1 · 7h ago
I'm mixed on this one... In terms of schooling, it's about understanding how math works... not how to operate a calculator.

Similar to graphing calculators vs basic scientific calculators... if you're trying to demonstrate you understand an equation, having the calculator do that for you doesn't show you understand what it is doing.

Whereas in a job, you're probably not going to need to rebalance a B-tree in any practical sense. For that matter, in my own experience, most actual development rarely has people optimizing their RDBMS usage in any meaningful way, for better or worse.

It's the difference between learning and comprehension on the one hand and delivering value on the other. Someone has to understand how the machine works to fix it when it breaks. And there are jobs at every level. Most of them aren't that constrained in practice, though.

zamalek · 5h ago
> Similar to graphing calculators vs basic scientific calculators... if you're trying to demonstrate you understand an equation, having the calculator do that for you doesn't show you understand what it is doing.

That isn't entirely true, and it brings up an important nuance. I understood algebra and calculus extremely well, but I routinely got half marks for fudging a calculation or somesuch, because multiplication has a foundation of rote knowledge: you need to memorize your ten-times table. To this day (at 37) I continue to use algebra to do basic multiplication in my head (7*6 = 7*5 + 7 = (7*10)/2 + 7 = 35 + 7 = 42).

Sure, using a graphing calculator to solve the problem isn't demonstrating understanding, but 90s kids were simply asking for a basic calculator with two registers (display and M) and basic math and "sci" (sin, cos, sqrt, etc.) operators.

Preventing the use of basic calculators does nothing to demonstrate knowledge of math; it actually hinders it.

tracker1 · 3h ago
There's nothing wrong with breaking down 7*6 the way you did; I do the same. It's not far from how Common Core (aka "new math") actually tries to teach it. It shows you understand the mechanics, whereas using a calculator to come up with the same answer does not... that only shows you know how to push buttons on a calculator.
zamalek · 3h ago
> using a calculator to come up with the same answer does not

I'm not sure why you're still talking about this. A basic non-graphing calculator (a 90s "scientific calculator") is not capable of doing all the work for you. All it did was basic things like add, mul, sub, sin, cos, etc. I am referring to one of these[1].

> It's not far from how common core, aka new math actually tries to teach it. It actually shows you understand the mechanics

I understand the mechanics of mathematics but still fared more poorly than I should have, simply because I too often messed up a basic op like multiplication. My point is that had I had access to a basic calculator, I would have scored significantly better in school. I went from Bs to As (80%, not sure what that is in GPA) in university by mere virtue of being able to use a simple calculator.

Again, not being able to use a [basic] calculator tests a signal that isn't actually important. Not being able to use Google/AI/whatever in your interview similarly tests an unimportant signal. The most important hiring signal for an SDE is actually how they operate in a team and how they communicate with stakeholders, which these coding tests don't measure.

[1]: https://upload.wikimedia.org/wikipedia/commons/4/4f/FX-77.JP...

tracker1 · 3h ago
I'm actually against the leetcode-style coding tests. They're exceedingly lopsided against more experienced developers who are further from school or early learning, where memorizing various data structures is/was more emphasized.

I prefer a paired interview scenario with actual questions about how you approach working through problems. As an interviewee, I do horribly with leetcode questions: there's no opportunity for clarifying questions, and some systems even penalize you for looking away from the screen. It's unnerving, and I refuse to do them any more.

I also don't mind the "here's a task/challenge that should take a half hour or so" format, where you simply create a solution, commit it to GitHub, etc.

As an interviewer, I prefer the walkthrough interviews as well... I usually don't get into pedantic exercises or language/platform nuances unless the applicant claims to be an "expert" in more than one or two subjects.

lesuorac · 9h ago
Well, while I agree with the second half of your statement, it only applies when you're testing on-the-job performance. If you want to test whether somebody has learned multiplication, then you do need to take away the calculator; otherwise you're just testing whether they can operate one.

I'm not sure when an employer should really care whether their cashier can add/subtract correctly versus just using the output from the machine. But in an education setting, where you're making a claim that students will learn X, you should be testing X.

psunavy03 · 9h ago
The entertaining thing, for at least Xennials and before, is how our teachers used to tell us "out in the world, you won't always have a calculator in your pocket!"

And yet here we are . . .

joshstrange · 9h ago
I'm currently running the interview process for the company I work for and I allow AI on the coding tests for the same reason I allow googling/etc:

I want to simulate, as closely as possible, the real environment you will be coding in if you come and work here.

My "rules" on AI/LLMs at work are that you can use whatever tools you want (much like SO before), but any code you commit, you are responsible for (also unchanged from before LLMs). I never want to hear "the LLM wrote that" when asked how something works.

If you can explain, modify, build on top of the code that an LLM spits out then fine by me. We don't require LLM usage (no token quotas or silly things like that) nor do we disallow LLMs. At the end of the day they are all tools.

I've always found leetcoding, whiteboarding, memorizing algorithms, etc., to be a silly form of hazing and a bad indicator of how well the employee will perform if you hire them. In the same way, I still think my college was stupid for making us write a C program on paper, with all the correct headers/syntax/etc., for an exam, and for giving me 90/100 on my final SQL exam because I didn't end my otherwise perfect queries with semicolons.

freedomben · 9h ago
I run our interviewing process as well and fully agree with you. An interview process that most closely mirrors "real work" gets much better results. Some of the best people we've had were really terrible at algorithms and leet code, but wrote quality code (that they tested!), had strong work ethics, were willing to be around off-hours for the occasional production outage, took ownership/accountability for their tasks, and were enjoyable people to work with/around. It's good to have at least one person on the team who is good with algorithms and stuff for those times when that comes up, but having that be a requirement for everyone on the team is an anti-pattern IMHO
joshstrange · 9h ago
I completely agree.

We've had almost every candidate (even some we _didn't_ hire) thank us for the interview process and how we went out of our way to make it stress-free and representative of the job. I'm not interested in tricking people or trying to show off "look how smart we are and how hard our test was"... puke. I want to work with solid performers who don't play games, why would I start that relationship off by playing a bunch of stupid games (silly interview tests/process)?

And here's the thing, even with our somewhat easier/less-stress testing process, we still got high quality hires and have had very few cases where we have later had to let someone go due to not being able to do the job. This isn't a "hire fast, fire fast" situation.

jghn · 9h ago
I agree with you that the real rubric is that I'd expect the candidate to be able to explain everything they did and why. For the most part I don't care how they got there.

I was thinking about this recently in a conversation about whether or not the candidate should have a full screen share so that interviewers could see what they're looking up, etc. I realized I'd reached the point where I now trust my fellow interviewers less than I do the candidate on these bits. Personally I think it's nice to see how people are leveraging tools, but too often I see interviewers ding candidates anyway on the specifics.

What I've found across multiple companies is that when coding exercises are "open book" like this, other interviewers *are* actively judging people based on what they're googling, and these days LLMing. If you're going to allow googling but then fall back on "I can't believe they had to look *that* up!", that's not how it works.

sokoloff · 8h ago
I think “how to write for loop in $LANGUAGE” is something that it’s fair to judge a candidate on if they claim (or the job requires) recent experience in $LANGUAGE.
reverendsteveii · 9h ago
Interviews should be about determining whether you can use the tools available to deliver the desired product, not some sort of purity test of whether you can build without tools. Also, I want to see you interact with AI, because as of right now it's both an incredible and deeply flawed tool, and your ability to recognize when it's about to walk you off a cliff is of increasing importance as we discover the limits of what it can do.
bgwalter · 9h ago
Zuckerberg has no clue about software development. He always wants young people, huge bullpen offices, and moonshot projects that fail, like the Metaverse.

Facebook's open source software does not have great code quality. In some projects that have always been a huge mess, they are now adding claude.md files to guide their beloved "AI". They never added such files for humans before.

I think Facebook software is a lost cause, where it does not matter whether the weekly rewriting is performed by LLMs or by kLOC-driven humans.

supportengineer · 8h ago
He is a businessman. All businessmen make money by exploiting a resource.
dangus · 9h ago
And yet they’re wildly successful, including the wide level of adoption of their open source projects like React.

Meta’s profit per employee is over $250,000, higher than Qualcomm. There is no Meta competitor in any of their verticals that has a larger customer base.

It seems to me that your definitions of "software quality" and "lost cause" are factually wrong by the metrics of success that matter.

And in any event, it is an engineer’s tunnel vision fallacy to believe that software quality is the most important factor to stakeholders. People will prefer buggy/slow software that solves their problem over fast/stable software that fails to solve their problem.

thefaux · 6h ago
And they are almost never offered fast/stable software that solves their problem. When they are, it comes with aggressive FUD from the competition, or the incumbents just lobby the government to mandate the incidental complexity of their own software, so that competitors can't even enter the market without being dragged down to their level.
mepian · 8h ago
> higher than Qualcomm

That’s a pretty low bar for a software company.

gronglo · 7h ago
Why is Meta still interviewing SWEs, is my question. Surely, at this point, an agent can beat any human candidate at any task. What am I missing?
paxys · 7h ago
You are missing the fact that an agent cannot beat any human candidate at any task.
gronglo · 7h ago
Mark Zuckerberg himself predicted that AI would replace all mid-level engineers at Meta by the end of this year, with senior engineers continuing to work on tasks such as planning and architecture. With the release of Claude 4 it feels like, if anything, he was too conservative with his prediction — we're already there, and Mark must know it. So what's he playing at?
paxys · 7h ago
Look at Zuckerberg's actions, not what he says in podcasts to shill his AI tools and boost his company's stock price. Meta is currently hiring hundreds of software engineers every month and paying $200-300k/yr for entry level and $700k+ for senior roles. Why do you think that is?
saubeidl · 7h ago
AI snake oil salesman talks up AI snake oil, news at 11.