Ethical questions aside, if you interview engineers remotely, how are you dealing with the proliferation of AI-assisted interview cheater software?
Have you had someone pass an interview and then later they can barely perform?
I can tell whether someone is good or bad in an interview from how they talk and their reasoning more than from their coding skills.
This is only a problem if you are really really bad at conducting interviews. I have had interviews in the past that treated me like a child asking basic code literacy questions. These kinds of interviews aren't helpful to anybody.
Instead, the way to get past this foolishness is to ask open-ended questions that expect precise answers even though the questions themselves are not precise. That presents too much variance for the AI. For example: talk to me about the methods of your favorite Node library. In that case the candidate has to pick, on the fly, from any of the libraries that ship with Node and immediately start talking about what they like about certain methods and how they would use them.
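For instance, a candidate who picked node:path should be able to talk through methods off the top of their head, something in the spirit of this (just a sketch of the built-in API, nothing exotic):

    // A few methods from Node's built-in node:path module a candidate might discuss
    import * as path from "node:path";

    const file = path.join("/var/www", "app", "index.html");   // joins segments with the platform separator
    const ext = path.extname(file);                             // ".html"
    const dir = path.dirname(file);                             // "/var/www/app"
    const abs = path.resolve("config", "..", "package.json");   // absolute path resolved from the cwd
    console.log(file, ext, dir, abs);

What matters is the why (when they would reach for resolve over join, for example), not recall of the exact signatures.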
Another example: Tell me about full duplex communication on the web. AI will immediately point you to WebSockets, but it won't explain what full duplex means in 3 words or less or why WebSockets are full duplex and other solutions aren't.
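To make "full duplex" concrete: one connection, both sides free to send at any time without waiting for the other. A minimal sketch in TypeScript, assuming the common ws package on the server (the browser side is the standard WebSocket API):

    // Server pushes on its own schedule; the client can also send whenever it likes.
    import { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 8080 });
    wss.on("connection", (socket) => {
      const timer = setInterval(() => socket.send(`tick ${Date.now()}`), 1000);
      socket.on("message", (msg) => console.log("client said:", msg.toString()));
      socket.on("close", () => clearInterval(timer));
    });

    // Browser side (same single connection underneath):
    //   const ws = new WebSocket("ws://localhost:8080");
    //   ws.onmessage = (e) => console.log("server said:", e.data);
    //   setInterval(() => ws.send("ping"), 1500);

Contrast with plain HTTP or long polling, where the server only gets to speak when the client asks.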
Another example: Given a single page application what things would you recommend to get full state restoration within half a second of page request? AI barfs on that one. It starts explaining what state restoration is, which doesn't answer the question.
In other words AI is not a problem so long as you don't suck as an interviewer.
I can vouch for this. Once you're at a senior or lead level these things are easy to weed out.
I used to ask a simpler (for AI) question. The candidate reads out the first sentence of the AI's answer. By this time I'd have already established that the candidate is not genuine. Our interview process lets us ride out the interview as a courtesy, and also to try to extract something from the candidate that the company could use.
Anyway, once they read out the first sentence from the AI with utmost sincerity, I'd follow up with deeper questions into the topic at hand. 99% failed to answer the second question well. The ones who let me ask the third and fourth questions are devs who still have their original thinking hats in place but just use AI out of nervousness, or who generally don't interview well. Those we'd explore further and suggest for lesser roles/alternate streams etc. This is all my experience; others' mileage may vary.
Are those good interview questions? It's easy to have done many things in web dev, without having touched node, web sockets, or state restoration. I had to look up the last one: Couldn't find much relevant from an internet search. Sounds like dumping frontend state into local storage and/or the app DB maybe, then reversing? TCP would be another way to have full duplex comm over the web; I would have gone for that prior to web sockets.
More to the point: Those are all things that you could go from being unequipped to answer, to answering well quickly from research. So, it sounds like these questions would select for people who have used these technologies and techniques, vice for good developers.
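For what it's worth, the local-storage guess above would look roughly like this; just a sketch, framework-agnostic, with the state shape invented for illustration:

    // Persist app state locally and rehydrate it synchronously at boot,
    // so the first render already has it (the AppState fields are made up).
    interface AppState { route: string; filters: Record<string, string>; scrollY: number; }

    const KEY = "app-state-v1";

    function saveState(state: AppState): void {
      localStorage.setItem(KEY, JSON.stringify(state));
    }

    function restoreState(): AppState | null {
      const raw = localStorage.getItem(KEY);
      if (!raw) return null;
      try { return JSON.parse(raw) as AppState; }
      catch { return null; }  // corrupt or stale payload: fall back to a cold start
    }

    const defaultState: AppState = { route: "/", filters: {}, scrollY: 0 };
    let currentState: AppState = restoreState() ?? defaultState;

    // Save on pagehide (or on every meaningful change), restore before first paint.
    window.addEventListener("pagehide", () => saveState(currentState));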
"Couldn't find much relevant from an internet search. Sounds like..."
Maybe that is why AI barfs on it. But, yes, performance is something users universally care very much about even if many businesses aren't willing to accept that, so the question remains highly relevant. Another way for a good interviewer who has a background in performance to approach this is to start asking performance questions that a candidate could answer from AI, and then asking the candidate why some performance techniques work better than others and by how much. Then you can really see whether they have a high-confidence AI answer that they cannot qualify.
At any rate, AI answers aren't a challenge for people who are confident enough in the craft to ask more qualified questions and determine honesty from the resulting nonverbal signals.
>Given a single page application what things would you recommend to get full state restoration within half a second of page request? AI barfs on that one. It starts explaining what state restoration is, which doesn't answer the question.
Deepseek gave me a good answer to that.
I think they're good enough at this point to fool interviewers up to at least the senior level. Especially for these sorts of questions that encourage vaguish rambling answers about general knowledge.
Hey! Creator of stealthinterview.ai here.
After working for 15 years at FAANG, startups, and no-name companies, I have conducted more than 100 interviews across all levels, mostly in engineering.
In my experience with technical interviews specifically, there are 4 types of candidates:
- the memorizer
- the mathematical brain
- the project builder
- the coding enthusiast that knows a language well but can't do algorithms
Most of the time, I encountered candidates in buckets 3 and 4. Many showed debugging skills and communication skills, but lacked the right answer to trapping rainwater.
I was told to reject candidates who couldn’t pass these problems mainly on the grounds of solved or not solved, even if they clearly communicated.
The only reason I got into FAANG companies myself is that I was a good memorizer. I couldn't solve most of these problems today without months of prep.
In the end, I left my FAANG job recently because I realized it's going to be the same or worse at any other company, because internally it's all the same B.S. once you make it in. Sure, you'll get a fat salary, but it's a slow grind.
Instead, I chose to build things.
Is the interview process today bad? I think so.
What can candidates do about it? Take it into their own hands and play the game, or get played.
What is the alternative? Depends. There are many companies that don't do crazy algorithm interviews and pay really well. In my opinion, if you are hiring hundreds of engineers per year, take-home assessments and reviewing them is literally a job on its own. Making candidates build an app from scratch and deploy it, test it, present it? Maybe, but that's certainly opinionated, not binary.
New grads don’t have experience, but experienced engineers do. You shouldn’t need to ask people with 10 years of experience about trapping rainwater from LC. There are so many other things to ask, discuss, and gauge experience, scope and depth.
Personally, if I'm hiring junior engineers and they have the ability to use an LLM to solve the problem and explain it, I see no problem with that. When I worked for Amazon, knowing the kind of development that happens there, someone who is able to code with an assistant could do the necessary work faster than someone who codes without one.
This is the same type of test as the take-home tests a lot of companies give, where you could already use Google or Stack Overflow to do research.
If I was hiring for my own project and needed people that can problem solve, I would be asking more involved questions that LLMs could not solve, because LLMs will just give you the standard, most commonly used solution.
As an analogy to another industry: LLMs can't tell you how to design a mountain bike correctly at a level of detail that matters, because there is no guide online that tells you how to do this.
If an AI can pass your interview and an AI can’t do the job you are hiring for, there is by definition something wrong with your interview process
I strongly believe technical interviews should try to mirror a pair programming session on a problem as if it was work, rather than a quiz or interrogation type format.
If someone said during such a session (cameras on, screen shared) that they wanted to do something like google some documentation, I wouldn't see that as a problem. Obviously it's a problem if someone just googles for a solution and pastes it in.
I see LLMs in the same way. No issue with them using one to do something like take the pseudocode they wrote in front of me and turn it into an implementation. Especially if they can talk through that code and make suggestions about further changes and so on, clearly showing they understand what's going on.
The real concern is going to be when sophisticated agents can impersonate (clone voice and video) in a convincing way, as well as the capabilities to see the screen and type away as if it was a real person, and they're responding to you in real time.
If the software is based on the models made by large companies, they'll be happy to give you recipes. They would refuse if the coding request mentioned something about cracking passwords or dumping credit cards. And all of them will have a meltdown if you try to ask them to say something politically incorrect (what a bizarre world that would be if that became the new captcha system for humans trying to figure out if they're wasting their time with a fake human).
That said, this is going to be a cat and mouse game. There will be nothing to stop people from fine-tuning models to get around being "jailbroken" into revealing themselves as LLMs. Perhaps the best approach is taking the time to research problems that cause "vibe coding" to completely fall down. And those are likely to be things that are novel and haven't been littered all over the internet. That has a knock-on effect of making such interviews a bit more interesting for the people doing them too.
Leetcode style questions no longer work. If it's solvable with a few functions within 1 hour, AI will solve it in 5 minutes.
If the job is cutting down trees, you can't measure someone by how long they take to cut one tree, but by whether they have the stamina to cut through multiple trees.
Take home assignments work, and the good news is they can be shorter now. 1 day or 4 hours of work is enough of a benchmark. Something like a Wordle clone is about the right level of complexity.
Things we look for:
1. Do they use a library? Some people are a bit too proud to do things the easy way. GenAI will make up a list of words, which is both wasteful and incomplete when they could just find a dictionary of words. Do they cut the dictionary down to the right size? It should be only the words, not definitions (a rough sketch of what I mean follows this list).
2. Architecture? What do they normally use? How do the parts link to one another? How do they handle errors?
3. Do they bring in something new? AI will usually use a 5-year-old tech stack unless you give it a specific one, because that's around the average age of the code it was trained on. If they're experienced enough to tell the AI to use the newer tech, they're probably experienced enough.
4. Require a starting commit (probably the gitignore) and ask them to add reasonably sized commits. Classic coding should look a bit like painting. Vibe coding looks like sculpting, where you chip bits off. This will also catch more critical cheating, like someone else doing the work on their behalf - the commits may be from the wrong emails, or you'll see massive commits where nothing gets chipped off.
5. There are going to be people who think AI is a nuisance. Tests like this will help you benchmark the different factions. But don't give them so much toil that it puts the AI users at a large advantage and don't give overly complex "solved" questions that the AI can just pull out from training.
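On point 1, cutting the dictionary down is only a few lines; a rough sketch (the words.txt input name and one-entry-per-line format are just assumptions):

    // Trim a raw dictionary down to what a Wordle clone needs:
    // only the five-letter words, lowercased, deduplicated, no definitions.
    import { readFileSync, writeFileSync } from "node:fs";

    const raw = readFileSync("words.txt", "utf8");
    const words = Array.from(new Set(
      raw.split("\n")
        .map((line) => line.split(/\s/)[0].trim().toLowerCase())  // keep the word, drop any definition text
        .filter((w) => /^[a-z]{5}$/.test(w))
    ));

    writeFileSync("wordle-words.json", JSON.stringify(words));
    console.log(`kept ${words.length} five-letter words`);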
Can you walk me through what type of architecture someone could express in a Wordle clone built in four hours?
We basically look for this: https://docs.flutter.dev/app-architecture/guide
It's similar across all mobile platforms. People call it MVVM, Bloc, MVP, etc. But we want to see the pattern of UI-repo-service and unidirectional data flow. This is three layers and it can be as little as 3 files. If someone can grasp that, it's good enough.
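A rough sketch of those three layers for the Wordle take-home, written in TypeScript here rather than Dart just to keep it readable; the class names and API endpoint are made up:

    // Service layer: where words come from. Swapping dictionary -> API touches only this layer.
    interface WordService {
      wordOfTheDay(): Promise<string>;
    }

    class LocalDictionaryService implements WordService {
      constructor(private words: string[]) {}
      async wordOfTheDay(): Promise<string> {
        return this.words[new Date().getDate() % this.words.length];
      }
    }

    class ApiWordService implements WordService {
      async wordOfTheDay(): Promise<string> {
        const res = await fetch("https://example.com/word-of-the-day");  // assumed endpoint
        return (await res.json()).word;
      }
    }

    // Repository layer: owns the game state; the UI reads it but never writes it directly.
    type LetterResult = "hit" | "present" | "miss";

    class GameRepository {
      private guesses: { guess: string; result: LetterResult[] }[] = [];
      constructor(private service: WordService) {}

      async submitGuess(guess: string): Promise<void> {
        const answer = await this.service.wordOfTheDay();
        // Simplified scoring (duplicate-letter handling omitted).
        const result = [...guess].map<LetterResult>((ch, i) =>
          ch === answer[i] ? "hit" : answer.includes(ch) ? "present" : "miss");
        this.guesses = [...this.guesses, { guess, result }];
      }

      get state() { return { guesses: this.guesses }; }
    }

    // UI layer (view model): takes input, asks the repo, re-renders. Data flows one way.
    class GameViewModel {
      constructor(private repo: GameRepository) {}
      async onGuess(guess: string): Promise<void> {
        await this.repo.submitGuess(guess);
        this.render();
      }
      render(): void {
        for (const { guess, result } of this.repo.state.guesses) {
          console.log(guess, result.join(" "));
        }
      }
    }

Three files, one direction of data flow, and the follow-up question about switching the word source from the dictionary to an API is answered by swapping which WordService you hand the repository.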
There aren't a lot of ways you can screw up hiring mobile devs; most skills are trainable. But this is the one that costs months. One guy once looped a viewmodel inside another viewmodel because he was using them as the data store and observable. He was let go, but it took 2 years to remove everything he did from production, even though he worked there for 3 months, and most of the removal was done by rewriting entire blocks of code.
We watch for overengineering and perfectionism. Some people insist on more blocks or splitting it into modules. But can they do that in 4 hours?
AI can write all of it easily enough, but it's just a 2-3x multiplier. Some assume it's an infinite multiplier and that they can just subscribe to Cursor on the spot and it'll be done in a blink. But do the parts connect in the order given above?
Also Wordle is not ideal for architecture, but it's good enough. It requires a solid understanding of the top level of things, and connection to the data layer. What if we switched the word source from the dictionary to the API? That's a bit more of an advanced test.
An even more advanced test would be structuring it for TDD, which would need understanding of DI, factories, tests and gotchas, etc. I don't think this is doable even with AI yet though.
As a UI test, Wordle is pretty good because you have all your logic and variables on one layer and display it via a different, more complex layer.
These days? Short-list them. https://www.ycombinator.com/companies/domu-technology-inc/jo...
I laughed, but it kind of makes sense. If you are looking for a hyperfast scale-up and exit, you do not care one bit about the quality of the code.
It's a fun experiment to see how large you can grow an AI-built system before it falls apart because nobody knows how it works and it's too complex for the AI itself to grasp, but I imagine this breaking point will rise as AI gets better.
As a potential customer, this is like buying from a private equity owned business. You know you are buying a heap of shit, but hopefully it's cheap (for a while)
12-15 hour days and weekends? Holy shit! 50% of your current code must be vibe coding... P sure this wasn't a thing until recently.
Well paid collections agent? Collections engineer, if you will.
Your onboarding will be making collection calls.
Good god, this is the kind of shit that gets funded by Y Combinator? I had to double check that that page wasn't some kind of joke.
This is most “AI” companies. It’s kind of sad tbh because I’m building an actual company and when I talk to founders in SF I feel like they’re running scams or just lying. It wasn’t like this 5 or more years ago.
Oh well. Maybe I just don’t understand the game.
That confirms a suspicion I've had for a while. These days I tend to assume that 95% of anything associated with cryptocurrency or AI is nothing but a hype-driven tool for extracting money from investors who should know better but somehow never do, no matter how many fads they live through.
But because of the quality of this message board I've always assumed that Y Combinator made wiser investments and didn't just throw money at whoever said the magic buzzwords of the day. Guess not.
I hire for a variety of knowledge work roles (albeit not software engineering).
If somebody can figure out a way to pass the interview with AI they can probably figure out how to do the job. If they can’t, they get fired. Some people who pass the interview without AI end up getting fired, too.
I don’t think there’s anything unethical about using AI to pass an interview.
> I don’t think there’s anything unethical about using AI to pass an interview.
Assuming a job description / interviewer explicitly prohibits the use of AI, then using it "in stealth" represents a basic lack of integrity. And I guess it needs to be explicitly stated, but this is orthogonal to how well they would actually perform in the job.
How would this be ethically any different than a student who sneaks in an LLM using a tiny camera and wireless BT headset during an exam?
I don’t follow every rule in the world and don’t expect anyone else to either.
It’s a risk using AI in a job interview situation like you describe and if I thought the risk was worth the reward I might do it.
I mean maybe there’s some very small number of classified/protected data situations (not just “these are our sales numbers and are private”) where I think maybe the integrity component actually matters.
But if it’s just some random job that doesn’t like people using AI for some bogus reason, give me a break. They can make the rules as easily as people can break them.
The school thing isn’t that different except for the risk and the reward. The risk is higher the closer you are to graduation and the reward isn’t that great since colleges are so easy these days that you almost certainly don’t need to do anything against the rules to graduate.
With respect, are you seriously so naive as to believe that a script kiddie—someone already willing to use an AI background service to cheat on a job application—is going to spend even a femtosecond of thought reflecting on whether using dishonest means to secure the job might be more problematic in certain fields, like medicine or high-security roles?
Right that’s why I don’t care that much about AI use in interviews even when it’s “banned.”
People are going to break rules.
There are rules you would break that I wouldn’t break. There are rules I would break that you wouldn’t break. There are rules other people would break that you and I wouldn’t break.
I decide the rules that I’m willing to break and don’t particularly care what other people are willing to break. I can’t control them. I can control me.
A lot of people breaking a rule might make me more willing to break it myself or it might not. Depends on the situation and the rule.
I’m not sure what point you’re trying to make?
exactly THIS. we 100% encourage using anything and everything for the interview but are structuring them now such that AI could help but you gotta still know what you are doing. my colleague best described this as “open book, open notes” exams at the Uni - those were always the hardest :)
I think you hit on a key part. You have to adjust a bit so that AI doesn’t allow anyone to pass through your process.
Like, I used to collect small writing samples from most candidates. That's gone now, and for client-facing roles I replaced it with mock meetings. I send over prep material + meeting goal and then let the candidate lead a 20-minute meeting with me playing the client. At the end we talk about how it went.
I see candidates who clearly used AI to prep and kill it and then candidates who did little prep (AI or otherwise) and try to wing it.
How does AI help you prep for a client meeting? Telling you how to structure a meeting? Giving you topics? For someone who has done 100+ meetings, I'm not sure I see the advantages, unless you are using AI to replace Google, in which case you run the risk of learning fake information.
There are mock interviews and mock meeting services that would really help you prepare.
What you are suggesting is using ai to learn to drive when you should be getting in the car.
> What you are suggesting is using ai to learn to drive when you should be getting in the car.
No it isn’t.
> For someone who has done 100+ meetings I'm not sure I see the advantages
When you’ve tried using AI to assist in your meeting prep what has your experience been like?
Honestly though... let's take Copilot as an example. If they're good with Copilot, why wouldn't you just hire them and let them be good with Copilot? We all just need to accept that the value of memorizing syntax will trend to zero.
there's plenty of ways to gauge someone's technical competency without asking quiz style questions or leet-code questions in the interview.
your classic whiteboard - the manual way, using a physical one or the whiteboarding web tools.
you can discuss design patterns, how they would solve issues you've run into in production, and compare approaches.
it seems as if software 'engineers' are the only snowflakes that have to go through this ritual.