Google is piloting AI-assisted coding interviews, a change that could reshape how companies hire engineers. Candidates will reportedly be allowed to use Gemini during part of the interview, a signal that in real-world work, collaborating with AI is now a core skill, not just a shortcut to an answer.
Why Google is changing course
AI coding assistants have gone from novelty to everyday tool. Google knows this firsthand: according to Business Insider, a large share of Google's own code is now written with AI assistance before an engineer reviews it.
The shift also reflects Google's deepening investment in Gemini, which now underpins its coding, search, and productivity products and sits at the center of the company's AI strategy. Aligning the interview process with how engineers actually work with Gemini protects that investment and sets a new expectation for applicants.
How the pilot will work
From what's being reported, candidates will use a company-approved AI assistant during one specific part of the interview. In the 'code comprehension' section, they review existing code, find defects, and improve it, while the interviewer observes how they work alongside Gemini.
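To make the 'code comprehension' format concrete, here is a hypothetical exercise of the kind described: the snippet, the bug, and the function names are illustrative assumptions, not drawn from Google's actual materials. The candidate is shown a small utility, asked to spot the defect, and expected to fix it:

```python
# Hypothetical exercise: a sliding-window average with a subtle defect.
def moving_average_buggy(values, window):
    """Intended: average of each sliding window of `window` items."""
    result = []
    for i in range(len(values) - window):      # bug: drops the final window
        result.append(sum(values[i:i + window]) / window)
    return result


def moving_average_fixed(values, window):
    """Corrected version the candidate would produce."""
    result = []
    for i in range(len(values) - window + 1):  # include the last full window
        result.append(sum(values[i:i + window]) / window)
    return result


# The buggy version silently omits the last window:
print(moving_average_buggy([1, 2, 3, 4], 2))   # [1.5, 2.5]
print(moving_average_fixed([1, 2, 3, 4], 2))   # [1.5, 2.5, 3.5]
```

An off-by-one like this is exactly the kind of error that is easy to skim past, which is why the format reportedly emphasizes how candidates read and verify code, not just whether they can write it.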
The pilot is expected to begin later this year with select Google Cloud teams in the United States. It will focus initially on junior and mid-level engineering roles – positions with many openings, and where engineers already commonly lean on AI assistance.
Google calls this approach 'human led, AI assisted'. Rather than banning tools outright, the company wants to see whether applicants can use AI sensibly, effectively, and quickly, as they would need to on the job.
What interviewers will measure
The interview isn’t just about getting the right answer. Interviewers will be looking at how clear applicants are when telling the AI what to do, how they check the AI’s responses, and whether they can find mistakes the AI makes.
In practice, this means candidates must be fast but also exercise judgment. They have to show they can steer the AI, check its suggestions, and make the final call rather than simply accepting the code it generates.
To make expectations explicit, here are the capabilities Google aims to observe:
– Write clear, targeted prompts
– Verify AI-generated answers
– Identify mistakes in generated code
– Improve code quality with AI support
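The 'identify mistakes in generated code' capability above can be sketched with an example; the scenario is my own illustration, not from Google. Consider a plausible AI suggestion that looks reasonable but has a flaw a careful candidate should catch:

```python
# A plausible AI suggestion for "remove duplicates from a list":
def dedupe_suggested(items):
    return list(set(items))          # flaw: set() does not preserve order


# What a candidate exercising judgment would correct it to:
def dedupe_verified(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:         # keep first occurrence, preserve order
            seen.add(item)
            out.append(item)
    return out


print(dedupe_verified(["b", "a", "b", "c"]))  # ['b', 'a', 'c']
```

The suggested version is technically deduplicated but scrambles order, the kind of silent behavioral change that separates verifying an answer from merely accepting it.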
Strategic stakes and competitive pressure
This isn’t only a small adjustment to hiring. It’s a message to the whole industry that being able to use AI is now a basic expectation for engineers. If Google starts to treat AI use as normal during interviews, other companies may have to do the same, or explain why they are still testing candidates without letting them use any tools.
People who support this idea say interviews should be more like real work, where engineers use assistants to write, test and improve code. Those who disagree say depending too much on AI could hide a lack of basic skills, and make it harder to judge someone’s core ability to solve problems.
Google’s explanation seems to be intended to satisfy both sides. By only allowing Gemini to be used during the ‘code comprehension’ part, Google can test someone’s ability to judge and apply their knowledge, while still keeping other sections of the interview for testing someone’s raw ability to think about algorithms.
What comes next
Business Insider and other tech news sources say the trial is expected to begin later this year. Google will start with certain Cloud teams in the US, learn from the first groups of candidates, and then refine the interview format.
If the trial works well, it could completely change how people get ready for interviews. Instead of memorizing interview questions, applicants might need to practice giving instructions to AI systems, testing the results, and explaining the advantages and disadvantages of different approaches in real time.
The effects could be widespread. Coding interviews have long rewarded people who can solve problems alone at a whiteboard. A 'human led, AI assisted' approach broadens the definition of excellence to include not just coding skill but systems thinking and effective collaboration with AI.
Right now the message is clear: being able to use tools like Gemini is becoming part of the job requirements, and Google wants to see that ability during the interview process, not just listed on a resume.
