Recently, there have been articles in the Financial Times and The Independent stating that there is 'plenty of evidence humans hire in their own image — often preferring candidates who share their favourite hobbies and pastimes. Can machines be more objective?'
What is being discussed and tried here is interesting and exciting, but potentially worrying too. As with many technological developments, the devil is in the detail of how they are used. I have three points I'd like to raise:
First, let's assume that this system is all it claims to be; I haven't seen it yet, so it's hard to say, but let's give it the benefit of the doubt. The developers have created a wonderful way of finding a candidate like your best performers, which is great, unless it isn't. Because top performers are rarely top performers just because of what they are on paper. We are all in equal parts products of our own abilities and our environment; nature and nurture is as true in workplace performance as in child development. You can hire five people who the system, or your own interviewing, tells you are all the same, and I guarantee that you'll see five different performances. Have you ever hired a top performer from another company, only to see them struggle or even fail when they came to you? The environmental differences are critical. It's not enough to find someone who fits the job description, even if that's a clone-like copy of your top performer; you need to see them in the context of the team, the company, and the people they will manage and be managed by. It is this comparative and environmental balancing that separates the top recruiters, whether professionals or hiring managers, from those who simply match points on a list. In my opinion, if this system is all it is claimed to be, it is a very useful tool for ticking the first box, but that's only half the job; the second half is the piece where talented hiring experts will still be 100% necessary.
Secondly, AI is increasingly clever, and predictions have it replacing all kinds of jobs over the coming decades, so why not recruiters? But people are always going to need to interface with people at some point; at least I hope they are, otherwise we really have made ourselves redundant as a species. It's great that systems may be able to tell us how people will react under stress and to certain stimuli, but they won't be able to tell us how people react to people. If what you're hiring for is someone who will only interface with a computer, then a computer is a great decision-maker, but if you need to assess their ability to speak to a person... you'll need to ask a person.
Third and finally, we complain about interviewer bias, and often with due cause, but let's not be too enamoured of our creations: machines can be biased too. It's an old adage in computing that Garbage In means Garbage Out. Even a small bias in the input data will fundamentally skew all your results. Without the human factor to correct it, AI will confidently continue that bias until it rusts. That bias might mean perpetually hiring the same type of people, which, after all, is what you've asked it to do, but since when has a market stood still? Companies and teams need to evolve and improve, and evolution is not something machines do well. Even if you ignore that issue, consider what an AI system recruiting salesmen in a typical engineering company just thirty years ago would likely have recommended: lots of middle-aged white men, because they were the successful performers. Society is organic; it evolves. AI is a machine; it doesn't. This is a fundamental disconnect in my opinion. That's not to say that AI selection systems may not be fine tools to assist an interviewer, just like the best psychometrics, but to successfully replace human interviewers AI will have to learn to evolve, and as has been seen in experiments, that creates a whole basket of bias issues all of its own: (http://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-robots-artificial-intelligence-racism-sexism-prejudice-bias-language-learn-from-humans-a7683161.html)
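To make the Garbage In, Garbage Out point concrete, here is a minimal sketch in Python. All of the data and names in it are invented for illustration: a toy "model" trained on historical hiring records in which past interviewers favoured candidates who shared their hobby will confidently recommend more of the same, whatever the candidates' actual ability.

```python
# Toy illustration of "garbage in, garbage out" in hiring data.
# The records below are hypothetical: past interviewers favoured
# golfers, so the hire/no-hire labels encode that bias.
history = [
    ("golf", True), ("golf", True), ("golf", True), ("golf", False),
    ("golf", True), ("chess", False), ("chess", False), ("painting", False),
]

def hire_rate(records, hobby):
    """Fraction of past candidates with this hobby who were hired."""
    outcomes = [hired for h, hired in records if h == hobby]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def naive_model(hobby):
    """Recommend whoever resembles the people hired in the past."""
    return hire_rate(history, hobby) > 0.5

print(naive_model("golf"))   # True  - golfers sail through
print(naive_model("chess"))  # False - equally able chess players rejected
```

Nothing in the model is malicious; it is faithfully reproducing the pattern it was given, which is exactly the problem when the pattern itself is the bias.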