On their first day, a deepfake remote hire can steal your secrets, plans, and data, and install ransomware.
And if a single North Korean deepfake new hire remains employed for months, the damage they cause will keep growing, and it won’t stop growing unless you happen to stumble across their secret. Smart talent leaders must also realize that this deepfake problem isn’t going away. Instead, it will continue to grow and get more sophisticated (according to one recent report, deepfake fraud has increased by 92% since the beginning of the pandemic).
What Is A Deepfake Candidate?
If you’re not familiar with the concept of deepfake applicants and candidates, here is a quick primer. This expensive and soon-to-be-widespread scam is designed to fool interviewers into believing that a “fake electronic caricature” is actually a highly qualified real candidate. It is called a “deepfake” because the image and voice the interviewer experiences during the video job interview are so realistic that you would have to examine them “deeply in order to have a chance of determining that it is a fake.”
Until You Reveal Its Tremendous Costs… No One Will Take This Threat Seriously
Some of the costs of hiring a deepfake are obvious, but there are many other big-buck cost areas that executives do not immediately see and understand. So below, in descending order of negative impact, you will find the top 10 organizational cost/damage areas likely to be incurred as a result of hiring a single deepfake candidate. For a large corporation, the accumulated cost of a well-placed fake candidate can easily exceed $100 million. Obviously, given the high probability of incurring these shockingly high costs, nothing less than a serious companywide deepfake prevention effort will do.
- The ransomware they plant will likely cost you millions – some deepfake candidates are created with only a single task in mind: planting ransomware, which they will do on their very first day. However, this insidious planted software may not be activated until they decide to leave or are caught. Because of the position they are hired into, this fake new hire will have access to your financial data, which will help them calculate the amount of ransom you can afford. And to make matters worse, if the fact that you had to pay a ransom becomes public, it will likely also hurt your brand image, sales, and stock price.
- They will steal and reveal the personal information of your customers and employees – if given access, the fake new hire will instantly steal the valuable personal and financial information of your customers and employees. They may use that information as leverage to get you to pay a ransom, or they may simply sell the sensitive information on the dark web. And if this breach becomes public, it will cost your organization millions to make everyone impacted “whole.”
- They will steal your valuable product secrets – if you are an innovative company, this will likely be the highest-cost damage area, because once the deepfake hire is given access, they will instantly capture information on your best-selling products, your new product development plans, and which employees are your chief innovators. Simply knowing that competitors have seen this strategic product information may, unfortunately, force your organization to change its direction.
- They will learn and then continually exploit your security weaknesses – the deepfake new hire will try to capture your security system and its flaws. First, they will attempt to capture all of your passwords, and then any available information on your current security strengths and weaknesses. They, along with their behind-the-scenes colleagues, may exploit these weaknesses immediately. Or worse, they may use that security information (which you don’t even know they have) for future raids long after the fake employee is gone. And after this first new-hire success, the behind-the-scenes team will learn from and improve their interview-faking process, which will unfortunately drive them to soon try another, even more damaging, fake-candidate incursion into your organization.
- The resulting slowdown in hiring will delay some of your vital projects – even if you merely suspect that you might have hired a deepfake spy, the fear of it happening again may result in managers postponing or even stopping the hiring of remote workers for their most sensitive projects. This period of slowed hiring will leave your team short-staffed, which will frustrate your current workers, reduce your capabilities, and delay many projects that involve confidential information.
- As new hires, they will, unfortunately, get their probing questions answered – because they are new hires, many employees will likely go out of their way to be helpful and to answer their “probing questions.” Because the inquiries are coming from a seemingly naïve new hire, few will challenge why they need this valuable information so soon.
- Getting new hires up to speed will take longer – out of an abundance of caution, some organizations may purposely extend onboarding and delay providing data access to remote new hires who work with data. Imposing these delays on all new hires may slow their reaching their expected “time to productivity” by as much as a month. And especially during times of high turnover, this added delay will frustrate your new hires and lower their team’s productivity.
- Some deepfakes may continue working for you – because no additional reference checking is traditionally done on a remote employee after they are hired, the dishonest new hire may stay on forever. For example, this routinely happens with North Korean technologists, because they can’t legally be hired outside their country under their real identity. Once they are safely inside, these North Korean deepfakes may decide to continue collecting a monthly paycheck indefinitely.
- The added costs of rehiring will be significant – after discovering a deepfake hire, you will, unfortunately, have to go through the time and expense of rehiring. That also means your team will lose the productivity of the position while it is vacant again for several additional months. Also, be aware that your legal expenses will go up, because any new hiring procedures that you implement to catch additional deepfakes will have to meet all of the complex local privacy and anti-discrimination rules.
- You may even be hurting your country – last but not least, some of these deepfake new hires are actually citizens of countries that may be enemies of your home country. You must realize that one of the primary goals of this new-hire intrusion may be political espionage. So by failing to catch deepfake candidates, you are providing the enemies of your country with secrets, ransom dollars, and sometimes monthly payroll dollars.
What Are The Steps In This Fake Candidate Process?
This deceptive “fake candidate” process begins when a “bad actor” secretly steals the identity of a real data expert. The team of thieves then submits an online application for one of your 100% remote jobs using this real person’s stolen identity. Then comes a brand-new wrinkle: they use advanced AI-driven video and audio technology to create a complete synthetic likeness of the person they are misrepresenting. This fake image then serves as the visual and audio representation of the real person during their remote interviews. The deepfake technology is so believable that it’s impossible for the untrained to spot. After the imposter is hired and given their new-hire credentials, the deepfake remote worker will immediately begin wreaking havoc in every important data area they were given access to. Many will leave after only one day of damage, while others will continue to cash in until they are caught. But they are never prosecuted, because they work remotely and can simply vanish without a trace and move on to successfully apply at another company.
There Are Things That You Can Do To Help Spot Deepfakes
Below you will find the seven categories of possible approaches for spotting a deepfake candidate. However, be aware upfront that, unfortunately, the tools in each category all have major shortcomings. So I recommend that you never assume that anything short of a multistep and continuing process will be able to catch all of your deepfake candidates and new hires.
1) Start with these foundation steps
- Assume that success will require multiple approaches – never rely on a single approach, because every identification process has flaws. And because deepfake creators are constantly upgrading their approach, you, too, must continually update your “spotting process.”
- Focus on your hiring process for remote workers – currently, deepfake hiring issues occur only in 100% remote jobs. So look for obvious flaws and missing steps in your hiring process for remote jobs.
- Focus on technical and data jobs – realize upfront that most deepfake candidates apply for technical jobs where the employee deals mostly with data. These job families often include database management, engineering, IT, marketing research, and product design. Also, conduct continuous benchmarking to learn in which job families other organizations are currently experiencing deepfake applicants.
- Never assume that a single identification approach will be enough – because even a well-designed single approach won’t work all of the time, it is essential that you use a multipronged effort to spot these deepfakes during your hiring process.
- Educate and train your interviewers on deepfake identification methods – raise awareness among and educate your interviewers for jobs that are likely to be targeted. And when possible, have them participate in deepfake identification training (MIT, for example, offers a website that trains people to identify deepfakes, and Microsoft has created a quiz for the same purpose). To further promote awareness of how difficult deepfakes are to discover, consider adding tests like these to your annual and new-manager cybersecurity training.
- Work with identity theft vendors – because many deepfake candidates steal another person’s identity, it’s important to realize upfront that even the best background-checking firms can’t tell you whether the person you are interviewing is the same person they are doing the background check on. So work with one of the identity theft vendors to see if they can help you determine whether the identity of your current named candidate has recently been stolen.
- Identify them… using a live in-person interview – require (or threaten to require) an in-person interview, because that alone will scare away many deepfakes. Alternatively, have one of your employees who works in the candidate’s current location meet them and check their picture ID (make sure that all personal information is taped over).
2) Solicit the help of people who actually know the real candidate
- Identify them… with the help of their references – start by being suspicious if all the references they provide are obscure and difficult to check. Next, tell them which references you want to contact and look for any hesitancy on their part, because most fake candidates won’t know the actual contact information of the real person’s references. In many cases, it’s best that you find and verify the actual contact information for each reference yourself. Also, consider checking several additional references for applicants to these remote jobs. During your reference checking, try to talk to each reference live. Start the conversation by telling them about your deepfake issue and asking whether they knew the person was actively looking. Then consider asking them for ways you might determine whether this applicant is a deepfake (like an obscure but non-personal piece of information that only the real person would know).
- Identify them… using current and past employees – if you are a large organization, the odds are that someone in your employee base knows the real candidate. If so, ask an employee who knows them to contact them and see if they are in job-search mode.
3) Identify them through the use of clever, hard-to-fake interview questions
Identify them through the strategic use of interview questions that only the real person could answer.
- Give them a real problem that only an expert would know the answer to – giving them an advanced technical question as part of the interview will not only help you gauge their capabilities, but it may instantly scare away some impersonators.
- Ask them about their created content – scan their LinkedIn profile for professional content, and then ask them detailed questions about something they have written or presented.
- Ask them for details on their previous work – most fakes use a real person’s resume, so they are likely to know few details about the real person’s past work. So imply that you have talked extensively with their past manager, and then ask them to give details on a past project failure, because, in most cases, only someone who went through a major failure would know the details. Also, imply that you know their last manager well, and ask them to tell you something about that manager that only his/her team members would know. Then verify that information during reference checking.
- Ask the fake candidate about things that many bad actors wouldn’t know – because many of these deepfakes live in North Korea, China, Nigeria, or Russia, you can sometimes identify them by asking questions about Western culture that hackers from these countries would be unlikely to know the answers to (in a manner similar to how an intelligence officer would interrogate a prisoner with no papers or ID). Example – what is CNN? Or say something harsh about the above countries and see if you get a harsh reaction.
- Ask them about personal things – within legal boundaries, consider asking them about on-the-job insider things or personal things that only the real person would know. For example, say that you heard that they have tattoos, and ask them to show you one on the video.
- Ask them to show several IDs to the camera – because a fake candidate would not have the actual physical IDs (or passport) of the real candidate, ask them to show you one or more of their picture IDs on camera during a video interview (make sure their age, race, height, and weight are physically taped over).
4) Identify them by spotting contradictory information
- Identify the actual candidates through the extensive use of skill testing – both real and deepfake candidates can fool you during interviews that involve mostly talking. However, if you supplement those interviews with rigorous technical skill tests, you will screen out most of the barely qualified deepfake candidates.
- Identify them… by cross-verifying – most professionals have LinkedIn profiles, so look for discrepancies between their LinkedIn profile and their submitted resume. First, look for significant differences in their dates of employment. Also, look for recently created profiles with only basic information and few connections. And finally, look for conflicts between what they say during interviews and what appears in their resume and on their social media sites.
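For teams that want to automate this cross-verification, the employment-date check can be reduced to a simple comparison script. The sketch below is illustrative only: the hard-coded records, company names, and 90-day tolerance are all hypothetical assumptions, and in practice the data would first have to be extracted from the resume and the LinkedIn profile.

```python
from datetime import date

# Hypothetical employment records extracted from a resume and a LinkedIn
# profile: {company: (start_date, end_date)}. Real data would come from
# parsing the documents, which this sketch does not cover.
resume = {
    "Acme Corp": (date(2018, 3, 1), date(2021, 6, 1)),
    "Globex": (date(2021, 7, 1), date(2023, 1, 1)),
}
linkedin = {
    "Acme Corp": (date(2018, 3, 1), date(2020, 1, 1)),  # end dates conflict
    "Initech": (date(2020, 2, 1), date(2023, 1, 1)),    # missing from resume
}

def cross_check(resume, linkedin, tolerance_days=90):
    """Return human-readable discrepancies between the two sources."""
    flags = []
    for company in sorted(set(resume) | set(linkedin)):
        if company not in resume or company not in linkedin:
            flags.append(f"{company}: listed in only one source")
            continue
        # Compare start and end dates, allowing small rounding differences.
        for label, a, b in zip(("start", "end"), resume[company], linkedin[company]):
            gap = abs((a - b).days)
            if gap > tolerance_days:
                flags.append(f"{company}: {label} dates differ by {gap} days")
    return flags

for flag in cross_check(resume, linkedin):
    print(flag)
```

A tool like this only surfaces leads for a human to follow up on; an innocent candidate may simply have an out-of-date profile.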
5) Identify them through physical anomalies spotted during their video interviews
Please note that scientific research has shown that humans are not very successful at identifying deepfakes. So be aware that none of the following techniques has been proven to be completely accurate and reliable, and I highly recommend that you rely on human spotting only as a last resort.
- Identify them… through anomalies spotted in the video – many argue that a good sign of an AI-driven interview is when the candidate’s eyes don’t blink (or blink too much), when there appears to be bad lip-synching, or any time the actions or look of the candidate do not appear real. Others argue that you should look for cases where the skin appears too smooth or too wrinkly, or for hair, facial hair, eyes, or eyebrows that don’t look natural. Failing to see a reflection in the candidate’s glasses can be another clue to consider.
- Identify them… through voice anomalies – deepfake candidates often use voice imitation software in order to hide any accent that might make the interviewer suspicious. You may be able to spot this voice substitution by looking for instances during the interview when the video picture doesn’t align with the audio, or when you hear a cough or sneeze that does not appear simultaneously on the video feed. Of course, an unusually high level of nervousness may also indicate that this isn’t the candidate’s real voice.
6) After they are hired, continue trying to identify deepfakes and other spies
- Continue your identification efforts during onboarding – in case you don’t catch them during your hiring process, continue your identification effort during onboarding. This step adds value because after the candidate becomes an employee, there are fewer legal issues to overcome when verifying personal information (versus as a candidate). In some cases, it also makes sense to stretch out the onboarding time for those who work in data-centric jobs until you are completely sure that the remote worker is actually who they say they are. And finally, realize that a new-hire fake may quit immediately if, during onboarding, you tell them that you expect to meet them in person at the next major industry event.
- Consider modifying the new hire’s work until you are confident about them – in some cases, it will make sense to initially limit the new hire’s access to sensitive data. You may also want to have your security team monitor their work during their first month. And because there may be other spies in your organization, you may want security to look at the work actions of any employee conducting unusual data searches, attempting to bypass security features, or copying unusual files. Finally, it’s also important to do a postmortem failure analysis each time you catch one of these deepfakes, to reduce the chances of it happening again.
7) Identify them using software solutions
Please note that no single tool can detect all types of deepfakes with 100% accuracy. Therefore, it is recommended to use a combination of methods to ensure the authenticity of the candidate.
- Identify an AI-driven interview… using software – utilize deepfake-spotting technology (such as FakeCatcher or Sensity). There are also a variety of deepfake-detection products offered by Microsoft and smaller vendors, which can be automatically connected via APIs to recordings or live interviews to detect fraud.
- Identify fake media used by the candidate – because deepfake technologies are so easy to use, you may need to verify the authenticity of images and videos submitted by the candidate (for example, with Truepic).
- Use software to detect answer fraud and authenticate candidates – utilize simple-to-use vendors (such as filtered.ai) to identify discrepancies in technology skills across sources such as GitHub and LinkedIn, and to detect peer assistance during assessments.
- Use multi-factored authentication (MFA) – verify candidate details by using email, phone, video, and smart card verification methods.
- Email verification: The candidate can be asked to click on a link sent to their email to confirm it’s really them.
- Phone verification: The candidate can be asked to provide their phone number, and a code will be sent to their phone via text or call for them to confirm their identity.
- Video verification: The candidate can be asked to participate in a video call, during which they can show their ID or other forms of identification (with personal information obscured) in order to verify their identity.
- Smart card verification: The candidate can be issued a smart card that confirms their identity and can be used to authenticate them when joining the interview platform.
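The email and phone verification steps above usually boil down to issuing a short-lived one-time code and checking it on entry. The snippet below is a minimal sketch of that flow, not a production implementation: the delivery of the code to the candidate is stubbed out, and the 6-digit length and 5-minute expiry window are illustrative assumptions rather than a standard.

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # illustrative 5-minute expiry, not a standard

def issue_code():
    """Generate a 6-digit one-time code and record when it was issued.

    In a real system the code would then be emailed or texted to the
    candidate; that delivery step is deliberately left out here.
    """
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time()

def verify_code(submitted, issued_code, issued_at, now=None):
    """Accept the submitted code only if it matches and has not expired."""
    now = time.time() if now is None else now
    if now - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking code digits via timing.
    return hmac.compare_digest(submitted, issued_code)

code, issued_at = issue_code()
wrong = "000000" if code != "000000" else "111111"
print(verify_code(code, code, issued_at))                       # correct code, in time
print(verify_code(wrong, code, issued_at))                      # wrong code
print(verify_code(code, code, issued_at, now=issued_at + 600))  # expired code
```

As the article notes, no single factor is sufficient; a code check like this only proves control of an inbox or phone number, which is why it should be layered with the video and smart card checks above.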
- Add additional technology checks – utilize video technologies that adhere to encryption standards and receive regular security updates (e.g., Google Meet). And use software that can accurately identify the IP address of the interviewee to detect additional risks (e.g., Azure Active Directory Identity Protection). While these methods are not 100% reliable on their own, they can be used in conjunction with the tools above for additional security.
If you only do one thing – survey each of your hiring managers who has hired a fully remote new hire during the last year. Ask them to identify whether any of these candidates appeared during their video interviews to be fake or not the actual person named on the resume. Also, ask these managers whether any new hire for a WFH position logged in online on their first day but then quickly disappeared, never to be heard from again. Either of these events should be considered a wake-up call for talent executives and hiring managers at your company.
Most organizations have no formal way of tracking when and how often deepfake hiring attempts occur. This is a major error, because even the FBI has recently issued a public warning about this serious and growing problem. So ask around among those involved in hiring to see if your organization has recently had a remote new hire in a data job who logged in on their first day and then disappeared. If the answer is yes, you have likely already been fooled by an AI-generated deepfake candidate without knowing it. The time to act is now, because virtually every expert predicts that this deepfake candidate problem will only get worse.
- Please share these solutions by sending this article to your team/network or posting it on your favorite media.
- Next, if you don’t already subscribe to Dr. Sullivan’s weekly Aggressive Talent Newsletter, you can do that here.
- Also, join the well over 11,000 people who have followed or connected with Dr. Sullivan’s community on LinkedIn.