Ryan Wright and Matthew Jensen have phished thousands of people over the past decade, and they’re not planning to let up anytime soon.
The two aren’t hackers angling for valuable data or funds; they’re researchers working with companies, governments, and universities around the world to understand why we so often fall for phishing attacks and what organizations can do to mitigate the threat. Corporate security departments go to some lengths to educate people about phishing, which accounts for 90% of all data breaches – but an estimated 30% of fraudulent emails are opened nonetheless. With the cost of a successful attack averaging $3.8 million, that’s an uncomfortably high share. And it could grow as cybercriminals exploit the disruption caused by the pandemic and the steep rise in employees working from home, where increased distractions may cause them to lower their guard.
Drawing on their research, Wright (the C. Coleman McGehee Professor of Commerce at the University of Virginia) and Jensen (the Presidential Associate Professor of Management Information Systems at the University of Oklahoma) have identified several ways to bolster the effectiveness of security training.
Add A Mindfulness Component
Many organizations require employees to complete off-the-shelf training modules on a regular basis – often annually or biannually. That’s useful, the researchers say, for alerting people to common threats and giving them basic guidelines for evaluating incoming messages. But sheer repetition of rules-based training doesn’t necessarily increase resistance to attacks, they caution. In fact, after a point it can be counterproductive, desensitizing people to the training and giving them a false sense of mastery over the lessons – which they then ignore.
Part of the problem is that rules-based training promotes what the Nobel Prize-winning psychologist Daniel Kahneman calls System 1 thinking. This type of fast, automatic processing is efficient but can result in careless decision-making and leaves employees vulnerable to attacks that depart from the rules. “Rather than ask people to memorize a laundry list of constantly changing cues,” Wright says, “organizations can take a more holistic tack”: adding mindfulness instruction. The goal is to encourage System 2 thinking – a more reflective, analytical approach.
In a field study involving 355 university students, faculty, and staff members, the researchers and colleagues compared three groups of participants, all of whom had gone through basic security training. The first group received additional rules-based instruction. The second group was taught to use simple mindfulness techniques: Pause if an email requests action; consider the nature, timing, purpose, and appropriateness of the request; and consult a third party about any suspicions. The third group received no additional training. Ten days later, the researchers launched a mock phishing attack. They found that 13% of those given additional rules-based training took the bait, as did 23% of those who got no additional training – but just 7% of those instructed in mindfulness techniques fell prey. Subsequent work by the researchers’ colleague Christopher Nguyen obtained similar results and showed that the heightened resistance lasted several months.
Take A Teamwide Approach
Security measures are often thwarted by the “weakest link” problem: If just one person responds to an attack, it may succeed. To understand whether group dynamics can lessen this vulnerability, Wright and colleagues conducted a two-year field experiment in the 180-person financial unit of a large university. Mapping the employees’ positions in their work groups and social networks and phishing them several times, they learned that the more central, or connected, people were in either type of group, the less likely they were to succumb to an attack. For example, employees in the top quartile of centrality in their work groups clicked on links in the phishing messages just 14% of the time, while employees in the bottom quartile did so 35% of the time. The researchers also found that the higher a team’s overall computer efficacy, the more resistant each member was to phishing attacks.
These findings indicate that employees can learn valuable security lessons from teammates, formally or informally – a dynamic that managers could capitalize on. “Instead of saying, ‘It’s that time of year: Complete your IT training when you can’ and then never talking about it,” Wright says, “managers could conduct team trainings and hold each team accountable for results.” Organizations could also use network analysis to identify especially susceptible employees and could provide additional training to people who are peripheral or new to their teams.
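The centrality screen described above can be sketched in a few lines of Python. The roster, the collaboration edges, and the bottom-quartile cutoff below are illustrative assumptions, not data from the study:

```python
# Sketch: flag employees in the bottom quartile of degree centrality
# (the share of coworkers each person is directly connected to) as
# candidates for extra anti-phishing training.
# The names and edges here are hypothetical.
from collections import defaultdict
import statistics

edges = [
    ("ana", "raj"), ("ana", "lee"), ("ana", "kim"), ("raj", "lee"),
    ("kim", "lee"), ("sam", "raj"), ("pat", "sam"),  # pat is peripheral
]

neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

n = len(neighbors)
centrality = {p: len(nb) / (n - 1) for p, nb in neighbors.items()}

cutoff = statistics.quantiles(centrality.values(), n=4)[0]  # 25th percentile
extra_training = sorted(p for p, c in centrality.items() if c <= cutoff)
print(extra_training)
```

In practice the edge list would come from org charts, email metadata, or collaboration tools, and the cutoff would be tuned to training capacity; the quartile threshold here simply mirrors the study's top-versus-bottom-quartile comparison.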
“Making the Lessons Personal Means They’re More Likely to Stick”
As the chief information security officer at Fannie Mae, Christopher Porter oversees security training for nearly 7,500 employees along with several thousand independent contractors and consultants. He recently spoke with HBR about how the organization works to defend against phishing attacks. Edited excerpts follow.
What kinds of phishing education do you engage in?
There’s the usual broad training that’s mandatory across the organization. We target other efforts toward specific units. Accounts payable and finance groups, for example, face unique attacks and need to develop special immunities. In addition, each month we conduct a mock phishing exercise around a specific theme. If people click on one of the test emails, they get immediate feedback – a short video shows them exactly what made the message a phish. If they fail two or more tests in a 12-month period, they participate in additional group training to bring them up to speed. Finally, we’ve mounted a weekly security awareness campaign: Every Friday we post a blog addressing some aspect of detecting a phish and what to do when finding one – it’s critical to get people to report attacks. We continually reinforce the actions employees need to take.
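The two-failures-in-12-months rule lends itself to a simple sketch; the failure log, names, and dates below are hypothetical:

```python
# Sketch: identify employees who failed two or more mock-phishing
# tests in a trailing 12-month window and so qualify for additional
# group training. The log entries are made up for illustration.
from collections import Counter
from datetime import date, timedelta

failures = [  # (employee, date of failed mock-phishing test)
    ("dana", date(2021, 2, 5)),
    ("dana", date(2021, 9, 17)),
    ("eli",  date(2020, 3, 1)),   # falls outside the 12-month window
    ("eli",  date(2021, 8, 2)),
]

today = date(2021, 10, 1)
window_start = today - timedelta(days=365)

recent = Counter(emp for emp, d in failures if d >= window_start)
needs_group_training = sorted(e for e, n in recent.items() if n >= 2)
print(needs_group_training)
```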
What do you focus on in the monthly exercises?
There are three main themes. The first is loss: An attacker threatens to take something away from people if they don’t respond. The second is promises: People are told they’ll get something if they click on a link. The third has to do with emotions – attempts to exploit things like curiosity. It’s important to know which of these approaches our users are most susceptible to so that we can target our training accordingly. We also look at what kinds of attacks are out in the wild at a given moment. These days Covid-19 is providing a huge lure.
Research has found that simple mindfulness exercises can boost people’s resistance. Have you utilized that approach?
We try to get people to use the “stop, think, act” process. For instance, we encourage them to pause if they see a banner identifying a message as external to the organization and, before they continue reading it or take any action, to ask themselves whether they were expecting the email, whether they know the sender, whether anything feels funny. That has improved our resistance over time.
How do you keep people from clicking through the training without actually absorbing it?
First, we try to make it fun. We incorporate professionally created cartoon videos that focus on specific security lessons, sometimes with voiceovers by stars – the comedian Jon Lovitz did one. Second, we’ve drawn on research showing that if you teach employees to protect their information at home, they’ll take those lessons back to the office and apply them to company information. To that end, we’ve shown people how to set up multifactor authentication to keep their personal financial information safe. During tax season we remind them that they may get fraudulent messages supposedly from the IRS. And we do a lot to help people protect their families. For example, we had a woman speak about being abducted as a child by an online predator and about how parents can protect their own kids online. The research shows – and we have found – that making the lessons personal means they’re more likely to stick.
One finding from the teamwide field experiment took the researchers by surprise: The more that employees interacted with or even just trusted their IT help desk, the more likely they were to fall for phishes. Those people may have felt “indemnified” against threats, the researchers posit. “If a credit card is stolen, the credit-card company covers the losses, making people less concerned with protecting their cards; we theorize that something similar is happening here,” Wright explains. “If people think, ‘The help desk will keep me safe if I click on something wrong,’ they’re not owning the protection of their data or learning from their interactions.” Managers could incentivize employees by making security compliance part of their annual reviews, he says – and help desks could make sure users understand the warning signs they missed rather than simply fixing the problem, as commonly occurs.
Use Gamified Training
Another way to leverage group dynamics is to add a competitive element to cybersecurity exercises. The researchers and colleagues conducted three experiments involving 568 participants who played the role of an intern taught to identify and report suspicious messages and then given a variety of tasks, among them managing the boss’s inbox. As the subjects went about their work, they encountered five phishing emails. In the first two experiments, their reports were posted on leaderboards of varying designs. In the third experiment, leaderboards were compared with several other anti-phishing measures, singly and in combination: a training video, labels marking emails as “external” if they came from outside the organization, and labels warning that particularly suspicious emails might be phishes.
The leaderboard was highly effective at encouraging reports while keeping false positives in check; only labels explicitly warning that emails might be phishes got better results. It was especially powerful when paired with training. But some designs proved better than others. The optimal configuration made reporters’ identities visible to all and both awarded points for correct reports and deducted them for false alarms. “External motivation turned out to be far and away more effective than intrinsic incentives,” Jensen says.
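A minimal sketch of that scoring scheme – points awarded for correct reports, deducted for false alarms, with identities visible to all – might look like this (the +10/-5 point values and the report log are assumptions for illustration):

```python
# Sketch: leaderboard scoring along the lines the study describes.
# Each report either correctly flags a phish (points awarded) or is
# a false alarm (points deducted). Values and log are hypothetical.
from collections import defaultdict

CORRECT_POINTS = 10
FALSE_ALARM_PENALTY = -5

reports = [  # (reporter, was the reported email actually a phish?)
    ("mia", True), ("mia", True), ("mia", False),
    ("leo", True), ("leo", False), ("leo", False),
]

scores = defaultdict(int)
for who, was_phish in reports:
    scores[who] += CORRECT_POINTS if was_phish else FALSE_ALARM_PENALTY

# Identities visible to all, ranked by score, as in the optimal design
leaderboard = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(leaderboard)
```

The penalty for false alarms is the design choice worth noting: it is what kept reporting accurate rather than indiscriminate in the experiments.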
Nobody is going to spend time hunting down phishes for the fun of it. But organizations can take these steps to emphasize the importance of detection and reporting and to make those activities more effective and rewarding. When it comes to employees’ falling for fraudulent messages, “It’s really hard to get to zero,” Jensen says. “You have to take a layered approach.”
originally posted on hbr.org