I don't think it is still necessary to explain that phishing is a major threat to businesses and individuals. By now, most companies run one type of phishing training or another. But are we sure these exercises work?
If we want to measure our training's effectiveness, we often run regular phishing exercises and measure the results. If our phishing education is effective, we should see a negative trend. Right? If we run exercises every quarter, we should obtain something like this:
Looks good, doesn't it? Except we don't know why there is a bump in the numbers in Q4. Is our training not working? Maybe it is due to end-of-year exhaustion. Who knows? Or maybe the scenario we used in Q4 is more relevant to our context. Context is a key factor influencing phishing susceptibility. Unfortunately, it is hard to measure. So, we can neither accurately predict nor define a level of efficacy for our phishing scenarios. Basically, comparing click ratios between different scenarios is utterly useless for measuring progress and phishing risk reduction. So, what do we do?
Siadati et al. published an excellent article in 2017 highlighting this very issue. As the variance between scenarios can be as high as 40% (our research showed it can be up to 60%), we cannot rely on inter-scenario comparisons to measure the effectiveness of our training. To put it another way: the difference in the percentage of people clicking on a phishing link between two phishing scenarios sent to the same people at the same time can be as high as 60%.
Instead, they suggested a system that runs multiple scenarios in parallel. The scenarios are used repeatedly with different groups of the population (groups are randomized). In our example, this would look like this:
As you can see, we now have the same four scenarios sent to four groups of people in our population. Notice the 27% gap between scenario C and scenario D in Q1, just as in our first example. Now, we don't really care about the click ratio itself. What we want to see is a downward trend for each scenario. And that's what we've got. Same scenarios, same people, and a totally different, more accurate measurement of our progress.
This protocol requires a yearly plan (which we should have anyway) and a sufficiently large population to have at least 30 people in each group (for statistical significance).
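The group assignment described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function and variable names (`assign_groups`, `people`) are made up for the example. The idea is simply to shuffle the population and split it into one randomized group per scenario, checking that each group meets the 30-person minimum.

```python
import random

def assign_groups(population, scenarios, seed=None):
    """Randomly split the population into one group per scenario.

    Re-run each quarter (with a fresh shuffle) so every scenario is
    repeatedly tested on a randomized slice of the same population.
    """
    rng = random.Random(seed)
    people = list(population)
    rng.shuffle(people)
    n = len(scenarios)
    # Deal people out round-robin so group sizes differ by at most one.
    groups = {s: people[i::n] for i, s in enumerate(scenarios)}
    for s, members in groups.items():
        if len(members) < 30:
            raise ValueError(f"Group for scenario {s} has fewer than 30 people")
    return groups

# Hypothetical population of 120 users: 30 per group for 4 scenarios.
people = [f"user{i}" for i in range(120)]
groups = assign_groups(people, ["A", "B", "C", "D"], seed=42)
```

With the groups fixed for the quarter, each scenario's click ratio can then be tracked over time, and the metric we care about is the per-scenario trend, not the absolute numbers.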
There are, unfortunately, other pitfalls in our metrics that we have to take into account, but those will be the subject of another post (and included in a short document we will publish very soon).
To improve our security and efficiency, we need well-trained people. They don't have to know everything, but they should know enough to make their lives easier and/or safer. One of the difficulties nowadays is catching people's attention, even at the office. Forget about long documents—maybe even short ones. When we, as people, want to learn something, we will probably turn to YouTube first. Short educational videos and micro-learning aren't just buzzwords; they are the current trend in self-education. So, why don't we embrace the trend?
Let's take just one example. Which will create a better learning context: a cheat sheet with some Microsoft Windows shortcuts, or this 47-second video created by GUI ESP?
It’s clear, short, aesthetically pleasant, and likely more memorable than a list of keyboard shortcuts.
As another example, we designed a simple communication to remind our customers about a simple yet important behaviour: locking your computer when you leave it unattended. We based the communication on a simple gesture: hitting the Windows and L keys when you stand up (many people don't know how easy it is to lock a computer, so they don't do it systematically). Our main focus here is to teach them how to do it. As the key combination is the first thing you read, and we associate it with the words "lock" and "leave," we create a way to remember both the key combination (a mnemonic) and when to perform it.
So, as always, think KISSS (Keep It Simple, Stupid and Seductive) and aim for small, precise behaviour changes.
More and more, we see cybersecurity professionals using surveys about attitudes and intentions as performance indicators for their interventions. While a question like "Do you think it is important to use complex passwords?" might give insight into someone's attitude toward password complexity, it is not a good indicator of our human risks. Values, attitudes, intentions, and behaviours are concepts that are easy to confuse. Here is a quick summary of the differences between them and what we should do to reduce the gap.