Click ratio is a useless metric for phishing!

I do not think it is still necessary to explain that phishing is a major threat to businesses and individuals. By now, most companies run one form of phishing training or another. But are we sure these exercises work?

If we want to measure our training's effectiveness, we often run regular phishing exercises and measure the results. If our phishing education is effective, we should see a downward trend, right? If we run an exercise every quarter, we should obtain something like this:

[Figure: typical phishing metrics — click ratio per quarterly exercise]

Looks good, doesn't it? Except we don't know why there is a bump in the numbers in Q4. Is our training not working? Maybe it is due to end-of-year exhaustion. Who knows? Or maybe the scenario we used in Q4 is simply more relevant to our context. Context is a key factor influencing phishing susceptibility, and unfortunately it is hard to measure. So we can neither accurately predict nor define a level of efficacy for our phishing scenarios. In short, comparing click ratios between different scenarios is utterly useless for measuring progress and phishing risk reduction. So, what do we do instead?

Siadati et al. published an excellent article in 2017 highlighting this very issue. Since the variance between scenarios can be as high as 40% (our own research showed it could be up to 60%), we cannot rely on inter-scenario comparisons to measure the effectiveness of our training. To put it another way, the difference in the percentage of people clicking on a phishing link between two phishing scenarios sent to the same people at the same time can be as high as 60%.

Instead, they suggested a system that uses multiple scenarios in parallel. The same scenarios are used repeatedly with different, randomized groups of the population. Applied to our example, this would give the following:

As you can see, we now have the same four scenarios sent to four groups of people in our population. Notice the 27% gap between scenario C and scenario D in Q1, just like in our first example. But now we don't really care about the click ratio itself. What we want to see is a downward trend for each scenario, and that is what we get. Same scenarios, same people, and a totally different, far more accurate measurement of our progress.
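To make the protocol concrete, here is a minimal sketch in Python of the bookkeeping it implies: randomize the population into as many groups as there are scenarios, replay the same scenarios every quarter, and only ever compare a scenario with itself across quarters. The scenario labels, function names, and campaign-log structure are illustrative assumptions on our part, not part of Siadati et al.'s tooling.

```python
import random
from collections import defaultdict

SCENARIOS = ["A", "B", "C", "D"]  # illustrative labels, one per parallel scenario


def assign_groups(population, n_groups=len(SCENARIOS), seed=2024):
    """Randomly split the population into n_groups roughly equal groups."""
    people = list(population)
    random.Random(seed).shuffle(people)  # fixed seed only so the example is reproducible
    return [people[i::n_groups] for i in range(n_groups)]


def click_ratio(outcomes):
    """outcomes: iterable of booleans, True meaning the person clicked the simulated link."""
    outcomes = list(outcomes)
    return sum(outcomes) / len(outcomes) if outcomes else 0.0


def per_scenario_trend(campaign_log):
    """campaign_log: list of (quarter, scenario, clicked) tuples from every exercise.

    Returns {scenario: {quarter: click_ratio}} so each scenario is only ever
    compared with itself across quarters, never with the other scenarios.
    """
    by_scenario = defaultdict(lambda: defaultdict(list))
    for quarter, scenario, clicked in campaign_log:
        by_scenario[scenario][quarter].append(clicked)
    return {
        scenario: {q: click_ratio(r) for q, r in sorted(quarters.items())}
        for scenario, quarters in by_scenario.items()
    }
```

Feeding this a log such as [("Q1", "A", True), ("Q1", "B", False), ...] yields one time series per scenario; the thing to look for is that each series goes down quarter over quarter, not how the scenarios compare with each other.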

This protocol requires a yearly plan (which we should have anyway) and a population large enough to put at least 30 people in each group (a common rule of thumb for statistically meaningful results).
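As a quick back-of-the-envelope check (the function names are ours, and the 30-person floor is the rule of thumb mentioned above, not a hard statistical law):

```python
MIN_GROUP_SIZE = 30  # rule-of-thumb floor mentioned above


def required_population(n_scenarios: int = 4, min_group_size: int = MIN_GROUP_SIZE) -> int:
    """Smallest population that still gives every randomized group the minimum size."""
    return n_scenarios * min_group_size


def can_run_protocol(population_size: int, n_scenarios: int = 4) -> bool:
    return population_size >= required_population(n_scenarios)


print(required_population())   # 120 people needed for four parallel scenarios
print(can_run_protocol(95))    # False: use fewer scenarios or accept smaller groups
```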

There are, unfortunately, other pitfalls in our metrics that we have to take into account, but those will be the subject of another post (and of a short document we will publish very soon).

Reference:
Siadati, H., Palka, S., Siegel, A., & McCoy, D. (2017). Measuring the effectiveness of embedded phishing exercises. In Proceedings of the 10th USENIX Workshop on Cyber Security Experimentation and Test (CSET '17). https://www.usenix.org/conference/cset17/workshop-program/presentation/siadatii
