Author Archives: enicaise

Is computer-based training effective at preventing phishing?

Cybersecurity managers and CISOs often ask why they should run regular phishing exercises rather than just have their users follow a yearly computer-based training (CBT), as they do for most other topics.

Besides the commercial sales pitch a security firm could deliver (full disclosure: besides my research activities, I also work as a security consultant), what does science say about it?

Not so much, unfortunately.

One of the first attempts to answer this question was made by Aaron Ferguson in 2005. Ferguson is an NSA visiting professor at the famous West Point US Military Academy. He sent phishing emails to 512 West Point cadets after they had received four hours of computer security instruction. The phishing email, called the West Point Carronade by Ferguson, tricked 80% of the cadets. While the scenario was quite targeted and the context highly favourable, the attack was remarkably successful despite the training.

In 2010, Davinson & Sillence trained users using “Anti-Phishing Phil”, an online game about email phishing. Their goal was to evaluate the impact of the communicated level of risk on users’ behaviour. In their own words, “There was no effect of the training programme on secure behaviour in general”. Unfortunately, they did not measure the users’ actual behaviour before and after the training.

In 2013, Jansson & von Solms conducted a series of phishing exercises at an academic institution in South Africa. They ran four scenarios in parallel on seven different groups, in two waves. The subjects who clicked during the first wave received embedded training (meaning the link they clicked or the attachment they opened displayed a warning about their insecure behaviour) and a warning email. They also had the opportunity to follow an online training by clicking on a link displayed in the warning page. The next week, the same users received either the same email or a different one. There were 42.63% fewer clicks during the second wave than the first. This seems to indicate that simple feedback and a short training right after a click (embedded training) can reduce phishing susceptibility.

In 2019, Gordon et al. conducted a series of phishing exercises on 5,416 employees of a US healthcare institution. After the 15th exercise, they identified the “offenders” (those who had clicked five times or more in the previous exercises). They provided computer-based training to these offenders and continued to measure their results over the next five exercises. While the phishing exercises reduced the click rate for both the offender and non-offender groups, the CBT provided to the offenders did not decrease their click rates compared with the non-offender (low-risk) group.

That is not a lot of data to form an opinion on. Still, it seems that a simple warning embedded in the page shown to people who click on a phishing exercise’s link is more effective at reducing phishing susceptibility than a CBT.


  • Davinson, N., & Sillence, E. (2010). It won’t happen to me: Promoting secure behaviour among internet users. Computers in Human Behavior, 26(6), 1739–1747. Scopus.
  • Ferguson, A. J. (n.d.). Fostering E-Mail Security Awareness: The West Point Carronade. Retrieved 3 December 2019.
  • Gordon, W. J., Wright, A., Glynn, R. J., Kadakia, J., Mazzone, C., Leinbach, E., & Landman, A. (2019). Evaluation of a mandatory phishing training program for high-risk employees at a US healthcare system. Journal of the American Medical Informatics Association, 26(6), 547–552. Scopus.
  • Jansson, K., & Solms, R. von. (2013). Phishing for phishing awareness. Behaviour & Information Technology.

Is punishment an effective deterrent for phishing?

During a scientific literature review of phishing-related articles, I stumbled onto a fascinating article on the “Deterrent effects of punishment and training on insider security threats” (Kim et al., 2020). All scientific articles deserve attention, but this one caught mine a bit more because punishment, or at least the fear of it, is generally considered an ineffective option to reduce risk, at least when no way to cope with the threat is provided. This assumption is mostly based on research performed on health-related communication, and such studies tend to measure an attitude (how we feel about something) or an intention (what we think we will do in a given context) rather than an actual behaviour.

Also, phishing is, from my point of view, a specific case, as it often occurs as an “accident” during a “normal” activity (going through and reading our emails). Hence, it is more likely linked to a lack of good habits and vigilance than to a disregard for cybersecurity policies.

On the other hand, our vigilance depends on the context. If we consider any email as suspicious, we will probably be less likely to fall for a phishing email. However, this might create an additional cognitive workload and increase users’ stress levels (or not; to my knowledge, it has not been evaluated).

Kim et al. tested the effect of punishment in a real-life setting using an interesting paradigm. To avoid the contextual bias of a typical laboratory experiment, they performed their study in a governmental organisation in Korea. They sent a first phishing email to a group of employees, then split the people who failed the test into two groups: one that received a punishment (a visit from the security team, a temporary loss of network access and the threat of a bad mark on their annual performance review) and a second, control group that went unpunished.

Twenty weeks later, they sent a second phishing email and compared the click rates between the two groups: 17.5% of the punished group clicked on the second email’s link, versus 43.2% of the unpunished group. Although the sample size is relatively limited (101 persons in total for both groups), the effect is significant (p=0.005). It is also noticeable that the results differed between people with low and high positions in the organisation: in the punished group, low-position employees clicked significantly less than their high-position colleagues (7.1% vs 46.7%, p=0.002), while in the unpunished group the same trend did not reach significance (37.8% vs 71.4%, p=0.210).
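As a quick sanity check on the headline effect, the difference between the two click rates can be reproduced with a two-proportion z-test. The group sizes below are my own illustrative assumptions (7/40 and 16/37 happen to match the reported 17.5% and 43.2%; the paper’s exact split of the participants is not given here), so treat this as a sketch, not a reproduction of the authors’ analysis:

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for the difference between two click rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Illustrative counts chosen to match the reported rates (17.5% vs 43.2%);
# the study's actual group sizes may differ.
z, p = two_proportion_z_test(7, 40, 16, 37)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Even with these assumed counts the difference comes out clearly significant, in the same direction as the paper’s reported result (an exact test on the real counts would of course give a slightly different p-value).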

These results are based on a tiny sample and must be treated with the necessary scientific doubt. Still, they raise some questions. Twenty weeks is a very long time: punishment appears more efficient in the long term than any training (the effect of phishing-exercise training lasts for a month, at most three, depending on the research).

Also, as Kelsey Piper reminds us in an article published on Vox, context matters. Results obtained in one context can often hardly be replicated in another. Still, we should put that to the test and measure whether punishment can effectively reduce the risk of phishing in other contexts.


  • Kim, B., Lee, D.-Y., & Kim, B. (2020). Deterrent effects of punishment and training on insider security threats: A field experiment on phishing attacks. Behaviour & Information Technology, 39(11), 1156–1175. DOI: 10.1080/0144929X.2019.1653992
  • Piper, K. (2020). Why we can’t always be “nudged” into changing our behaviour. Vox.

Behavioural aspects of Cybersecurity: a systematic review

In April 2019, ENISA, the European Network and Information Security Agency, published an outstanding document (two in fact): “Cybersecurity Culture Guidelines: Behavioural Aspects of Cybersecurity”.

This document is the result of four evidence-based reviews of the human aspects of cybersecurity performed by different scholars: two on the use and effectiveness of social-science models to improve cybersecurity, one on qualitative studies, and one on current practice within organisations. The details of these reviews are presented in the second document: “Technical Annex: Evidence Reviews – Review of Behavioural Sciences Research in the Field of Cybersecurity”.

The guidelines are a 30-page document that includes recommendations on a process and metrics to improve the cybersecurity culture in an organisation.

The annex is barely longer and offers a condensed summary of the relevant research, its weaknesses, and its strong points. It should interest any scholar working on the topic.

You will also find a link to ConstructDB, the incredible database of 792 theoretical constructs gathered by Becker & Sasse (2018) during their review of scientific literature on the subject. An outstanding work!

The current literature is nicely summarized in the following five points, extracted from the report:

  • Over-reliance on self-report measures:
    The vast majority of studies relied on self-report measures of cyber-security behaviours (or intention to behave). This is potentially problematic since self-reporting does not always correlate with actual behaviour (Wash, Rader & Fennel, 2017). However, there was some evidence of more creative measures being used.
  • The poverty of models used:
    Three models for understanding the human aspects of cybersecurity dominated the studies reviewed. Both Protection Motivation Theory (PMT; see our recent short article on this theory) and the Theory of Planned Behaviour (TPB) were well represented in the review. Unfortunately, it seems that the research literature has not moved beyond these pathway models to consider wider contexts or to integrate insights from behaviour change or persuasive design.
  • Threat models do not help to predict behaviour:
    Within PMT, people’s motivation to protect themselves from potential threats is determined by the relative balance between the severity and likelihood of the threat and the potential ways to cope with it. Studies that have examined the role of threat in predicting this protection motivation have reported weak, neutral or even negative effects of increasing threat appraisal on the motivation to take protective action. Similarly, studies of increased severity of punishment for violating IS policies have found that it can backfire and lead to less compliance.
  • Coping models have more value predicting behaviour:
    On the other hand, studies that have included the resources people have to cope with a threat have produced more promising outcomes. The second element of protection motivation theory is the individual’s appraisal of their likely response to a threat, both in terms of the likely efficacy of the response and their own ability to complete the required response. These two factors are commonly referred to as ‘response efficacy’ and ‘self-efficacy’. In later PMT models, the cost of completing the response was also factored into the model. Within the theory of planned behaviour, ‘self-efficacy’ or ‘perceived behavioural control’ are used to signify the users’ belief in their ability to complete the desired behaviour. Across studies of PMT and TPB, coping / self-efficacy was a reliable, moderately strong predictor of cyber-security intention and behaviour. This suggests that interventions that seek to improve users’ ability to respond appropriately to cyber-threats (and belief that those responses will be effective) are more likely to yield positive results than campaigns based around stressing the threat.
  • Demographics and personality are not particularly useful:
    Relatively few studies in the review also studied personality or demographics (e.g. age, gender). Those that did found mixed results, with both older and young users often being found to be vulnerable and gender only sometimes linking to security behaviour or attitudes. Personality rarely linked to security behaviour in a consistent way, although there was some evidence that models of general decision making might be more predictive.
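The threat-versus-coping trade-off in the points above can be sketched as a toy scoring function. Everything here is an assumption for illustration: the variable names and weights are not fitted to any dataset, they simply mirror the review’s finding that coping appraisal (response efficacy and self-efficacy, net of response cost) predicts behaviour better than threat appraisal does:

```python
def protection_motivation(severity, likelihood,
                          response_efficacy, self_efficacy, response_cost,
                          w_threat=0.1, w_coping=0.6, w_cost=0.3):
    """Toy PMT score with inputs assumed normalised to [0, 1].

    Threat appraisal is weighted weakly and coping appraisal strongly,
    reflecting the review's findings; the weights are illustrative only.
    """
    threat_appraisal = severity * likelihood
    coping_appraisal = (response_efficacy + self_efficacy) / 2
    return (w_threat * threat_appraisal
            + w_coping * coping_appraisal
            - w_cost * response_cost)

# Compare two hypothetical campaigns against the same baseline user:
base = protection_motivation(0.5, 0.5, 0.5, 0.5, 0.2)
fear = protection_motivation(0.9, 0.5, 0.5, 0.5, 0.2)    # raise perceived severity
skills = protection_motivation(0.5, 0.5, 0.5, 0.9, 0.2)  # raise self-efficacy
print(f"fear gain: {fear - base:.3f}, skills gain: {skills - base:.3f}")
```

Under these (assumed) weights, the self-efficacy intervention moves the score far more than the fear appeal, which is the qualitative pattern the ENISA review describes.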