Blog

Is Punishment an effective deterrent for Phishing?

During a scientific literature review of phishing-related articles, I stumbled onto a fascinating article on the “Deterrent effects of punishment and training on insider security threats” (Kim et al., 2020). All scientific articles deserve attention, but this one caught mine a bit more because punishment, or at least the fear of it, is generally considered an ineffective way to reduce risk unless it is paired with a way to cope with the threat. This assumption is often based on research in health-related communication, which tends to measure an attitude (how we feel about something) or an intention (what we think we will do in a given context) rather than an actual action.

Also, phishing is, from my point of view, a specific case, as it often occurs as an “accident” during a “normal” activity (going through and reading our emails). Hence, it is more likely linked to a lack of good habits and vigilance than to a disregard for cybersecurity policies.

On the other hand, our vigilance depends on the context. If we treat every email as suspicious, we will probably be less likely to fall for a phishing email. However, this might create an additional cognitive workload and increase users’ stress levels (or not: to my knowledge, this has not been evaluated).

Kim et al. tested the effect of punishment in a real-life setting using an interesting paradigm. To avoid the contextual bias of the typical laboratory experiment, they ran their study in a governmental organisation in South Korea. They sent a first phishing email to a group of employees, then split the people who failed the test into two groups: one that received a punishment (a visit from the security team, a temporary loss of network access, and the threat of a negative note in their annual performance review) and a second, control group of unpunished people.

Twenty weeks later, they sent a second phishing email and compared the click rates of the two groups: 17.5% of the punished group clicked on the second email’s link, versus 43.2% of the unpunished group. Although the sample size is relatively small (101 people in total across both groups), the effect is significant (p=0.005). It is also noticeable that click rates differed with employees’ position in the organisation: lower-ranking employees clicked less than their higher-ranking colleagues, significantly so in the punished group (7.1% vs 46.7%, p=0.002) but not in the unpunished group (37.8% vs 71.4%, p=0.210).
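Out of curiosity, the reported figures can be checked. The only split of the 101 participants that reproduces the published percentages is 57 punished (10 clicked, 17.5%) and 44 unpunished (19 clicked, 43.2%); these group sizes are my reconstruction, not numbers taken from the paper. A minimal Python sketch of a Pearson chi-square test on that assumed table:

```python
import math

# Hypothetical reconstruction of the 2x2 table from the reported rates:
# 10/57 punished and 19/44 unpunished employees clicked the second link.
clicked = [10, 19]
totals = [57, 44]

def chi_square_2x2(clicked, totals):
    """Pearson chi-square (no continuity correction) for a 2x2 table."""
    n = sum(totals)
    col_clicked = sum(clicked)
    chi2 = 0.0
    for c, t in zip(clicked, totals):
        for observed, col_total in ((c, col_clicked), (t - c, n - col_clicked)):
            expected = t * col_total / n
            chi2 += (observed - expected) ** 2 / expected
    # For 1 degree of freedom, the p-value is erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = chi_square_2x2(clicked, totals)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # chi2 = 7.97, p = 0.005
```

Under these assumed group sizes, the uncorrected test indeed yields the reported p=0.005, which is reassuring even if the paper’s exact counts and test may differ.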

These results are based on a tiny sample and must be treated with the necessary scientific doubt. Still, they raise some questions. Twenty weeks is a very long time: punishment appears to remain effective far longer than any training (depending on the research, phishing exercises maintain their effectiveness for a month, three at most).

Also, as Kelsey Piper reminds us in an article published on Vox, context matters. Results obtained in one context can often hardly be replicated in another. Still, we should put this to the test and measure whether punishment can effectively reduce the risk of phishing in other contexts.

References:

  • Bora Kim, Do-Yeon Lee & Beomsoo Kim (2020) Deterrent effects of punishment and training on insider security threats: a field experiment on phishing attacks, Behaviour & Information Technology, 39:11, 1156-1175, DOI: 10.1080/0144929X.2019.1653992
  • Kelsey Piper (2020) Why we can’t always be “nudged” into changing our behaviour, Vox.com, https://www.vox.com/future-perfect/2020/2/26/21154466/research-education-behavior-psychology-nudging

Behavioural aspects of Cybersecurity: a systematic review

In April 2019, ENISA, the European Network and Information Security Agency, published an outstanding document (two in fact): “Cybersecurity Culture Guidelines: Behavioural Aspects of Cybersecurity”.

This document is the result of four evidence-based reviews of the human aspects of cybersecurity, performed by different scholars: two on the use and effectiveness of models from social science applied to improve cybersecurity, one on qualitative studies, and one on current practice within organisations. The details of these reviews are presented in the second document: “Technical Annex: Evidence Reviews – Review of Behavioural Sciences Research in the Field of Cybersecurity”.

The guidelines are a 30-page document including recommendations on a process and on metrics to improve the cybersecurity culture of an organisation.

The annexes are barely longer and offer a condensed summary of the relevant research, its weaknesses, and its strong points. They should interest any scholar working on the topic.

You will also find a link to ConstructDB, the incredible database of 792 theoretical constructs gathered by Becker & Sasse (2018) during their review of scientific literature on the subject. An outstanding work!

The current state of the literature is nicely summarised in the five following points, extracted from the report:

  • Over-reliance on self-report measures:
    The vast majority of studies relied on self-report measures of cyber-security behaviours (or intention to behave). This is potentially problematic since self-reporting does not always correlate with actual behaviour (Wash, Rader & Fennel, 2017). However, there was some evidence of more creative measures being used.
  • The poverty of models used:
    Two models for understanding human aspects of cyber-security dominated the studies reviewed. Both Protection Motivation Theory (see our recent short article on this theory) and the Theory of Planned Behaviour (TPB) were well represented in the review. Unfortunately, it seems that the research literature has not moved beyond these pathway models to consider wider contexts or to integrate insights from behaviour change or persuasive design.
  • Threat models do not help to predict behaviour:
    Within PMT, people’s motivation to protect themselves from potential threats is determined by the relative balance of the severity and likelihood of the threat and the potential ways to cope with that threat. Studies that have studied the role of threat in predicting this motivation towards protection have reported either weak, neutral or even negative effects of increasing threat appraisal on motivation to take protective action. Similarly, studies of increased severity of punishment for violation of IS policies have found that they can backfire and lead to less compliance.
  • Coping models have more value predicting behaviour:
    On the other hand, studies that have included the resources people have to cope with a threat have produced more promising outcomes. The second element of protection motivation theory is the individual’s appraisal of their likely response to a threat, both in terms of the likely efficacy of the response and their own ability to complete the required response. These two factors are commonly referred to as ‘response efficacy’ and ‘self-efficacy’. In later PMT models, the cost of completing the response was also factored into the model. Within the theory of planned behaviour, ‘self-efficacy’ or ‘perceived behavioural control’ are used to signify the users’ belief in their ability to complete the desired behaviour. Across studies of PMT and TPB, coping / self-efficacy was a reliable, moderately strong predictor of cyber-security intention and behaviour. This suggests that interventions that seek to improve users’ ability to respond appropriately to cyber-threats (and belief that those responses will be effective) are more likely to yield positive results than campaigns based around stressing the threat.
  • Demographics and personality are not particularly useful:
    Relatively few studies in the review also studied personality or demographics (e.g. age, gender). Those that did found mixed results, with both older and young users often being found to be vulnerable and gender only sometimes linking to security behaviour or attitudes. Personality rarely linked to security behaviour in a consistent way, although there was some evidence that models of general decision making might be more predictive.

Protection Motivation Theory (PMT)

Many theories are used to explain and predict human behaviour. Protection Motivation Theory is one of those theories sometimes used by cybersecurity professionals to prepare their programs. Is it a good choice?

Ronald W. Rogers proposed the Protection Motivation Theory (Rogers, 1975) to explain the effect of fear appeals in communication on the audience’s attitude change. Initially, Rogers developed PMT to explain health-related behavioural changes, like the impact of fear appeals on smokers’ behaviour. In 1983, Maddux and Rogers revised the model to include self-efficacy as an influencing factor (Maddux & Rogers, 1983).

PMT supposes an effect of the perceived efficacy of the coping response, the perceived self-efficacy to perform that response, and the perceived probability of the threat on the attitude towards the coping response. We summarised the different variables and their effects in the figure below: Protection Motivation Theory – variables and effects.
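To make the model’s structure concrete, here is a toy sketch of how the revised PMT combines threat appraisal and coping appraisal into protection motivation. This is my illustration only: the variable names follow the theory, but the 0–1 scales and the way the terms are combined are assumptions, not equations from Rogers’ papers.

```python
def protection_motivation(perceived_severity, perceived_vulnerability,
                          response_efficacy, self_efficacy, response_cost):
    """Toy PMT sketch: all inputs are subjective appraisals on a 0-1 scale.

    Threat appraisal rises with perceived severity and vulnerability;
    coping appraisal rises with response efficacy and self-efficacy and
    falls with response cost. How the two appraisals combine is not fixed
    by the theory; a simple sum is used here purely for illustration.
    """
    threat_appraisal = perceived_severity * perceived_vulnerability
    coping_appraisal = response_efficacy * self_efficacy - response_cost
    return threat_appraisal + coping_appraisal

# All else being equal, a user who feels able to respond (high
# self-efficacy) is predicted to be more motivated than one who does not.
able = protection_motivation(0.8, 0.6, 0.7, 0.9, 0.2)
unable = protection_motivation(0.8, 0.6, 0.7, 0.2, 0.2)
print(able > unable)  # True
```

The sketch mirrors the finding discussed below: in this structure, raising self-efficacy moves motivation reliably, whereas raising threat appraisal alone can be offset by a weak coping appraisal.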

PMT is now also used in information security contexts by different researchers. As Menard et al. (2017) showed in their literature review of PMT, its application to the information security field gives mixed results.

It was mainly used to explain the impact of threat perception and perceived self-efficacy on changes in security behaviours or attitude in a population (Chou & Sun, 2017; Grimes & Marquardson, 2019; Ismail et al., 2017; Jansen & van Schaik, 2018; Menard et al., 2017; Milne et al., 2009).

If we take the specific case of phishing, these studies did not provide a dedicated model. Still, they suggest that perceived self-efficacy and threat perception might play a role in the process of detecting phishing emails.

It is an interesting model for health prevention professionals, but probably not for human-centric cybersecurity ones.

Bibliography

  • Chou, H.-L., & Sun, J. C.-Y. (2017). The moderating roles of gender and social norms on the relationship between protection motivation and risky online behavior among in-service teachers. Computers & Education, 112, 83–96. https://doi.org/10.1016/j.compedu.2017.05.003
  • Grimes, M., & Marquardson, J. (2019). Quality matters: Evoking subjective norms and coping appraisals by system design to increase security intentions. Decision Support Systems, 119, 23–34. Scopus. https://doi.org/10.1016/j.dss.2019.02.010
  • Ismail, K. A., Singh, M. M., Mustaffa, N., Keikhosrokiani, P., & Zulkefli, Z. (2017). Security Strategies for Hindering Watering Hole Cyber Crime Attack. Procedia Computer Science, 124, 656–663. https://doi.org/10.1016/j.procs.2017.12.202
  • Jansen, J., & van Schaik, P. (2018). Testing a model of precautionary online behaviour: The case of online banking. Computers in Human Behavior, 87, 371–383. Scopus. https://doi.org/10.1016/j.chb.2018.05.010
  • Maddux, J. E., & Rogers, R. W. (1983). Protection motivation and self-efficacy: A revised theory of fear appeals and attitude change. Journal of Experimental Social Psychology, 19(5), 469–479. https://doi.org/10/cbzjj7
  • Menard, P., Bott, G. J., & Crossler, R. E. (2017). User Motivations in Protecting Information Security: Protection Motivation Theory Versus Self-Determination Theory. Journal of Management Information Systems, 34(4), 1203–1230. Scopus. https://doi.org/10.1080/07421222.2017.1394083
  • Milne, G. R., Labrecque, L. I., & Cromer, C. (2009). Toward an understanding of the online consumer’s risky behavior and protection practices. Journal of Consumer Affairs, 43(3), 449–473. Scopus. https://doi.org/10.1111/j.1745-6606.2009.01148.x
  • Rogers, R. W. (1975). A Protection Motivation Theory of Fear Appeals and Attitude Change. The Journal of Psychology, 91(1), 93–114. https://doi.org/10/cb4jgn