In April 2019, ENISA, the European Union Agency for Network and Information Security, published an outstanding document (two, in fact): “Cybersecurity Culture Guidelines: Behavioural Aspects of Cybersecurity”.
This document is the result of four evidence-based reviews of the human aspects of cybersecurity performed by different scholars: two on the use and effectiveness of models from social science for improving cybersecurity, one on qualitative studies, and one on current practice within organisations. The details of these reviews are presented in the second document: “TECHNICAL ANNEX: EVIDENCE REVIEWS – Review of Behavioural Sciences Research in the Field of Cybersecurity”.
The guidelines are a 30-page document with recommendations on a process and metrics to improve cybersecurity culture in an organisation.
The annexes are barely longer and offer a condensed summary of the relevant research, its strengths, and its weaknesses. They should interest any scholar working on the topic.
There you will also find a link to ConstructDB, the incredible database of 792 theoretical constructs gathered by Becker & Sasse (2018) during their review of the scientific literature on the subject. An outstanding work!
The current literature is nicely summarized in the following five points, extracted from the report:
- Over-reliance on self-report measures:
The vast majority of studies relied on self-report measures of cyber-security behaviours (or intentions to behave). This is potentially problematic, since self-reports do not always correlate with actual behaviour (Wash, Rader & Fennell, 2017). However, there was some evidence of more creative measures being used.
- The poverty of models used:
Two models for understanding human aspects of cyber-security dominated the studies reviewed: Protection Motivation Theory (PMT; see our recent short article on this theory) and the Theory of Planned Behaviour (TPB) were both well represented. Unfortunately, it seems that the research literature has not moved beyond these pathway models to consider wider contexts or to integrate insights from behaviour change or persuasive design.
- Threat models do not help to predict behaviour:
Within PMT, people’s motivation to protect themselves from potential threats is determined by the relative balance between the severity and likelihood of the threat and the potential ways to cope with it. Studies that have examined the role of threat in predicting protection motivation have reported weak, neutral, or even negative effects of increasing threat appraisal on the motivation to take protective action. Similarly, studies of increased severity of punishment for violating information security (IS) policies have found that it can backfire and lead to less compliance.
- Coping models have more value predicting behaviour:
On the other hand, studies that have included the resources people have to cope with a threat have produced more promising outcomes. The second element of protection motivation theory is the individual’s appraisal of their likely response to a threat, both in terms of the likely efficacy of the response and their own ability to complete the required response. These two factors are commonly referred to as ‘response efficacy’ and ‘self-efficacy’. In later PMT models, the cost of completing the response was also factored into the model. Within the theory of planned behaviour, ‘self-efficacy’ or ‘perceived behavioural control’ are used to signify the users’ belief in their ability to complete the desired behaviour. Across studies of PMT and TPB, coping / self-efficacy was a reliable, moderately strong predictor of cyber-security intention and behaviour. This suggests that interventions that seek to improve users’ ability to respond appropriately to cyber-threats (and belief that those responses will be effective) are more likely to yield positive results than campaigns based around stressing the threat.
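To make the two appraisal pathways concrete, here is a minimal sketch of how PMT is often operationalised in the regression studies the review covers. PMT is a psychological model, not an algorithm, so the function, variable names, and weights below are hypothetical assumptions for illustration only; the weights were simply chosen to mirror the review’s finding that coping appraisal (response efficacy and self-efficacy) predicts protection motivation more reliably than threat appraisal does.

```python
def protection_motivation(severity, vulnerability,
                          response_efficacy, self_efficacy,
                          response_cost):
    """Toy linear operationalisation of Protection Motivation Theory.

    All inputs are on a 0-1 scale. Threat appraisal combines perceived
    severity and vulnerability of the threat; coping appraisal combines
    response efficacy and self-efficacy, minus the cost of responding.
    """
    threat_appraisal = severity * vulnerability
    coping_appraisal = response_efficacy + self_efficacy - response_cost
    # Hypothetical weights: coping dominates, consistent with the review.
    return 0.1 * threat_appraisal + 0.5 * coping_appraisal

# Raising self-efficacy moves the score more than raising threat severity:
baseline = protection_motivation(0.5, 0.5, 0.5, 0.5, 0.2)
more_threat = protection_motivation(0.9, 0.5, 0.5, 0.5, 0.2)
more_coping = protection_motivation(0.5, 0.5, 0.5, 0.9, 0.2)
```

Under these assumed weights, the coping-focused intervention shifts the score far more than the fear appeal, which is the pattern the review reports across PMT and TPB studies.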
- Demographics and personality are not particularly useful:
Relatively few studies in the review also examined personality or demographics (e.g. age, gender). Those that did found mixed results: both older and younger users were often found to be vulnerable, and gender was only sometimes linked to security behaviour or attitudes. Personality rarely linked to security behaviour in a consistent way, although there was some evidence that models of general decision-making might be more predictive.
- The ENISA Cybersecurity Culture Guidelines’ page: https://www.enisa.europa.eu/publications/cybersecurity-culture-guidelines-behavioural-aspects-of-cybersecurity
- The technical annexes’ page: https://www.enisa.europa.eu/publications/cybersecurity-culture-guidelines-technical-annex-evidence-reviews/