Beyond Posters and Phishing Tests: Designing Security Awareness That Changes Behaviour

If you work in security, you’ve probably heard or said some version of this sentence:

“We need more awareness.”

So you roll out an LMS module, schedule a phishing simulation, maybe add a gamified quiz. Completion rates look good, the dashboard is full of green ticks… and yet passwords are still reused, suspicious emails still get clicked, and policy violations keep appearing in the incident reports.

The uncomfortable reality emerging from recent research is that most security awareness and training programmes struggle to produce meaningful, sustained behaviour change. Security leaders invest heavily in SETA (Security Education, Training and Awareness) but only a small fraction of practitioners describe their programmes as truly effective. At the same time, a growing body of systematic literature reviews and expert studies gives us a much clearer picture of what actually drives secure behaviour, and why many current approaches fall short.

This blog article brings together insights from several major academic papers and turns them into a practical narrative for CISOs, security leaders and awareness managers. The core message: if you want different behaviour, you have to treat awareness as a learning and change process – not a communication campaign.

Rethinking What “Awareness” Really Means

In many organisations, “awareness” has become shorthand for artefacts: a poster, a video, an annual e-learning course. But a systematic review of information security awareness in public and private organisations frames it very differently. It describes awareness as a process aimed at changing people’s perceptions, values, attitudes, habits and the wider organisational culture around information security.

That definition is worth sitting with for a moment. It implies that awareness is not just about knowing a few rules. It’s about:

  • noticing risks,
  • caring enough to act securely,
  • developing secure habits,
  • and operating in a culture where secure behaviour is the norm rather than the exception.

SETA programmes are the formal tools we use to move towards that state. Yet the research shows that many programmes are poorly grounded in theory and weakly evaluated. They often boost short-term knowledge or self-reported intention but fail to demonstrate sustained changes in real behaviour, such as fewer successful phishing incidents or better data handling.

At the same time, multiple reviews highlight something else: organisations are not starting from scratch. They already use a surprisingly rich toolbox of methods, ranging from gamification to behavioural models, even if those methods are not always tied into a coherent strategy.

What Organisations Are Actually Doing Today

When researchers looked across organisations, they found a wide variety of methods used to develop security awareness:

  • Many organisations use phishing simulations, just-in-time prompts, or violation detection and feedback to tie training to real behaviour.
  • Case studies, role-playing and scenario-based exercises are common, especially in more mature environments.
  • Some programmes include gamification and serious games: points, leaderboards, scenarios and competitions to make security topics more engaging.
  • Some programmes are explicitly designed using behavioural theories such as Protection Motivation Theory or the Theory of Planned Behaviour. These theories help structure messages about risks and compliance.
  • There are constructivist approaches where employees learn through problem-solving, discussion and real cases rather than passively consuming content.

More recent work on cybersecurity training focuses on specific formats. Phishing simulations and interactive exercises are particularly popular for addressing social engineering. Virtual labs and hands-on environments are often used for technical staff. E-learning remains the backbone for broad, organisation-wide messages.

So the problem is not a lack of ideas. The problem is that we still know too little about which combinations of methods actually change behaviour in the long run. Many evaluations rely on self-reported surveys directly after a training, without any control group or long-term follow-up. Studies often show that people know more and say they will behave better, but we don’t see whether this translates into fewer incidents six or twelve months later.

Researchers call this the behavioural intention–actual behaviour gap: people may intend to follow security policies, but under pressure, with high workload or confusing tools, they still take insecure shortcuts. Awareness that doesn’t account for this gap risks becoming a box-ticking exercise.

The Behavioural Science Under The Surface

If you want to move from vague “awareness” to actual behaviour change, it helps to understand the behavioural science that underpins many of these studies. A large theory-based review of employee security behaviour identifies dozens of models, but four appear again and again. You don’t need to become a behavioural psychologist to use them; think of them instead as a practical design toolkit.

  1. Theory of Planned Behaviour (TPB)
    This theory says that behaviour is shaped by three things: your attitude towards the behaviour, the social norms around you, and your sense of control over performing it. For security, that means employees are more likely to comply if they think it’s worthwhile, see their peers and managers doing the same, and feel that the secure way is realistic in their daily work. If any of those three elements is missing, compliance drops.
  2. Protection Motivation Theory (PMT)
    PMT distinguishes between how people appraise threats and how they appraise coping strategies. Fear alone is not enough. People need to understand that the threat is serious and relevant, but they also need to believe that the recommended action is effective and that they are capable of doing it. The most effective security messages balance realistic risk with high self-efficacy and clear, achievable responses.
  3. General Deterrence Theory (GDT)
    This is the classic “sanctions” view: if violations are likely to be detected and punished, people are less likely to violate rules. But research suggests that what really matters is perceived certainty and fairness of enforcement, not just the harshness of penalties. Over-reliance on fear of punishment can also backfire by undermining trust or driving issues underground.
  4. Technology Acceptance Model (TAM)
    TAM focuses on how people adopt systems. They are more likely to use a security control if they perceive it as useful and easy to use. This is painfully familiar to most security teams: if the “secure way” is slower, clunkier or poorly integrated into workflows, users will try to bypass it, no matter how much training they’ve had.

Taken together, these theories suggest a simple but powerful principle: awareness content should not exist in a vacuum. It has to align with how people perceive threats, what they think their peers are doing, how usable the tools are, and how the organisation handles violations.

Awareness as Teaching and Learning, Not Just Communication

One of the more interesting recent studies approaches cybersecurity awareness explicitly as a teaching–learning process. Instead of treating it like a marketing campaign (“push messages, hope something sticks”), the authors look at it through an educational lens.

Several ideas emerge from this perspective:

  • Employees don’t start from zero. They already have beliefs and mental models about security. Some might be accurate, some could be dangerously wrong (“I’m too unimportant to be targeted”, “IT will always catch the bad stuff”). Effective awareness has to surface and challenge these misconceptions, not simply add more facts on top.
  • Learning is often triggered by cognitive dissonance: that uncomfortable feeling when you realise that your current behaviour doesn’t match your values or what you now know to be true. Realistic scenarios, incident stories and reflections can create this kind of productive tension.
  • Social and experiential learning matter. People learn powerfully from discussing real incidents, sharing experiences, and trying things out in a safe environment. A 20-minute click-through module cannot replicate what happens in an interactive workshop or red-team debrief.

Putting this together, awareness looks less like a set of isolated messages and more like a learning journey:

  1. Understand what people currently believe and how they behave.
  2. Confront those beliefs with realistic examples that gently but clearly reveal their limitations.
  3. Offer better explanatory models and concrete strategies.
  4. Reinforce and support new behaviours in real work contexts.

Research shows that while some organisations use such constructivist approaches, they are still the exception rather than the rule. Most programmes remain heavily focused on telling rather than teaching.

Designing Awareness That Actually Changes Behaviour

So how do you translate all of this into a concrete strategy?

First, start from behaviour and context, not from content. Instead of asking “What topics should we cover this year?”, ask “Which few behaviours would most reduce our risk if we changed them?” That might be reporting suspicious emails promptly, using secure channels for customer data, or following change processes in IT environments. Once you know the behaviours, look closely at the context: remote work, open-plan offices, frontline operations, public sector constraints. All of these shape what is realistic for people.

Second, use behavioural theory as a design checklist rather than a citation in a slide deck. When you craft a campaign or training:

  • Ask how it affects attitudes (“Does this make the secure option seem sensible and worthwhile?”).
  • Think about norms (“Do we show that people like them, and especially leaders, behave this way?”).
  • Consider perceived control (“Are we giving them clear steps and tools that actually work under pressure?”).
  • Check threat and coping appraisals (“Are we creating realistic concern, but also showing that their actions truly make a difference?”).
  • Reflect on usability (“Is the secure path the easiest path in the tools and processes they use?”).

Third, design programmes as coherent journeys rather than disconnected events. A phishing simulation, a workshop and a short e-learning module can reinforce each other if they share common language and concepts. For example, a simulated phishing campaign might feed into a discussion where employees analyse real examples, share experiences, and co-create reporting strategies. Short follow-up nudges and just-in-time prompts can then keep those behaviours visible in daily work.

Fourth, invest in conceptual change, not just recall. Instead of checking whether people can remember policy text, focus on whether their underlying assumptions have shifted. Are they still saying “It won’t happen to me”? Are they still convinced that security is purely IT’s job? Designing scenarios and questions that pull these beliefs into the open is one of the most powerful things you can do.

Finally, measure what actually matters. Completion rates and quiz scores tell you almost nothing about real risk. Where possible, track behavioural indicators: click and report rates in phishing campaigns, frequency and type of policy violations, time to report incidents, adoption of secure configurations, and so on. You don’t have to run a full academic experiment, but simple approaches like staggered rollouts or control groups can give you a much clearer sense of what’s working.
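To make those behavioural indicators concrete, here is a minimal sketch that turns raw phishing-simulation records into click rate, report rate, and median time-to-report. The field names and sample data are hypothetical, not tied to any specific platform:

```python
from statistics import median

# Hypothetical phishing-simulation records: one dict per recipient.
# Field names are illustrative only.
records = [
    {"clicked": True,  "reported": False, "minutes_to_report": None},
    {"clicked": False, "reported": True,  "minutes_to_report": 12},
    {"clicked": False, "reported": True,  "minutes_to_report": 45},
    {"clicked": True,  "reported": True,  "minutes_to_report": 90},
    {"clicked": False, "reported": False, "minutes_to_report": None},
]

def campaign_indicators(records):
    """Summarise one campaign into the behavioural indicators above."""
    n = len(records)
    click_rate = sum(r["clicked"] for r in records) / n
    report_rate = sum(r["reported"] for r in records) / n
    report_times = [r["minutes_to_report"] for r in records
                    if r["minutes_to_report"] is not None]
    median_report = median(report_times) if report_times else None
    return {"click_rate": click_rate,
            "report_rate": report_rate,
            "median_minutes_to_report": median_report}

print(campaign_indicators(records))
# → {'click_rate': 0.4, 'report_rate': 0.6, 'median_minutes_to_report': 45}
```

Tracked per campaign over time, even simple numbers like these reveal trends that completion rates and quiz scores cannot.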

The Real-World Constraints And How To Work Within Them

Of course, all of this sits within the constraints of everyday organisational life. Budgets are tight, people are busy, and security is just one of many competing priorities.

That’s why it often makes sense to start small and deep rather than broad and shallow. Pick one or two high-impact behaviours and design a genuinely thoughtful, theory-informed programme around them. Pilot it with a specific group, measure outcomes, and refine. Once you’ve demonstrated value there, you have a stronger case to expand.
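To make “pilot, measure, refine” tangible, here is a rough sketch, using only the standard library, of how one might compare click rates between a pilot group and a control group with a two-proportion z-test. The group sizes and click counts are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: 18/200 clicks in the pilot group vs 35/200 in control.
z, p = two_proportion_z(18, 200, 35, 200)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A result like this helps you argue that the pilot group’s lower click rate is unlikely to be chance variation; if scipy is available, `scipy.stats` offers equivalent (and more robust) tests.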

Diversity in the workforce is another challenge. Developers, factory workers, finance staff and senior executives all experience security differently. The evidence suggests that context really matters, so aim for a common core framework with tailored implementations: the same underlying principles, but different examples, channels and emphases for each audience.

Remote and hybrid work add further complexity. People are more isolated from colleagues, may share devices at home, and juggle blurred boundaries between personal and professional digital life. Here, just-in-time prompts, embedded guidance in tools, and strong leadership role-modelling become especially important. Awareness has to live where work happens, not just in the training platform.

From Checkboxes To Culture

When you pull all these threads together, a clear picture emerges.

Technical controls remain essential, but human behaviour is at the heart of most security outcomes. Humans don’t change because we tell them to; they change because their beliefs, norms, tools and incentives shift in a way that makes new behaviours feel natural and worthwhile. Awareness that treats people as passive recipients of information will continue to disappoint.

The good news is that you don’t have to design in the dark. The research base is now rich enough to guide smarter decisions. If you treat awareness and training with the same rigour you apply to firewalls and SIEM rules, grounding it in theory, testing it in practice, and refining it with data, you can move beyond posters and phishing tests to something far more powerful: a security culture in which secure behaviour is simply “how we do things here”.

Sources

Links to research papers that inspired this blog article: