Guiding and Incentivizing Cyber-Security Behavior

 Dr. Eran Toch 

Researcher

Humans are consistently referred to as the weakest link in cyber-security. While cyber-security technologies provide a powerful technical solution, employees’ failure to comply with enterprise security guidelines causes the majority of breaches in enterprise computing. Mounting evidence shows that we can no longer assume users will simply follow the organizational cyber-security policy. To create an effective security environment, we need to understand and develop systems that incentivize users to adopt positive cyber behavior.

Recent studies have focused on modeling user reactions to cyber-security systems, showing that disciplinary action has only a marginal effect on employee compliance with cyber-security systems [1,3], and that by adapting the cyber-security response to users, we can increase overall security [5]. However, established theories of human cyber-security behavior, such as Protection Motivation Theory (PMT) and the Theory of Planned Behavior, take for granted that users must use the system, and thus do not account for the need to incentivize users to use the system and comply with its policies [5]. We turn to the literature of behavioral economics and human-computer interaction as inspiration for finding ways to incentivize and guide users. One major concept we draw on is gamification, defined as applying game and design techniques to non-game applications to engage users. However, to the best of our knowledge, there is still no clear understanding of how gamification can be used in the context of a cyber-security system’s interaction with users, and how non-monetary methods can be used to influence and guide users’ behavior.

We propose to conduct theoretical and empirical research with two objectives in mind: first, to propose and evaluate a theory that explains users’ decision-making given negative and positive incentives; second, to test how we can influence users’ decision-making processes by designing gamification-based incentive systems. By the end of the study, we plan to offer a toolkit for the optimal design of incentive systems for cyber-security that enhance user involvement in interactions with enterprise security systems. Our working hypothesis is that a successful system of incentives can balance the negative incentives by adding explicit gamification incentives. Given that most cyber-security risks are abstract, adding explicit incentives can make a significant and measurable change in users’ behavior. We will test the effects of incentive systems in comparison to standard blocking and warning techniques, how different types of incentives fare (e.g., scoring versus leaderboards), and the relative effects of negative versus positive incentives.
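As an illustrative sketch only (the action names and point values below are hypothetical, chosen for illustration, and would be calibrated in the planned experiments), a scoring-plus-leaderboard incentive scheme of the kind compared above could be modeled as:

```python
from collections import defaultdict

# Hypothetical point values for security-relevant actions.
# Positive values reward compliant behavior; the negative value
# is an explicit (non-monetary) negative incentive.
ACTION_POINTS = {
    "heeded_warning": 10,     # user stopped after a security warning
    "reported_phishing": 25,  # user reported a suspicious message
    "ignored_warning": -15,   # user bypassed a warning
}

scores = defaultdict(int)

def record_action(user: str, action: str) -> int:
    """Update a user's running score for a security-relevant action."""
    scores[user] += ACTION_POINTS[action]
    return scores[user]

def leaderboard(top_n: int = 3):
    """Rank users by score, highest first."""
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

record_action("alice", "heeded_warning")
record_action("alice", "reported_phishing")
record_action("bob", "ignored_warning")
print(leaderboard())  # alice leads with 35 points; bob has -15
```

A scoring condition would expose only each user’s own score, while a leaderboard condition would also expose the ranking; the experiments would compare the two.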

The research will combine theoretical and empirical aspects, taking specific types of enterprise cyber-security scenarios as the basis for a series of experiments with human subjects that aim to evaluate our theory. The research methodology will involve: (1) reviewing the relevant behavioral economics and human-computer interaction models; (2) developing models of human decision-making that reflect a cost/profit behavioral economic model with the possibility of evading the system; (3) empirically evaluating these models using data from real-world cyber-security systems; (4) designing incentive systems for influencing users toward particular outcomes; (5) conducting controlled experiments with human subjects to evaluate decision-making processes and the effects of incentive systems.
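To make step (2) concrete, a minimal sketch of such a cost/profit model (all parameter names and values are hypothetical) treats the user as weighing the cost of complying against the expected cost of evading the control, where evasion risks both detection and an actual breach:

```python
def expected_evasion_cost(detect_prob: float, penalty: float,
                          breach_prob: float, breach_loss: float) -> float:
    """Expected cost of evading the security control:
    chance of being caught times the penalty, plus chance of a
    breach times the resulting loss."""
    return detect_prob * penalty + breach_prob * breach_loss

def user_choice(compliance_cost: float, detect_prob: float, penalty: float,
                breach_prob: float, breach_loss: float) -> str:
    """Rational-actor sketch: the user complies only when compliance
    is cheaper than the expected cost of evasion."""
    evade_cost = expected_evasion_cost(detect_prob, penalty,
                                       breach_prob, breach_loss)
    return "comply" if compliance_cost <= evade_cost else "evade"

# With a low-friction control, compliance wins:
print(user_choice(compliance_cost=1.0, detect_prob=0.3, penalty=5.0,
                  breach_prob=0.1, breach_loss=20.0))  # comply (1.0 <= 3.5)
# With a burdensome control, the model predicts evasion:
print(user_choice(compliance_cost=6.0, detect_prob=0.3, penalty=5.0,
                  breach_prob=0.1, breach_loss=20.0))  # evade (6.0 > 3.5)
```

Gamification incentives enter this model by lowering the perceived cost of compliance (or raising the perceived cost of evasion), which is what the planned experiments would measure.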

The empirical real-world analysis will be based on data provided by Checkpoint Inc., which describes how users in a large enterprise interact with UserCheck, an application that alerts users of suspected breaches and allows them to authorize legitimate communications. Our initial analysis of the data provides a promising start for modeling users’ interaction with the system. The controlled experiments will be based on “Security-Robot”, a technology developed in our lab that is used to recreate cyber-security experiences for users. It can simulate systems such as malware detection and URL filtering using a Chrome browser extension. We plan to measure users’ behavior with well-established measures from the domain of usable security, including behavioral modeling, perceptual impact of security feedback (using eye movement analysis and other means), and user satisfaction measures.
