Insider Threats: Computer Science Researcher Eunice Santos Develops Innovative Framework for Modeling the Threats from Within

Whether it’s a hacker in a basement or a spy from a rogue nation, we tend to think of cybersecurity as defense against attacks from outsiders.

But the most serious threats to an organization’s security (in cyberspace or the physical world) come from within—from a trusted employee, former employee, or business partner.

Examples are numerous. An Anthem employee stole Medicaid data on 18,000 people in 2016–17. A third-party vendor caused a massive data breach at Target in 2013. An employee or a contractor leaked CIA hacking secrets in 2017.

The insider may cause the problem knowingly (theft or espionage) or unknowingly (by falling for a phishing scheme, for example). But whatever the cause, it’s usually very costly for the organization. Insiders are responsible for at least half of data breaches, and insider crimes cost organizations more than other types of cybercrime: an average of nearly $9 million per organization annually, according to a 2018 study by the Ponemon Institute.

Besides implementing measures to protect valuable data and monitor employees, organizations may one day have another option: computer models that identify genuine insider threats and assess the type of threat an insider presents. Such models are difficult to create because human behavior is inherently complex: the drivers of crime vary, and each individual has different triggers, pushes, and pulls that can create opportunities for insider threats.

But one Illinois Tech researcher, Eunice Santos, the Ron Hochsprung Endowed Chair and Professor of Computer Science, and computer science department chair, is leading an initiative in this complex area. Among other accomplishments, Santos is a former senior research fellow of the U.S. Department of Defense’s Center for Technology and National Security Policy.

Working with John Korah, research assistant professor of computer science; Ph.D. students Vairavan Murugappan and Suresh Subramanian; and researchers at Dartmouth College, Santos has proposed a framework that defines eight distinct insider threat types and holds that those types can be identified by measuring three individual qualities: predictability, susceptibility, and awareness (PSA). The work appears as “Modeling Insider Threat Types in Cyber Organizations” in the Proceedings of the IEEE International Symposium on Technologies for Homeland Security (HST).

In the PSA model, “predictability” is the ability to foretell the insider’s reactions to events and to other stimuli to which he or she is exposed. “Susceptibility” is the insider’s tendency to become involved in an action that directly or indirectly affects the organization as a result of external or internal manipulation. “Awareness” is the ability to detect manipulative intent.
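
The arithmetic behind the framework is natural if each PSA quality is treated as a low/high score, since three binary dimensions yield 2³ = 8 combinations, one per threat type. The Python sketch below assumes that mapping; the 0.5 threshold, the type numbering, and the numeric example are illustrative, not values from the paper.

```python
from itertools import product

QUALITIES = ("predictability", "susceptibility", "awareness")

def psa_profile(scores, threshold=0.5):
    """Map numeric PSA scores in [0, 1] to a low/high profile tuple."""
    return tuple("high" if scores[q] >= threshold else "low" for q in QUALITIES)

# Three low/high qualities give 2**3 = 8 candidate insider threat types.
for i, combo in enumerate(product(("low", "high"), repeat=3), start=1):
    print(f"Type {i}:", dict(zip(QUALITIES, combo)))

# Hypothetical insider: easy to predict, easy to manipulate,
# and unlikely to notice manipulative intent.
print(psa_profile({"predictability": 0.8, "susceptibility": 0.7, "awareness": 0.2}))
```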

To formulate the initial models for measuring the predictability, susceptibility, and awareness of insiders, Santos leveraged her previous work in socio-cultural modeling, including research on infusing cultural and behavioral factors into social network models.

The researchers then implemented a computational model to demonstrate the viability of the framework with synthetic scenarios based on actual insider threat cases.

Among other things, they tested whether bias could serve as an indicator of predictability, defining “bias” as an inclination or prejudice toward or against a person, entity, or idea. For the initial model of insider predictability, they examined four categories of bias: socio-cultural (arising from age, gender, education, and other factors); emotional (whether the insider is anxious, for example); situational (invoked by external events or stimuli); and social network (invoked by the insider’s preference for certain groups or organizations). They found that bias can both increase and decrease predictability, and so is not a good indicator of it.
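
One way to see why bias cuts both ways is to treat predictability as the inverse of the entropy of an insider’s distribution over possible reactions: a bias that locks in one reaction concentrates the distribution and raises predictability, while a bias that competes with an existing habit flattens it and lowers predictability. The toy sketch below illustrates that point; it is not the paper’s model, and the distributions are invented.

```python
import math

def predictability(probs):
    """1 - normalized Shannon entropy over an action distribution:
    1.0 = perfectly predictable, 0.0 = anyone's guess."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return 1.0 - h / math.log2(len(probs))

# A bias that locks the insider into one reaction raises predictability...
no_habit = [0.25, 0.25, 0.25, 0.25]
biased_toward = [0.60, 0.20, 0.10, 0.10]
print(predictability(no_habit), "->", predictability(biased_toward))  # 0.00 -> ~0.21

# ...while a bias that competes with an existing habit lowers it.
habit = [0.85, 0.05, 0.05, 0.05]
biased_away = [0.45, 0.35, 0.10, 0.10]
print(predictability(habit), "->", predictability(biased_away))       # ~0.58 -> ~0.14
```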

On the other hand, they were able to correlate events such as buying an expensive car, getting demoted, or getting divorced with increased susceptibility to bribery or deception, two vulnerabilities that frequently appeared in the team’s surveys of insider case studies. Manipulated insiders were frequently either overtly persuaded to perform malicious acts through bribery or rather innocently co-opted into betrayal through deceit they did not detect, the researchers wrote.
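
As a hedged sketch of how such correlations might feed a susceptibility score, recent life events could each add weight toward vulnerability to bribery or deception. The event names and weights below are invented placeholders; the study reports correlations, not these numbers.

```python
# Hypothetical event weights; none of these values come from the study.
EVENT_WEIGHTS = {
    "expensive_purchase": 0.25,  # e.g., buying an expensive car
    "demotion": 0.30,
    "divorce": 0.30,
}

def susceptibility(baseline, recent_events):
    """Toy score in [0, 1]: baseline plus a weight for each observed event."""
    score = baseline + sum(EVENT_WEIGHTS.get(event, 0.0) for event in recent_events)
    return min(score, 1.0)

print(susceptibility(0.10, []))                       # 0.10
print(susceptibility(0.10, ["demotion", "divorce"]))  # 0.70
```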

In investigating awareness, they explored the effect of three manipulation techniques: trust-based, empathy-based, and false identity-based manipulation. They were able to show that someone who had gone through a bad romance, for example, and was then subjected to a phishing attack had decreased awareness compared with the average technology worker.
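
In the same spirit, the awareness finding can be read as a baseline detection probability lowered by the manipulation technique in play and by recent emotional stress. Every number below is an illustrative assumption, not a parameter from the paper.

```python
# Illustrative multipliers only; none of these numbers come from the paper.
TECHNIQUE_PENALTY = {
    "trust_based": 0.85,
    "empathy_based": 0.80,
    "false_identity": 0.90,  # e.g., a phishing email posing as a colleague
}

def detection_probability(baseline, technique, emotionally_stressed=False):
    """Toy probability that the insider detects manipulative intent."""
    p = baseline * TECHNIQUE_PENALTY[technique]
    if emotionally_stressed:  # e.g., a recent bad romance
        p *= 0.7
    return p

average_worker = detection_probability(0.8, "false_identity")
stressed_insider = detection_probability(0.8, "false_identity", emotionally_stressed=True)
print(f"{average_worker:.2f} vs. {stressed_insider:.2f}")  # 0.72 vs. 0.50
```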

Among the next steps for the PSA model, the researchers propose further refining the eight insider threat types and investigating their relationships to observable malicious insider behavior.
