Abstract: Improved Security Detection & Response Via Optimized Alert Output - A Usability Study

Cut the noise, hone the signal

Once in a while, you get shown the light in the strangest of places if you look at it right ~Garcia/Hunter

I’ve been absent here for many months, but it has been with purpose. My dissertation, Improved Security Detection & Response via Optimized Alert Output: A Usability Study, is complete and I’ve successfully defended it; the pursuit of my PhD is finished, and a new journey begins. I’ll begin by posting the abstract here. I’m in the midst of the dissertation publishing process, but once ready, it will be available in a fully open source capacity, with no paywalls or subscriptions required. I’ll also share all the data (fully anonymized) as well as the statistical routines and analysis in R. I’ll continue to post related artifacts, including the full dissertation, built via the R bookdown and thesisdown packages. I look forward to sharing this research with you, discussing it in a variety of forums, and extending it to additional research opportunities. Stay tuned here for more.
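For readers unfamiliar with that publishing workflow, here is a minimal sketch of what a thesisdown/bookdown build looks like. These are the packages’ documented, default commands and are purely illustrative; they are not the exact steps used to produce my dissertation.

```r
# Illustrative sketch of a thesisdown/bookdown build, using package defaults.

# Install thesisdown (hosted on GitHub) along with its dependencies
install.packages("remotes")
remotes::install_github("ismayc/thesisdown")

# Scaffold a new thesis project from the thesisdown template
rmarkdown::draft("index.Rmd", template = "thesis", package = "thesisdown",
                 create_dir = TRUE)

# Render the full document to PDF (other output formats are also available)
bookdown::render_book("index.Rmd", output_format = "thesisdown::thesis_pdf")
```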

Abstract

Security analysts working in the modern threat landscape face excessive events and alerts, a high volume of false-positive alerts, significant time constraints, innovative adversaries, and a staggering volume of unstructured data. Organizations thus risk data breaches, loss of valuable human resources, reputational damage, and impact to revenue when excessive security alert volume and a lack of fidelity degrade detection services. This study examines tactics to reduce security data fatigue, increase detection accuracy, and enhance the security analyst experience using security alert output generated via data science and machine learning models. The research determines whether security analysts utilizing this security alert data perceive a statistically significant difference in usability between visualized and text-based security alert output. Security analysts benefit twofold: from the efficiency of results derived at scale via ML models, and from the quality of the alerts those same models produce. This quantitative, quasi-experimental, explanatory study presents survey research performed to understand security analysts’ perceptions via the Technology Acceptance Model (TAM). The population studied is security analysts working in a defender capacity, analyzing security monitoring data and alerts. The more specific sample is security analysts and managers in Security Operations Center (SOC), Digital Forensics and Incident Response (DFIR), Detection and Response Team (DART), and Threat Intelligence (TI) roles. Data analysis indicates a significant difference in security analysts’ perception of usability in favor of visualized alert output over text alert output. The study’s results show how organizations can more effectively combat external threats by emphasizing visual rather than textual alerts.
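The full statistical routines and analysis will be shared alongside the data. As a rough, hedged illustration of the kind of group comparison involved, the R sketch below contrasts perceived-usability scores for visualized versus text-based alert output; the file name, column names, and choice of test are assumptions for illustration only, not the study’s actual analysis.

```r
# Hypothetical illustration only: compare perceived-usability scores (TAM-style
# survey items) between visualized and text-based alert output.
# File and column names are assumptions, not the study's actual data.

scores <- read.csv("tam_usability_scores.csv")  # columns: format, usability

# Summarize scores by alert format (visual vs. text)
aggregate(usability ~ format, data = scores, FUN = mean)

# Nonparametric test of a difference in perceived usability between formats
wilcox.test(usability ~ format, data = scores)
```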

