OPM Breach Anniversary: How Far Have We Come?


By Christy Wyatt

It has been two years since we first heard about one of the largest data breaches in the history of the federal government, one that hit the Office of Personnel Management (OPM) and exposed the sensitive personal information of more than 22 million current and former employees. Finally, just last month, the FBI arrested a Chinese national on charges related to the malware used in the breach. From personnel files and fingerprint data to security clearance background information (including extensive behavior and lifestyle details, financial information, and even the names of relatives), the breadth and depth of the information obtained meant devastating consequences that would be felt for years to come, and it left us all wondering: how could that happen?

There’s no denying that, since then, security threats and data breaches have come to border on the ubiquitous. The total number of data breaches in that same year (2015) was 781 – jumping to 1,093 in 2016 and just recently hitting a half-year record high of 791 in 2017, according to the Identity Theft Resource Center. From major enterprises like Yahoo and Verizon to smaller entities like the Bronx Lebanon Hospital Center and wildlife sporting license sites, it’s clear that nobody is immune to security threats or the catastrophic effects that may follow.

But just how far, exactly, have we come in the two years since that catastrophic event, across both the public and private sectors? With news of the recent arrest, I spent some time re-reading the Oversight Committee Report, published in September 2016, and it seemed prudent to resurface some of the challenges that contributed to the OPM data breach and reflect on the progress made since.
 

Outdated and Costly Legacy Systems

The report notes,

“There is a pressing need for federal agencies to modernize legacy IT in order to mitigate the cybersecurity threat inherent in unsupported, end of life IT systems and applications… the agency missed opportunities to prioritize the purchase and deployment of certain cutting-edge tools that would have prevented this attack.”

It’s no secret that the government is plagued with legacy systems. Federal agencies, as a whole, spend over $89 billion annually on IT, but the majority of that money (upwards of 70 percent) goes to maintaining and operating legacy IT systems. The catch is that many of these systems are so old that they can no longer be patched or updated with new security capabilities. This well-known gap in agency security posture is a prime target for malicious actors.

But this isn’t a challenge specific to the public sector. Recent research from the Ponemon Institute shows that companies are still overwhelmingly relying on legacy technologies and governance practices to address potential threat vectors. A prime example: 94 percent of those surveyed indicated that they still use a traditional network firewall to mitigate threats, even as they acknowledge an increasingly complex and evolving threat landscape that now includes things like unsecured IoT devices, botnets, DDoS attacks and anonymized malicious activities.

The bright spot here is that we have options, and there is solid evidence that we are moving in the right direction. Cloud-based infrastructure provides a direct path to modernization and efficiency, particularly for budget-constrained organizations and agencies – and we see those investments, and the associated funds, being prioritized. On both the private and public sector fronts, we’re increasing our overall security spend while shifting investment from prevention-only approaches to those that also put a focus on detection and response.

Hyper-Focus on Outside Adversaries

A related point, as we’re talking about perimeter-focused defenses like network firewalls, is one that’s easy to glaze over in the Committee’s report:

“The OPM data breaches illustrate the challenge of securing large, and therefore high value, data repositories when defenses are geared towards perimeter defenses...”

This is an area where we largely continue to struggle, on all fronts. Despite the increased visibility of insider threats, and the potentially extensive damage they can do, the emphasis – as shown in security budgets and priorities – continues to be placed on external threat vectors. Let me be clear: there is absolutely a need to protect against these, and I’m the first to advocate for building a comprehensive, layered defense. But more often than not, the discussion of insider threat gets drowned out by that of outside risk and hardening perimeter defenses when the fact is that more and more external actors are finding and exploiting vulnerabilities from within.

Even once insider threats are acknowledged, the perception is that incidents and breaches are driven by malicious employees or actors, but a significant portion – up to 68 percent – of the risk is due to employee carelessness. While it’s an oversimplification to immediately link personal email usage to malicious intent, it is impossible to ignore the fact that personal email accounts can absolutely be used as an avenue for data theft. And therein lies the problem for IT: while most people using personal email at work are not doing anything nefarious, how do they find the ones that are? How do they see when an employee is compromised through phishing or other means, or recognize bad actors representing themselves as trusted employees?

The lack of rich intelligence around compromised or malicious users drives the “lock and block” controls across an organization. What is needed is user intelligence to shine a light on where risky behavior is compromising the enterprise, whether maliciously or negligently.
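
To make that contrast concrete, here is a minimal Python sketch of the kind of “lock and block” rule described above; the event fields, domain list, and threshold are hypothetical, chosen only for illustration.

# Hypothetical sketch: a simple "lock and block" style rule over user activity
# metadata. The event fields, domain list, and threshold are invented for
# illustration only.
from collections import defaultdict

PERSONAL_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com"}
DAILY_UPLOAD_THRESHOLD_BYTES = 50 * 1024 * 1024  # 50 MB, an arbitrary cutoff

def flag_heavy_personal_mail_senders(events):
    """Sum bytes sent to personal mail domains per user and flag heavy senders."""
    totals = defaultdict(int)
    for event in events:
        if event["destination_domain"] in PERSONAL_MAIL_DOMAINS:
            totals[event["user"]] += event["bytes_sent"]
    return [user for user, sent in totals.items() if sent > DAILY_UPLOAD_THRESHOLD_BYTES]

sample_events = [
    {"user": "alice", "destination_domain": "gmail.com", "bytes_sent": 80_000_000},
    {"user": "bob", "destination_domain": "corp.example.com", "bytes_sent": 120_000_000},
]
print(flag_heavy_personal_mail_senders(sample_events))  # ['alice']

A rule like this can only count bytes. It has no sense of whether the user was phished, careless, or acting deliberately; that is exactly the contextual gap that user intelligence is meant to fill.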


The Path Forward

To see where risky behavior is compromising the enterprise, CIOs and CISOs need to look inward. Having access to user intelligence in near real time can be an invaluable tool in bridging this gap, enabling an organization to see areas of risk without infringing on user privacy. The addition of context and machine learning to user behavior metadata can help an organization both detect and prevent data breaches at scale. Legacy forms of employee monitoring, like keystroke logging or video capture of screen content, cannot adapt to the modern enterprise or live within today’s employee privacy requirements. Systems that look for internal threats without the addition of user intelligence are missing the critical contextual data that cuts through the noise.
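
As a rough illustration of applying machine learning to behavior metadata, the sketch below uses scikit-learn's IsolationForest to score daily per-user activity against a baseline. The feature columns and sample values are assumptions made for the example, not a description of any particular product's model.

# Minimal sketch, assuming per-user daily behavior metadata has already been
# aggregated into numeric features. The feature columns and sample values are
# assumptions for illustration, not any vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: after-hours logins, files copied to removable media, MB uploaded off-site
baseline = np.array([
    [0, 2, 5],
    [1, 3, 8],
    [0, 1, 4],
    [1, 2, 6],
    [0, 2, 7],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

today = np.array([
    [0, 2, 6],     # looks like a typical day
    [9, 40, 900],  # unusually heavy after-hours copying and uploading
])
print(model.predict(today))  # 1 = consistent with baseline, -1 = anomalous, worth review

The value in practice comes from the features themselves: metadata that carries context about what was accessed, when, and how it left the organization lets this kind of model surface both negligent and malicious behavior without resorting to keystroke logging or screen capture.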

In the two years following the attack, OPM implemented a number of programs to help prevent and mitigate future attacks. The agency rolled out a Continuous Diagnostics and Mitigation (CDM) program that offers a full suite of tools and sensors to scan for and respond to threats on its networks; began a major push to encrypt its data and require strong authentication for each of its internal users; and implemented a ‘Zero Trust’ security model. While there is still work to be done, the active response since that fateful day in 2015 is something any enterprise should take note of and keep in mind.


About Christy Wyatt

Christy is Chief Executive Officer of Dtex Systems and serves as a member of the board. Most recently, Christy was Chairman, CEO and President of Good Technology, the global leader in mobile security across the Global 2000.
