Unaccountable and thus Unattainable Network Security

By: Curtis Dukes, Executive Vice President

Ransomware involves a demand for a sum of money calculated to be less expense and trouble than trying to restore the system would be. Usually, the attacker threatens to lock screens or hold critical data hostage. That same level of access, though, can be used to destroy services outright. An adversary determined to do real harm could tamper with firmware or physical operations, as was done with Stuxnet. The damage caused by WannaCry and the Equifax incursion is relatively minor compared to what could have happened.

A Necessary Discussion

What if critical hospital services had been taken offline for several days? What if surgeries could not have been performed? In most cases, the cybercriminals did not want anyone to be hurt; they just wanted to create enough of a nuisance to extract a financial reward. The Equifax theft did not result in reports of stolen identities, and consumers are almost numb at this point to losing their personal data. We’ve reached the point where no one is particularly surprised when these breaches take place, and too few feel real urgency.

We have learned that companies either fear to patch their systems or do not know what to patch. We know phishing attacks are still succeeding and that, wherever possible, we should require some form of two-factor authentication. We know there are very expensive security products for detecting and preventing attacks, but we don’t know when to pin the blame for a breach on those vendors. Earlier this year, we wrote a blog post on the need for more disclosure of information about these incidents. It is up to the cybersecurity community to discuss how to take these accounts of what is going wrong and move toward holding the right parties accountable for the consequences.

Starting the Conversation: Patching

Has the “ship it and fix it later” culture of IT finally caught up with us? Who takes responsibility when the IT department has to weigh the risk of a cyber incursion against the fear that deploying a patch will crash the website, sending irritated customers to Twitter to destroy the company’s credibility?

Whose responsibility is it within the organization to know which applications harbor third-party components that need critical patches? If a company uses open source software, whose job is it to update the software that relies on those components? What about boutique software developed by third parties under contract, or in-house development by a team that has gradually dispersed? Let’s read all the contracts for all the software and find out who is financially liable for avoiding software libraries with known vulnerabilities, for providing notice when a vulnerability exists, and for delivering patches that will not crash a host providing a critical service.

If a developer follows one of the myriad software maturity models, is there a guarantee that adherence will result in software that won’t crash the host platform? Is there a requirement to work only with libraries and tools that have sufficient longevity and support to allow updates to be made? Is every developer of every software library required to disclose all third-party components in a manifest whose names can be definitively tracked when a vulnerability is disclosed?
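To make the manifest idea concrete, here is a minimal sketch of what tracking would enable: matching a list of declared components against published advisories. The manifest layout, component names, and advisory data below are illustrative assumptions, not a real bill-of-materials format or vulnerability feed (real-world equivalents include SPDX and CycloneDX manifests matched against CVE data).

```python
# Illustrative only: the manifest and advisory data are made up for this sketch.

# Hypothetical component manifest: each entry names a third-party library
# precisely enough to be matched against published advisories.
manifest = [
    {"name": "libexample", "version": "1.2.3"},
    {"name": "parsekit", "version": "0.9.1"},
]

# Hypothetical advisories: versions of each library known to be vulnerable.
advisories = {
    "parsekit": {"0.9.0", "0.9.1"},
}

def flag_vulnerable(manifest, advisories):
    """Return manifest entries whose exact version appears in an advisory."""
    return [
        component for component in manifest
        if component["version"] in advisories.get(component["name"], set())
    ]

for component in flag_vulnerable(manifest, advisories):
    print(f"{component['name']} {component['version']} has a known vulnerability")
```

The check is trivial once the manifest exists; the hard part, as the questions above suggest, is requiring that every component be disclosed under a name that advisories can actually match.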

Why is it that we need to hurry to patch? What about all those other expensive security tools that companies buy? When exploits like WannaCry occur, the life cycle of the exploit is quickly researched. Why can’t defenses be put in place that detect and prevent these attacks? How can we have any confidence that we can defend against a zero-day if we can’t defend a device with a flaw that has been understood for over a decade?

Asking about Accountability

These are all fair questions. These are all uncomfortable questions. They are questions about accountability, liability, and effectiveness. They cannot be answered, however, without more data and a determined exploration of why compliance falls short, why companies choose to take obvious risks, why developers rush to market, and why countermeasures fail. When a breach occurs, we need a report that details which products, practices, developers, hosts, and tools were involved. We need to be able to compare reports across incidents to establish patterns. Those patterns will lead to discussions about what could have or should have happened. Eventually, they will lead to answers about whom to blame, and, going forward, about what to buy and how to build.