Just when you thought the intrigue surrounding former National Security Agency (NSA) contractor Edward J. Snowden’s leaking of American surveillance tactics could not get more convoluted, it has. This is not a reference to Snowden’s journey to find political asylum. Rather, it has to do with a vexing question, one that impacts risk management in government and industry alike and that came to the fore on the weekend talk shows in the U.S. That question is: Can IT staff be trusted?
It appears that this is not only a legitimate concern but now a top-of-mind one.
Who is watching the watchers?
As this NSA story has unfolded, I have expressed concerns about the FISA court and who is watching the watchers. The reason is that, thus far, it appears very few requests for surveillance of our call records and Internet activities have been denied by the court. What the talk shows highlighted was another head of this hydra: Who is protecting government and industry from internal threats, that is, from malicious behavior by powerful systems administrators, their staffs and outside contractors that can threaten national security and corporate viability?
Good question! It is one without an easy answer.
There was an extensive article in The New York Times about this subject that reviews the issue as well as the comments made by various security experts. It is worth a thorough reading. However, several things resulting from the public airing of the issue stand out.
First. Unfortunately, it is no secret that systems administrators, because of their incredible access to the entire computing and communications environment for which they are responsible, are in a perfect position to do enormous damage. Look at what Snowden accomplished, and he was just a contractor. Indeed, it is hard not to agree with the statement by Eric Chiu, president of HyTrust, a computer security company, that “the scariest threat is the systems administrator…The system administrator has godlike access to systems they manage.”
Second. As NSA director Gen. Keith B. Alexander noted on TV, his agency will broadly institute “a two-man rule.” This rule would limit the ability of each of its 1,000 system administrators to gain unfettered access to the entire system. For movie fans and history buffs, you may remember that just such a rule has long been in place to prevent, at least in theory, a rogue member of the military from launching a nuclear strike against someone or something they had a grievance with.
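In software terms, a two-man rule is simply a gate that refuses to act until two distinct people have signed off. A minimal sketch in Python (the action name and administrator names are hypothetical, for illustration only):

```python
def authorized(action: str, approvers: set[str]) -> bool:
    """Two-man rule: a sensitive action proceeds only when at least
    two distinct administrators have approved it."""
    return len(approvers) >= 2

# One administrator alone cannot trigger the action...
print(authorized("copy-archive", {"admin_a"}))             # False
# ...but two independent sign-offs unlock it.
print(authorized("copy-archive", {"admin_a", "admin_b"}))  # True
```

Using a set rather than a list means the same person approving twice still counts as one approver.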
The general said the rule, already in place in some parts of the intelligence community, requires a second check on each attempt to access sensitive information. As the article notes, this is a concept borrowed from the field of cryptography, and it is increasingly common in commercial ventures where information is encrypted and two sets of keys are needed to unlock the doors.
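The two-keys idea can be illustrated with simple XOR secret sharing: a secret is split into two shares, and neither share alone reveals anything about it. A minimal sketch (a toy illustration of the principle, not the NSA’s actual scheme):

```python
import os

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; each share alone is random noise."""
    share_a = os.urandom(len(secret))                        # one-time random pad
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))  # secret XOR pad
    return share_a, share_b

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Both shares are required to reconstruct the secret."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = b"master-decryption-key"   # hypothetical secret
a, b = split_secret(key)
# Together the shares recover the key; separately, each is useless.
print(combine_shares(a, b) == key)  # True
```

Give one share to each of two key holders and neither can decrypt alone, which is exactly the second check the general describes.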
Third. Even two-person authorization is likely to be problematic. In fact, John R. Schindler, a former NSA counterintelligence officer who now teaches at the Naval War College, is quoted as saying that the “buddy system” would help. “But I just don’t see it as a particularly good long-term solution,” he said.
Fourth. All of the experts agreed that the best protection is having trustworthy people, a limited number of “super users” and better surveillance/accountability of those with unlimited access. Technology can play a role to the extent that real-time monitoring with good policies and rules at least could sound the alarm when anomalies occur.
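The real-time monitoring the experts describe can start as simply as baselining each administrator’s normal activity and sounding the alarm when behavior departs sharply from it. A minimal sketch in Python (the access counts and the three-sigma threshold are illustrative assumptions, not anything the article specifies):

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, k: float = 3.0) -> bool:
    """Flag today's file-access count if it exceeds the historical
    mean by more than k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return today > mu + k * sigma

baseline = [10, 12, 11, 9, 10, 11, 10]  # hypothetical daily access counts
print(is_anomalous(baseline, 500))      # True: a sudden bulk download
print(is_anomalous(baseline, 12))       # False: within normal variation
```

A real deployment would track many signals per user (volume, time of day, systems touched), but the principle is the same: good policies define “normal,” and the technology flags the exceptions.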
Does this mean that a smart bad actor can be thwarted? Again, all of the experts agree: the short answer is NO! The reasons are many, not the least of which is that such insiders have the requisite knowledge to avoid detection and the skill to wreak havoc.
Christopher P. Simkins, a former Justice Department lawyer whose firm advises companies, including military contractors, on insider threats, summed it up well: “At the end of day, there’s no way to stop an insider if the insider is intent on doing something wrong. It’s all about mitigating.”
From big headlines like the NSA leaks to corporate espionage cases and other types of malfeasance, the issue of who is minding the store, what we know about them and how carefully they should be watched is now top of mind, or at least the concern is out in the open. How deep background checks should and need to go is an interesting dilemma. As numerous studies, including the exhaustive work that Carnegie Mellon University does on the subject, have consistently shown, the reality is that, as the comic strip character Pogo famously said, “We have met the enemy and he is us.”
This brings us back to the question posed at the top about whether IT can be trusted. The answer, while complicated, seems to be: with the proper safeguards in place, yes, but only so far. The best we can hope for is that some combination of technology and rigorous oversight will keep much of the malicious behavior in check, but never all of it.
What actions the government and all entities take to better mitigate risks as a result of what has transpired are likely to remain out of view. At a minimum it appears that everyone is going to be looking at more encryption, multi-step authentication, closer scrutiny in hiring and increased surveillance.