Threat Stack Blog

Continuous security monitoring for your cloud.

Announcements and Highlights: Threat Stack at AWS re:Invent 2014

by Kristin Dziadul, posted in Announcements, AWS, AWS ReInvent

We just returned from a great week in Las Vegas, NV for the AWS re:Invent Conference. The conference brought together over 13,500 developers, architects, and many other technical users of the Amazon Web Services (AWS) infrastructure for an intensive four-day event. It is the year’s top venue for diving deep into the most pressing AWS topics and issues, learning about new services, and bringing the entire AWS ecosystem together.

Threat Stack at AWS re:Invent

Read More

Nov 20, 2014 12:29:24 PM

0 Comments

It’s Here! Threat Stack Launches Out of Beta at AWS re:Invent

by Doug Cahill, posted in Announcements, AWS, AWS ReInvent

Today we’re extremely excited to announce the general availability of Threat Stack!  

Right on the heels of our successful beta program with hundreds of active users, and a very busy summer that included shipping many new features, hiring key members of our executive team, and participating in several major AWS events, we have officially launched our service at the AWS re:Invent Conference. We are thrilled and honored that Threat Stack, among just a handful of other companies, was selected by Amazon Web Services (AWS) to join Amazon's CTO, Dr. Werner Vogels, on stage during his Start-up Launch Keynote to introduce our services to the entire AWS community.

Read More

Nov 14, 2014 9:00:00 AM

0 Comments


Bringing Infosec Into The DevOps Tribe: Q&A With Gene Kim

by Pete Cheslock, posted in DevOps, SecDevOps, InfoSec

Last week, I had a call with Gene Kim, founding CTO of Tripwire and author of The Phoenix Project (see end of post for more details). I’ve known Gene from the DevOps community for a while now, so we took the time to dive into all things DevOps and security, and the result is this great Q&A on what bringing security into DevOps means for all of us.

Read More

Oct 8, 2014 4:51:00 PM

0 Comments

CVE-2014-6271 And You: A Tale Of Nagios And The Bash Vulnerability

by Jen Andre, posted in Nagios, Attack Vector, Linux, Monitoring, Bash Exploit, Vulnerabilities, Threat Stack Agent

The internet is yet again feeling the aftereffects of another “net shattering” vulnerability: a bug in the shell ‘/bin/bash’ that affects a wide range of Linux distributions and is trivial to exploit. The vulnerability exposes a weakness in bash that allows attackers to execute code placed in environment variables, and in certain cases allows unauthenticated remote code execution.
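
As a rough illustration (not from the original post), here is a minimal Python sketch of the classic check for this behavior, assuming a Linux host with bash at /bin/bash: it exports a function-style environment variable with extra commands appended, then sees whether bash runs those commands when it starts.

    import subprocess

    # Classic CVE-2014-6271 probe: a vulnerable bash parses any environment
    # variable whose value starts with "() {" as a function definition and
    # executes whatever commands are appended after it.
    env = {"x": "() { :; }; echo VULNERABLE"}
    result = subprocess.run(["/bin/bash", "-c", "echo test"],
                            env=env, capture_output=True, text=True)

    if "VULNERABLE" in result.stdout:
        print("bash executed code from an environment variable -- vulnerable")
    else:
        print("no code execution observed -- this bash is likely patched")
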

Possible vectors for attack include:

Read More

Sep 25, 2014 4:52:00 PM

0 Comments


Threat Stack vs. Red Hat Auditd Showdown

by Jen Andre, posted in Linux Security, SecDevOps, Auditing, AuditD

One of the things we like at Threat Stack is magic. But since magic isn’t real, we have to come up with the next best thing, so we’ve hired one of the libevent maintainers, Mark Ellzey Thomas (we like to call him our ‘mad kernel scientist’), to make our agent the best in its class.

Many of the more savvy operations and security people that use our service are blown away by the types of information we can collect, correlate, and analyze from Linux servers. They say something to the effect of, “I’ve tried to do this with (Red Hat) auditd, with little to no success… how do you guys do it?”  
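
For a sense of what the do-it-yourself route looks like, here is a hypothetical minimal Python sketch of the auditd workflow people typically attempt: register a rule that records every execve() syscall, then pull back the matching raw records. The rule key name is made up, and this illustrates plain auditd, not how the Threat Stack agent works.

    import subprocess

    # Hypothetical illustration of the DIY auditd approach (not the Threat
    # Stack agent). Requires root and the audit userspace tools installed.

    # 1. Ask the kernel audit subsystem to log every execve() syscall,
    #    tagged with a searchable key.
    subprocess.run(
        ["auditctl", "-a", "always,exit", "-F", "arch=b64",
         "-S", "execve", "-k", "exec_monitor"],
        check=True,
    )

    # 2. Pull back recent matching events for inspection.
    events = subprocess.run(
        ["ausearch", "-k", "exec_monitor", "--start", "recent"],
        capture_output=True, text=True,
    ).stdout

    # 3. The raw EXECVE records still have to be decoded and correlated
    #    by hand -- which is exactly where most DIY attempts stall.
    for line in events.splitlines():
        if line.startswith("type=EXECVE"):
            print(line)
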

Read More

Aug 21, 2014 4:54:00 PM

0 Comments

Dan Geer’s Mandated Breach Reporting Vision Of The Cyber-Future: Can The Security Industry Help?

by Jen Andre, posted in EVENTS, PII, HIPAA, Breaches, Attack Vector, Security, Black Hat 2014, Reporting


Note: We’re taking a break this week from our weekly SecDevOps series to offer some key takeaways from Black Hat, which our co-founder, Jen Andre, attended.

By now, many of the talks from Black Hat 2014 have been published online. Dan Geer, who delivered this year’s opening keynote in Vegas, has once again lit up the security community with his controversial vision of security and privacy on the internet of the near future.

In one proposal, he outlines a policy where organizations are mandated to report security breaches (within a certain scope). Here, he draws an analogy to the CDC: currently, medical providers are required to report any observed instances of ‘certain communicable diseases’ to the Centers for Disease Control and Prevention, in order to mitigate the public health risk of a widespread pandemic. Dan posits: should we have similar mandates for reporting security breach events?

To anyone who has been the victim of identity theft or privacy violations due to mismanaged security practices, the answer to this question is a resounding “of course”. After all, you don’t want to find out on a mortgage application that someone has hijacked your credit score because some crucial PII was leaked by an insecure web app you used 5 years ago. Or that you can’t board a plane because at some point your stolen PII was used to create an account to launder money to a terrorist group you’ve never heard of.

The argument for mandated security breach reporting is not new, though it remains controversial.  

If we look at the state of breach reporting today, there is already legislation in the United States that mandates exactly that for certain classes of data. PCI-compliant entities are required to report credit card breaches to their financial institutions (and with this come serious consequences, e.g. heavy fines per stolen card), and many states (California leading the charge) also require reporting directly to the owners of the stolen data. HIPAA mandates similar disclosures. And no, the US isn’t the only country where this kind of legislation is being enacted.

Can we presume a similar trend for other classes of data going forward?   

Naturally, the devil is in the details. Proponents argue that mandated breach reporting is a compelling impetus for organizations to be better stewards of data that is increasingly replicated across cloud services worldwide. Yet I would argue that there are several philosophical and organizational challenges (and by organizational, I mean the organizing bodies who control or “own” the internet infrastructure), as well as technical challenges, that make this difficult to execute at best, and infeasible at worst, except in the most regulated of industries.

If we presume such legislation is inevitable, what kinds of challenges do businesses and other organizations face in complying? And what kinds of technological solutions can we expect to evolve as an industry to make this easier? How does this affect the explosion of SaaS businesses (enabled in turn by the rise of IaaS) in today’s “cloud”-enabled world?

Let’s start with some of the technical challenges:

a) How do I know if I’m breached?

The fact is that much of the time, organizations have no idea they’ve been compromised. According to the Verizon DBIR (referenced by Geer in his talk), 70-80% of all breaches are reported by unrelated third parties.

This is an interesting challenge when it comes to penning legislation -- it makes little sense to mandate breach reporting without a specified time window (otherwise anyone could exploit the loophole and postpone notification for years). Yet many SaaS businesses operating today, which in turn are enabling rapid innovation for other businesses, do not have the in-house expertise to know whether they have been breached, never mind the capability to respond within a certain time window. Which brings us to the other problem:

b) How do I know the scope of the data compromised?

Technically, it’s often quite difficult to reconstruct the path of an attack. As Dan pointed out, the security industry is becoming increasingly specialized, and not everyone can afford to have a security forensics expert on hand or pay for breach notification services. The hope is that security industry innovation in audit logging for systems, APIs, and applications will put this kind of data within reach of non-specialized systems operators, but in the meantime: is a single entry in an application log enough to assume the best case, or the worst possible scenario?

c) Who is responsible?

Even if you limit the scope of a breach to certain classes of data, data lives online in complex systems. There are many attack vectors, and the scope of the entities responsible for breach reporting remains fuzzy.

It’s obvious to almost anyone that the people who handle our credit cards should be required to notify us if that data is stolen, but what about the Facebooks of the world? Think about the proliferation of shared authentication: if I use my Facebook credentials to log into my medical records site, is Facebook now mandated to report breaches within a certain time frame to the users of its auth services, so that the medical records site can respond accordingly? Never mind what happens if you legislate widespread PII breach reporting -- consider the personal information embedded in private social networking sites that could allow an attacker to do real damage (e.g. impersonate me well enough to get access to my Amazon account).

If I’m an Infrastructure as a Service provider, and I discover a widespread attack against some hosted file store I provide, how am I to know whether some of the data being stored falls under the breach reporting requirements?

Take it one level down: if I’m an internet service provider, and a compromised router allows traffic hijacking that in turn compromises a medical records site, am I responsible for reporting? As a common carrier, should I even care about the content that I carry?

The philosophical argument

Finally: the internet’s power lies in its roots as an open communications platform. If some kid creates a web app that calculates my personality type based on a quiz that asks for my birthday, full name, and blood type, do we enforce breach reporting? Certainly we can regulate that person’s ability to establish a business and make money within the purview of the government that holds jurisdiction, but that raises new concerns.

Will we start to see micro-internets, whose boundaries are dictated by the government entities that control the underlying infrastructure and who has access to what? It’s not hard to see that mandated breach reporting would impose a real cost on innovation. I can envision an internet where the reporting onus is so heavy that creating a new social media app in the basement with your friends is an impossibility. And therein lies the rub.

Will internet startups choose to launch their businesses in the darker, freer parts of the internet whose infrastructure is controlled by friendlier governments (a la what’s happened to online gambling)? Geer alludes to this -- the compartmentalization of internet security policy, given the power of state actors, and the tension between what we value as our freedom on the internet and the role of cyber security. Already, the Chinese internet is a very different place from the one you and I know.

In such a world, how can technology help? Can the security industry provide software that mitigates the costs of breach reporting requirements with better automated ways of detecting breaches? Will such technology be usable and affordable by the mass of startups that are out there today building real businesses and driving technology forward?

What do you think? Are Dan’s ideas about mandated breach reporting farfetched, or do they represent a world we could soon live in?

 
Read More

Aug 13, 2014 4:54:00 PM

0 Comments

