The OpenSSL Heartbeat Vulnerability: Forgotten Attack Vectors

The web is abuzz with reports of the OpenSSL Heartbeat vulnerability. It's no exaggeration to say that this is the most serious vulnerability to come along in several years. There are many good write-ups about it and I don't need to repeat them here, but the bottom line is that sensitive information like, oh, usernames and passwords can be remotely scraped from sites like, oh, your bank. And there is no log to indicate that it happened.

So while everyone focuses on getting their web servers patched, I wanted to point out a few areas where people may not be looking:

  • Mail servers use SSL both to exchange mail with other servers and to deliver it to you. If you run a mail server, patch your system and restart your mail services.
  • Windows servers don’t just run IIS. They run Apache, too. Sometimes vendors bundle Apache and/or OpenSSL with their applications and start an administrative interface on high-numbered ports.
  • Proxy servers, especially those that do SSL-interception, may be vulnerable.
  • SSL-based VPNs are ripe for the picking.
  • Embedded devices may use OpenSSL. A lights-out remote management device would be another juicy target for attackers.

After you have your publicly-facing web site patched, think laterally. There’s more to this than meets the eye.
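
For the services in the list above, a quick first pass is to check whether an endpoint even advertises the TLS heartbeat extension. This is only a rough sketch, and a positive result does not prove the build is vulnerable (a patched OpenSSL still advertises the extension), but a negative result lets you rule a service out quickly. The hostnames and ports below are placeholders:

# Web interfaces, including admin consoles on high-numbered ports
echo | openssl s_client -connect www.example.com:443 -tlsextdebug 2>&1 | grep -i heartbeat
echo | openssl s_client -connect appliance.example.com:8443 -tlsextdebug 2>&1 | grep -i heartbeat
# Mail services that negotiate TLS via STARTTLS
echo | openssl s_client -connect mail.example.com:25 -starttls smtp -tlsextdebug 2>&1 | grep -i heartbeat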

Posted in Encryption, Incident Response, Risk Management, Vulnerabilities

Changes with OSSEC

After many years, I have decided to step down from the OSSEC core team. It was not a decision I made lightly, but due to some recent changes in the project, I felt I would be more useful as a contributor, rather than as a maintainer. That is to say, I will still be hanging around, but I’ll leave the heavy lifting to the other guys.

If you haven't checked in on the OSSEC happenings lately, hop on over to GitHub. There are lots of things going on.

What’s next for me? I’m not quite sure yet. I still have a strong desire to develop better OSSEC rules for detecting attacks. Maybe I’ll find another open source project that needs help. Then again, it’s about time to start my garden. The future has yet to be written. :)

Posted in Intrusion Detection

With Your Finger on the Trigger…

It was a pretty ordinary day. I think I was doing a review of our firewall ruleset, a decidedly monotonous but necessary task. Then in came an alert that McAfee had deleted a file on one of our workstations. That doesn't happen often, but it's also not out of the ordinary. A minute later there was another alert. Then another. By this time, my attention was piqued and my heart rate started to increase. Did we have an outbreak?

The CISO, who also gets these alerts, nearly crashed into me as we raced to each other’s office. Something was definitely up and we had to act fast. The pressure was on. An incident was happening and the first step was to contain the threat. We had two choices:

  1. It could be a bad DAT update. We could disable AV across the company until we had a chance to identify the problem, roll back the DAT and turn AV back on. If we didn’t disable AV, then we might have hundreds of blue-screening computers to deal with.
  2. It could be a worm spreading throughout the company. If we disabled AV, that could be catastrophic. It could allow the malware to take complete hold of the company.

So, what was the first thing I did? Nothing. That’s right, nothing. I took a few deep breaths. I centered myself. I prepared to act, but did not act.

When dealing with an incident, it’s imperative that you keep a cool head and keep your wits about you. It’s vital that you block out all distractions, including your superiors if they can’t help you in that moment, and focus on containing the incident. Minutes matter. Seconds matter. This is no time for disruptions.

Fortunately, the CISO and I work well together as a team, and he has the technical chops to be useful in situations like this. So we analyzed the symptoms:

  • The files being deleted were not always the same. Many of them were trusted executables that, as far as we could tell, had not been modified.
  • The files being deleted weren't newly dropped onto the system.
  • The files being deleted weren’t always binaries.
  • The detection component was always Artemis, which has a higher risk of false positives.
  • Web traffic did not correspond with the pattern we were seeing.

Considering that we blocked executables at the proxy, and weighing the other symptoms, we made the call that it must be a problem with Artemis and temporarily disabled that component to contain the immediate threat. We then followed with the remaining incident response steps: eradication, recovery and lessons learned.

It turns out we were right and we made the right call. But we could have just as easily made the wrong call. In the heat of the moment, you sometimes have to make the best decision you can with the facts at hand and then deal with the consequences of that decision.

A mature incident response program will allow for bad calls as well as good ones. It will have checks and balances. It will stand behind the people making the call if they are otherwise good people, even if it turns out to be the wrong call. It will be team-focused and not individual-focused. It will learn and grow and it will evolve. The point is to have a program in place and do something. The rest will follow.

Posted in Incident Response, Intrusion Detection

Malicious Data From Trusted Companies

Last night, I received one of the typical malicious “you have a package waiting” spams to an email address that I have only used at one place–in this case DynDNS.com. It included a link inviting me to print a shipping label, which of course leads to malware.

I logged into my DynDNS account with the intention of notifying support that they may have been breached, when I happened to stumble on some community forum posts indicating that others were experiencing the same thing. This in turn linked to an acknowledgement by the company that they were at least aware of the issue.

In this case, DynDNS believes that a business partner was responsible. Although we can never really know for sure, it does point out that your security depends on an entire ecosystem of trust. You trust the companies you do business with not to send you malicious data, and they extend that same trust to their own partners, having opened themselves up to some liability in exchange for some quantifiable benefit.

Tony Perez of Sucuri Security does a decent job explaining how external services can compromise the integrity of your web site. It’s not uncommon these days for major sites to deliver malicious data to their users by way of ad networks.

The one thing I see missing from DynDNS and others like them is an acknowledgement of responsibility. When a company chooses to place trust in a business partner, that doesn't mean it is off the hook for security. The company needs to hold that partner accountable, just as the customer should hold the business they are interacting with accountable. The responsibility and accountability have to end with someone, and in this case it is DynDNS that is responsible for the breach of trust.

No one does security perfectly, but remember to vet your partners carefully. Think about what customer data you share with your business partners and remember to extend your incident response plan to include scenarios where that partner becomes compromised, because ultimately, you are the one accountable to your customers.

Posted in Incident Response, Risk Management

OSSEC CON 2013 Materials Available

The presentations my esteemed colleagues and I gave at OSSEC CON 2013 are now available. The conference summary can be found here and my presentation can be found here.

It was great meeting everyone and we had some great discussions surrounding how to make OSSEC better. Version 2.7.1 is just around the corner and we are really looking forward to some exciting changes for v3.0 (still in planning). Once again, I would like to thank Trend Micro for graciously hosting this event and for their generous contributions to the open source community.

Posted in Intrusion Detection, Log Analysis, Log Management

OSSEC CON 2013

Please join me at the second annual OSSEC conference, OSSEC CON 2013. I have the pleasure of joining Scott Shin, CTO of AtomicCorp, and Santiago Gonzalez, Director of Professional Services at AlienVault, in presenting. Time is running out to register, so make sure you sign up before there is no space left. What’s the cost? Free. Nothing. Zip. Nada! And there’s even free lunch. Hope to see you there.

Posted in Intrusion Detection, Log Analysis

Voting Without Photo ID

I successfully voted for President of the United States tonight without showing a photo ID. Perhaps some background is in order…

Last year, Texas passed a law that required voters to present photo ID to vote. A federal court later overturned the law, ruling that it would impose “strict, unforgiving burdens” on poor minority voters. Regardless of your views on whether this was a just ruling or not, this means voters do not have to present a photo ID to vote.

Other forms of identification are acceptable for voting, such as a utility bill. So with that in mind, I decided to test the response of the election workers to see if they had been properly instructed on the new law.

So, water bill in hand, I presented it to the poll worker. She asked for photo ID. I replied that I was going to use my water bill as identification. The worker next to her asked if I had a driver's license and I replied in the affirmative, but said that for the purposes of voting I was going to use my utility bill. We went back and forth a couple more times until the original worker recalled that she had been told in training that a utility bill was acceptable. One last attempt was made to get my driver's license number, which I declined to provide. Eventually, I was allowed to vote.

To my surprise, I was then given the option of a paper ballot or an electronic vote. Of course, I chose paper since electronic voting has been found to be less than robust.

So what did I learn from this experience? Was this a vast right-wing conspiracy to deny me the right to vote by ignoring the judges' ruling? No, I don't think so, not for a moment. I think this was simply a case where the workers did not know how to handle the one individual who declined to show a photo ID and did not follow the norm. It was a training issue. But that's not to say it isn't an important issue. Whether the intention was underhanded or not, I felt pressured to show ID that the law did not require.

Election integrity is important in all aspects, whether the weakness is a vulnerable electronic voting machine or a failure in process. People are just as important as technology, and in no area does that matter more than in a democracy.

Posted in Personal Liberty

Developing a Java Management Strategy

I considered many ways to title this blog post: The Scourge That Is Java; Die, Java, Die!; or perhaps Java, It's Time We Had a Talk. As a security guy, I have come to see Java as my nemesis. It has been far more trouble than it's worth. Multiple installed versions make management difficult, updating the enterprise never catches all of the PCs, and irresponsible vendors may force us to stay on vulnerable versions in order to remain supported. All the while, the Blackhole Exploit Kit gets updated within days of a new vulnerability and starts spraying out infections.

One hundred percent, yes, one hundred percent of the malware infections I have seen in the last few months began with a malicious .jar file download. So I got to thinking: what do we really need Java for? Considering the demonstrated risks, do we need it at all?

Most people will say that you can never get rid of Java entirely. They may be right. There will always be that one business-critical application that requires it. They will also say that you simply cannot block Java from the Internet. This usually means they are mixing up Java with JavaScript.

Java applications are .jar or .class files that are downloaded to the local machine and executed locally. When they are sourced from the Internet, you are allowing arbitrary code to execute on your machine inside a runtime with an ongoing track record of serious security vulnerabilities. This just won't do.

So I decided to look at the data. How many Java applications have people really been running, relative to the number of Internet domains they access? I used some queries similar to the following to search through months of logs:

grep 'URL Accessed' firewall.log | grep -E "\.jar$|\.class$"

After some further massaging, I had whittled the list down to ninety-six unique domains, many of which only had a single hit. There were fewer than twenty domains with more than five hits and of those, six had clear business usage. The question became clear: why allow the entire Internet to potentially execute Java applications on all of these PCs when only a few legitimate sites needed to? The risks clearly outweighed the benefits.
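
For the curious, the "further massaging" amounted to little more than pulling the domain out of each matching URL and counting hits per domain. Something along these lines works, though the exact pattern depends on your log format (the URL layout here is an assumption):

# Extract the host from each .jar/.class URL, then count hits per domain
grep 'URL Accessed' firewall.log \
  | grep -oEi 'https?://[^/ ]+/[^" ]*\.(jar|class)' \
  | awk -F/ '{print $3}' \
  | sort | uniq -c | sort -rn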

So, how can you protect yourself against Java-based threats today? Consider these steps:

  1. Perform an assessment of internal business applications that may require Java. If there are none, consider uninstalling it entirely and only installing it for those people who truly need it.
  2. Include language in your contracts with vendors that mandates they support the most recent versions of Java at all times.
  3. If you need it for some applications, but don't need to use it through a browser, unplug it: disable the Java browser plug-in (a sketch follows this list).
  4. If Java is needed internally, assess the need for Internet-based Java. Search your logs (you do have logs, don't you?) for .jar and .class downloads. Normalize the results and try to understand what legitimate use your users have. Don't be surprised if you find strangely named downloads from TLDs like .in and .ru; be careful, as these may be malicious.
  5. Finally, whatever is left, patch, patch, patch. Don't wait for a monthly patch cycle. Patch within a week, at most. Antivirus will not save you here. Trust me. I have seen it time and time again.
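
For step 3, "unplugging" Java from the browser can be handled through Java's own deployment configuration rather than by touching every browser. The sketch below is only illustrative: the deployment.webjava.enabled switch exists in Java 7 Update 10 and later, the per-user path is shown for simplicity, and a managed rollout would use the system-level deployment.config and deployment.properties instead.

# Disable Java content in the browser while leaving local Java applications alone
mkdir -p ~/.java/deployment
printf '%s\n' 'deployment.webjava.enabled=false' >> ~/.java/deployment/deployment.properties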

These steps all really point to one thing: we have come to the point where Java cannot be trusted, and instead of blocking known bad, a whitelist strategy is necessary. This can lead to reduced support costs and substantially reduced risk. Let the data do the talking. You might be surprised at what it has to say.

Posted in Risk Management, Secure Design, Systems Hardening, Vulnerabilities

The Future of OSSEC

It has been a while since the last release of OSSEC and some users wonder if the project is really still active. Well, I am here to tell you that not only is it active, it is the most active it has ever been!

So, what have we been up to? As we prepare for the next beta release, which will happen in September, there has been a lot going on:

  1. We have been actively searching for uncommitted patches that may have been overlooked. Some of these are over a year old and were contributed by other users. They fix bugs which have been lingering for a while.
  2. We have been dusting off rules and decoders that some of us have forgotten to contribute. Many of these are designed to decode additional fields, which should make rules more accurate.
  3. Documentation is being worked on. Dan Parriott has done a wonderful job of writing and maintaining most of the documentation. It gets better all the time. Of course, Dan appreciates tickets and contributions against the documentation.
  4. Of course, there are new features. I won’t let the cat out of the bag yet, but I think many of them are pretty cool.

The end result is that we hope this will be the most stable and usable version of OSSEC yet. And we hope you’ll try it out and report any issues.

As to the release after that? Expect big changes to the fundamental philosophy of OSSEC. Expect it to have more insight and context about attacks, with dynamic updates designed to deliver fresher information on a much more frequent basis.

Posted in Intrusion Detection, Log Analysis

Symposium Presentations Available / The Future of OSSEC

Trend did a great job of outlining our plan for OSSEC in this post. They begin by describing the Symposium, just as I did in my previous post, then go on to lay out a detailed plan for the future. A key theme is making OSSEC easier to use and more relevant for everyone. How do we get to this future? Well, that depends a lot on the amount of help we can get from the very talented members of the community. Remember, OSSEC is a volunteer community project and we welcome the assistance of anyone who wants to pitch in.

They are also hosting my presentations if you care to have a look. You probably won’t get as much out of them as you would have at the Symposium, since my slides tend to be simple, but the general ideas are there.

Day 1 details my experiences applying OSSEC over the last several years and can be found here.

Day 2 outlines what I think OSSEC can be–an extension of the great community that already exists. Access it here.

Ready to pitch in? Let us know…

Posted in Log Analysis, Log Management