OSSEC Symposium Recap

If you missed the first OSSEC Symposium, you missed a great opportunity to meet fellow OSSEC users and developers, partake in great food and drink and immerse yourself in a day-and-a-half of pure OSSEC geekiness!

I arrived a bit early to Trend Micro’s corporate headquarters on Thursday and was warmly greeted by the receptionist, Vic and J.B. The weather was a beautiful and sunny 72 degrees and the Trend office was framed by a lovely California vista of rolling hills. We chatted casually as we dined on delicious sandwiches and waited for the Symposium to start.

J.B. led introductions and introduced presenters. My first keynote presentation (to be posted shortly) on my experiences using OSSEC was warmly received and I even managed to get a few chuckles with my airport travel shtick. :) We discussed our pain points deploying, using and administering OSSEC, and of course we had many of the same things to say. The day ended with a wonderful Italian dinner with my new friends.

On day two we started with breakfast and proceeded to talk about some of the larger deployments OSSEC has seen (more than 4,000 hosts reporting to a single manager!). I gave a brief demo of my integration work with ELSA (to be finished shortly), and then we had some yummy Japanese food. After lunch, I had my chance to present my second keynote: my vision for OSSEC. That is to say, what I would do with the project if I had unlimited time, cooperation and talent. A central theme of my second presentation was not just technology, but how valuable and crucial the friendly community aspect is. Combine these two and you start to develop a shared vision.

The day ended with discussions on the practical aspects of how to move forward: these included bug fixes, rule tuning, agent deployment improvements and roles.

Trend has shown their renewed commitment to the project and has publicly promised to keep it free. I can only hope that we join them as community members to make OSSEC better for everyone.

Tagged with:
Posted in Log Analysis, Log Management

OSSEC Community Symposium, July 12-13 2012

Please join me at the first OSSEC Symposium, sponsored by Trend Micro. This is a forum for the OSSEC community to come together and discuss all things OSSEC. We’ll not only talk about what makes OSSEC so effective, but also what we can improve upon, with the specific goal of laying out a road map for the next year. Attendance is free and Trend is even throwing in the food, so this is a great way to get some free CPEs for your certifications.

I have also been asked to present, and I’ll be talking about my experience using and developing for OSSEC, along with my vision for taking it to the next level. I hope to see you there!

Tagged with:
Posted in Intrusion Detection, Log Analysis, Log Management

First Impressions with ELSA: Bye-bye Grep

When I first read about ELSA, I knew it was going to be a game changer. From the very beginning, this log collection and analysis application addressed many of the problems plaguing adoption of open source log front-ends in the enterprise. It had scalability (as in pretty much unlimited scalability), it integrated with Active Directory, it had reporting, and it was designed to handle thousands of logs per second without breaking a sweat. In other words, it was what many expensive SIEMs promised to be, but failed to deliver in execution.

With an upcoming log infrastructure expansion project, I decided to give it a trial run on an old laptop. If things went well there, I could consider piloting the application for production use. So, with a fair bit of trepidation, I followed the instructions in the README, and two simple commands later it was furiously downloading and installing all sorts of stuff. The installation took over an hour on this poor little laptop, since many elements of ELSA were being compiled. Honestly, I didn’t expect things to go well. With all of these Perl modules and Ubuntu packages being installed, something had to fail and stop the entire chain of events. I was a bit surprised to see that it all just worked. At the end of the installation, I opened a browser and was greeted with this nice and clean screen.

Since I already had the firewall logging to this box, it was simply a matter of disabling that application so ELSA could grab the port and start receiving syslogs. A few minutes later, I was able to run a basic query by simply putting in the IP of the firewall.

I expected some sort of hourglass. After all, this was a five-year-old laptop, not a server. But the query results came back pretty much immediately (199 milliseconds, to be exact).

Performing additional queries proved to be somewhat more difficult, simply because I had not yet wrapped my mind around the search syntax. I tried 192.168, thinking I would see everything on that segment since the doc mentioned that wildcards were not an option, but I got no results. After a bit more reading and experimenting, I got the hang of it. Still, it has taken me out of my comfort zone of grep-style regular expression searching, and that will take some getting used to.

By clicking on the ‘Report On’ button and then selecting ‘Hour’, I had an instant visual of the level of log activity from this one host. From there I could save or export the results in PDF, Excel, CSV or HTML. Doing a line count on a day’s worth of logs for one IP using grep would have taken several minutes at least, never mind plotting it. The chart is somewhat Splunk-like, but without the five-thousand-dollar price of entry or the threat of being locked out of your log history should you go over your limit.
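For the curious, here is roughly what that manual grep workflow looks like. This is just a sketch; the log path and IP are made up for illustration, and it assumes standard syslog timestamps:

# Count firewall events per hour for one IP the old-fashioned way.
# awk pulls the month, day and hour fields from the syslog timestamp.
grep '192.168.1.1' /var/log/firewall.log \
  | awk '{ print $1, $2, substr($3, 1, 2) }' \
  | sort | uniq -c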

Poking around a bit more, I began to see where ELSA really shines. The use of plugins allows you to leverage community intelligence and do things like see if an IP in the logs is known to be malicious. Can I get an AMEN!!!??

So, what would I change? I am starting to get some ideas, but really, they are just cosmetic things that don’t seem all that important right now. One thing is clear: ELSA was designed by someone who understands the needs of log analysts. We need a no-nonsense, clean and stable interface, fast results and enough enterprise-level features to justify it in the workplace, and ELSA fits the bill perfectly. Listen up, log pros: this is the one to watch.

Posted in Log Analysis, Log Management

Waging War in the Digital Age

What are the ethical ramifications of waging war via computer? Does war even have to be declared? Where are the boundaries in the virtual world? What happens when machines begin to think for themselves? These are the questions I explore in an article I wrote for the most recent issue of ISSA Journal.

As always, I welcome your feedback. Feel free to challenge my assertions. All constructive comments will be let through moderation.

Posted in Computer Crime, Ethics

When Disclosure Can Kill

What should one do when discovering a vulnerability in a medical device? What if, by disclosing the vulnerability, you could put someone’s life at risk? These are the questions I explore in an article I wrote for the most recent issue of ISSA Journal.

As always, I welcome your feedback. Feel free to challenge my assertions. All constructive comments will be let through moderation.

Posted in Dialogue

3WoO Day 7.1: The OSSEC-O-Lantern

Halloween is a special time of year. It’s that one day where we confuse our children by telling them to not only take candy from strangers, but to go out and beg for it while dressed in an overpriced polyester superhero costume. It’s also the time of year for pumpkin carving.

This is something I have wanted to do for a while now. Enter the OSSEC-O-Lantern. This is my first attempt at carving something other than a predictable toothy smile into a pumpkin. I thought the OSSEC logo would be perfect, so I gave it a shot.

This is what the pumpkin looks like before it is lit. For the center “eye,” I cut up a lens from a cheap pair of kids’ sunglasses.

And here is what it looks like lit up. Notice my feeble attempt at getting the OSSEC logo shading right.

As they say in the cartoons, that’s all folks!

Tagged with:
Posted in Log Analysis

3WoO Day 7: Wrapping It Up

Well, despite my best efforts, the day 7 post is going to be a bit delayed. But I think you’ll like it. So, stay tuned.

Tagged with:
Posted in Dialogue

3WoO Day 6: Learning From Malware Part II–The Rules

Yesterday, I blogged about some annoying malware. The point was to learn some of the techniques that this general class of malware uses, so we could write some OSSEC rules to detect it. If you haven’t already read that post, go back and read it so you’ll understand this one.

One indicator of malware is that it changes the hosts file and redirects sites to localhost. There are three different ways we can detect this with OSSEC:

  1. A basic file integrity check, although this doesn’t tell us what changed.
  2. A command output check, which echoes the contents of the file.
  3. A compliance check.

File integrity checks are pretty simple. All you have to do is tell OSSEC to monitor the file. Let’s focus on the command output and compliance checks.
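For the integrity check, a minimal sketch of the syscheck entry on a Windows agent might look like this (the path is the standard hosts file location and may already be covered by the default Windows agent configuration):

<syscheck>
  <!-- Monitor the directory that holds the hosts file -->
  <directories check_all="yes">%WINDIR%\System32\drivers\etc</directories>
</syscheck>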

  1. Command output. Unfortunately, the report_changes option is not available for Windows right now. To get the content of the file into OSSEC, we’ll simply use the built-in Windows type command to achieve our objective. Put this in your ossec.conf on the agent (watch for wrapping):

     <localfile>
     <log_format>full_command</log_format>
     <command>%COMSPEC% /C type %WINDIR%\system32\drivers\etc\hosts | %WINDIR%\system32\findstr.exe /BVC:"#"</command>
     <alias>Windows Hosts File</alias>
     </localfile>

    Now, just add a rule like this so you’ll get the diff:

    <rule id="100027" level="7">
    <if_sid>530</if_sid>
    <match>ossec: output: 'Windows Hosts File'</match>
    <check_diff />
    <description>Windows Hosts File Changed</description>
    </rule>

    At the time of writing, there is a bug in OSSEC that prevents the diff of the file from appearing in the email, but it should be in the original alert in alerts.log.

  2. Compliance checks. Compliance checks are part of the rootcheck functionality of OSSEC and are run right after the syscheck (file integrity) scans. They allow you to look within files, registry keys and processes for certain values, and alert on them. For this example, we want to know if there are any entries other than the expected localhost entry. Simply add this to your etc/shared/win_malware_rcl.txt file (watch for wrapping):

[Site Redirected to localhost] [any] []

f:%WINDIR%\System32\Drivers\etc\HOSTS -> r:^127.0.0.1 && !%WINDIR%\System32\Drivers\etc\HOSTS -> r:^127.0.0.1\s+localhost;
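If you want this particular finding to stand out from the generic rootcheck alerts, a local rule along these lines should do it. This is just a sketch: the rule ID and level are arbitrary, it assumes 510 is the generic rootcheck event rule, and it assumes the entry name above appears in the alert text:

<rule id="100028" level="8">
<if_sid>510</if_sid>
<match>Site Redirected to localhost</match>
<description>Hosts file redirects a site to localhost</description>
</rule>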

Referring back to the original post, it would not be difficult to take the other examples and make them into general purpose checks. One word of caution: there is currently no local_win_malware_rcl.txt capability, so be sure to back up your changes; otherwise, they will be overwritten when you next upgrade.

What other examples from malware can you think of that would make for good rules? I would be happy to hear about your experiences in the comments.

Tagged with:
Posted in Dialogue, Log Analysis

3WoO Day 4: Learning From Malware

When most people receive an email with a malicious attachment, they do one of two things: either they delete it, knowing that it is malicious, or they get fooled into executing the attachment, which ruins their day. Then there is the third category of people. They are people like me. These people intentionally infect a system to see what kinds of interesting things the malware does. Some guys like football. I like to analyze malware. So when I got an unexpected email purporting to be from FedEx, I had to let the attachment loose on a virtual machine.  I’m going to skip the detailed analysis and post a few screenshots, because the purpose of this post will be to convert what we have learned into something that OSSEC can detect.

As you can see, this is a garden-variety fake HDD scanner. It was pretty annoying–hiding my drives, killing my analysis tools, redirecting search results and the other usual things.

There’s one thing that is almost universally true about malware like this: it leaves evidence on the system. Our job is to look for those changes and write rules for the general class of behavior. It wouldn’t make much sense to write a rule looking for the specific, random file names that this trojan dropped; instead, we want to understand what happens in general.

Take a look at what MalwareBytes found. The malware also added the following to the hosts file:

127.0.0.1 thepiratebay.org
127.0.0.1 www.thepiratebay.org
127.0.0.1 mininova.org
127.0.0.1 www.mininova.org
127.0.0.1 forum.mininova.org
127.0.0.1 blog.mininova.org
127.0.0.1 suprbay.org
127.0.0.1 www.suprbay.org

What can we learn from this? Consider the following:

  1. Three files were added to run at Windows startup. All of them are from locations in the user’s profile. Two of them are particularly suspicious, as the run line begins with rundll32. (A rough OSSEC sketch for this one follows below.)
  2. Several policies were changed. The effect was hiding standard parts of the OS. This may be legitimate for something like a kiosk, but is not generally used.
  3. Two executable files were written to the root of \Application Data. Normal apps should not do this, and I see this particular behavior with malware a lot these days.
  4. Several URLs were redirected to localhost. Generally, this is only done legitimately to block ad-serving sites, and there are better ways to do that these days.

As you can see, these are behavioral observations. In part II, we’ll take these observations and make them actionable within OSSEC. Stay tuned.
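In the meantime, here is a rough sketch of how the first observation could be watched: have the agent periodically dump the machine-wide Run key with reg query and let OSSEC alert on any diff, using the same full_command approach as the hosts file check in part II. The alias is illustrative, a companion check_diff rule like the one in part II would be needed, and per-user (HKCU) Run keys are trickier since the agent runs as SYSTEM:

<localfile>
<log_format>full_command</log_format>
<command>%COMSPEC% /C reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"</command>
<alias>HKLM Run Key</alias>
</localfile>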

 

Tagged with:
Posted in Log Analysis

3WoO Day 4: Five Tips & Tricks for OSSEC Ninjas!

Are you an OSSEC ninja? Do you dress in orange and red and laugh maniacally at all of the frustrated attackers who have tried to take you down? Do you take medication for that condition? Ok, well, best of luck to you. Back to the topic at hand.

Here are some of my favorite tips and tricks. They promise to elevate you to the next level of OSSEC guru-ness, or maybe they’ll just help you solve a problem. Either way, in no particular order of importance:

  1. Log everything. If you have the space, I highly recommend that you add the <logall>yes</logall> element to ossec.conf. After restarting OSSEC, all events will be written to logs/archives/archives.log. By default, OSSEC will discard anything that doesn’t match a rule, but if you are ever compromised you really should have all logs to examine.
  2. No decoder necessary. Do you need to match on specific text in an unsupported application? While decoders definitely extend the functionality of OSSEC, they are not strictly necessary to get an alert. You can simply create a rule using a simple match. For example: <match>hax0r logged in</match>.
  3. Find all alerts above a specific level. I’m stealing this one from a past ossec-users list post of mine. If you have ever wanted to get a list of alerts at or above a certain level from your alerts log, this will do it. In this example, we look for anything at level 12 or higher:

     grep -E -B2 -A3 "^Rule: [0-9]{1,6} \(level 1[2345]\) -> '.*'$" logs/alerts/alerts.log \
       | sed 's/^--$/ /'

  4. Use OSSEC to monitor OSSEC. Oddly, OSSEC does not monitor itself out of the box. It’s a good idea to monitor your OSSEC instance, if for no other reason than to be confident that you or a trusted colleague are the only ones making production changes. Add this to your ossec.conf (watch for wrapping):

     <directories check_all="yes" realtime="yes" report_changes="yes">/var/ossec</directories>

     Now, because certain directories change frequently, you’ll need some exclusions:

     <ignore>/var/ossec/queue</ignore>
     <ignore>/var/ossec/logs</ignore>
     <ignore>/var/ossec/stats</ignore>
     <ignore>/var/ossec/var</ignore>

     Finally, you’ll want to be careful about the alerts that you receive. One file that is monitored by this is client.keys, which contains, well, your keys. It’s not really a good idea to have OSSEC email you the diff of this file, so I would recommend creating a rule at level 0 to filter it out (see the sketch after this list).

  5. Manage centrally. With only a few agents, it’s not that big of a deal to configure settings locally. When you get into the hundreds or thousands, it becomes very onerous. OSSEC has the ability to centrally manage your agent settings and I highly recommend that you use it. With this feature, all you need in your agent ossec.conf is the destination IP or hostname of the manager. I also like to enable active response locally in the Windows ossec.conf file, because active response is used to restart the agent, which, in turn, causes it to read the new agent.conf.
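For the client.keys caveat in tip 4, a minimal sketch of such a level 0 rule might look like the following. The rule ID is arbitrary, and it assumes 550 is the standard syscheck "integrity checksum changed" rule:

<rule id="100100" level="0">
<if_sid>550</if_sid>
<match>client.keys</match>
<description>Ignore changes to client.keys so the diff is not emailed</description>
</rule>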

If you comb through the source code, you might also be able to find some other nuggets worth having. What have you discovered?

Tagged with:
Posted in Log Analysis