I successfully voted for President of the United States tonight without showing a photo ID. Perhaps some background is in order…
Last year, Texas passed a law requiring voters to present photo ID to vote. A federal court later overturned the law, ruling that it would impose “strict, unforgiving burdens” on poor minority voters. Regardless of whether you think the ruling was just, it means voters do not currently have to present a photo ID to vote.
Other forms of identification, such as a utility bill, are acceptable for voting. So with that in mind, I decided to test whether the election workers had been properly instructed on the new law.
Water bill in hand, I approached the poll worker and handed it over. She asked for photo ID. I replied that I was going to use my water bill as identification. The worker next to her asked if I had a driver’s license, and I replied in the affirmative, but said that for the purposes of voting I was going to use my utility bill. We went back and forth a couple more times until the original worker recalled that she had been told in training that a utility bill was acceptable. She made one last attempt, asking for my driver’s license number, which I declined to give. Eventually, I was allowed to vote.
To my surprise, I was then given the option of a paper ballot or an electronic vote. Of course, I chose paper since electronic voting has been found to be less than robust.
So what did I learn from this experience? Was this a vast right-wing conspiracy to deny me the right to vote by ignoring the court’s ruling? No, I don’t think so, not for a moment. I think this was simply a case where the workers did not know how to handle the rare individual who declines to show photo ID and does not follow the norm. It was a training issue. But that’s not to say this isn’t an important issue. Whether the intention was underhanded or not, I did feel pressured to show ID, and that is not required by law.
Election integrity is important in all aspects, whether the threat is vulnerable electronic voting machines or a failure of process. People are just as important as technology, and nowhere does that matter more than in a democracy.
I considered many ways to title this blog post: The Scourge That is Java; Die, Java, Die!; or perhaps Java, it’s time we had a talk. As a security guy, I consider Java my nemesis. It has been far more trouble than it’s worth. Multiple installed versions make management difficult, enterprise-wide updates never catch all of the PCs, and irresponsible vendors may force us to stay on vulnerable versions in order to be supported. All the while, the Blackhole Exploit Kit gets updated within days of a new vulnerability and starts spraying out the infections.
One-hundred percent, yes, one-hundred percent of the malware infections I have been seeing in the last few months have begun from a malicious .jar file download. So I got to thinking, what do we really need Java for? Considering the demonstrated risks, do we need it at all?
Java applications are .jar or .class files that are downloaded to and executed on the local machine. When sourced from the Internet, this means you are allowing arbitrary code to execute on your machine within software that has an ongoing track record of serious security vulnerabilities. This just won’t do.
So I decided to look at the data. How many Java applications have people really been running, relative to the number of Internet domains they access? I used some queries similar to the following to search through months of logs:
grep 'URL Accessed' firewall.log | grep -E "\.jar$|\.class$"
After some further massaging, I had whittled the list down to ninety-six unique domains, many of which had only a single hit. Fewer than twenty domains had more than five hits, and of those, only six had clear business usage. The question became clear: why allow the entire Internet to potentially execute Java applications on all of these PCs when only a few legitimate sites need to? The risks far outweighed the benefits.
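The “further massaging” can be sketched roughly like this: reduce each matching URL to its domain, then count hits per domain. The log format and domains below are made up for illustration; adapt the patterns to whatever your firewall actually logs.

```shell
# Hypothetical sample lines in the format this post assumes:
printf '%s\n' \
  'Oct 12 10:01:02 fw1 URL Accessed: http://cdn.example.com/app/viewer.jar' \
  'Oct 12 10:02:10 fw1 URL Accessed: http://ads.badsite.in/x/Load.class' \
  'Oct 12 10:03:33 fw1 URL Accessed: http://cdn.example.com/app/viewer.jar' \
  > firewall.log

# Keep only Java artifact downloads, strip each URL down to its domain,
# then count hits per domain, busiest first.
grep 'URL Accessed' firewall.log \
  | grep -E '\.(jar|class)([?#]|$)' \
  | sed -E 's|.*://([^/]+)/.*|\1|' \
  | sort | uniq -c | sort -rn
```

From output like this, single-hit domains are easy to spot and the handful of business-relevant domains float to the top.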
So, how can you protect yourself against Java-based threats today? Consider these steps:
- Perform an assessment of internal business applications that may require Java. If there are none, consider uninstalling it entirely, and install it only for the people who truly need it.
- Include language in your vendor contracts mandating support for the most recent version of Java at all times.
- If you need Java for some applications but don’t need it in the browser, disable the browser plug-in.
- If Java is needed internally, assess the need for Internet-based Java. Search your logs (you do have logs, don’t you?) for .jar and .class downloads. Normalize the results and try to understand what legitimate uses your users have. Don’t be surprised if you find strangely named downloads from TLDs like .in and .ru; be careful, as these may be malicious.
- Finally, whatever is left, patch, patch, patch. Don’t wait for a monthly patch cycle; patch within a week at most. Antivirus will not save you here. Trust me, I have seen it time and time again.
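For the browser step above, later Java releases (7 update 10 and newer) expose a deployment property that disables the plug-in in every browser at once. A sketch, assuming a system-level deployment.properties file (the exact path varies by OS and Java version, so check your Java deployment documentation):

```
# deployment.properties (system-level; location varies by platform)
# Disable Java content in all browsers...
deployment.webjava.enabled=false
# ...and lock the setting so users cannot re-enable it themselves
deployment.webjava.enabled.locked
```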
These steps all point to one thing: we have reached the point where Java cannot be trusted, and instead of blocking known bad, a whitelist strategy is necessary. This can lead to reduced support costs and substantially reduced risk. Let the data do the talking. You might be surprised at what it has to say.
It has been a while since the last release of OSSEC, and some users wonder if the project is really still active. Well, I am here to tell you that not only is it active, it is the most active it has ever been!
So, what have we been up to? As we prepare for the next beta release, due in September, a lot has been going on:
- We have been actively searching for uncommitted patches that may have been overlooked. Some of these were contributed by users over a year ago. They fix bugs that have been lingering for a while.
- We have been dusting off rules and decoders that some of us have forgotten to contribute. Many of these are designed to decode additional fields, which should make rules more accurate.
- Documentation is being worked on. Dan Parriott has done a wonderful job of writing and maintaining most of the documentation. It gets better all the time. Of course, Dan appreciates tickets and contributions against the documentation.
- Of course, there are new features. I won’t let the cat out of the bag yet, but I think many of them are pretty cool.
The end result is that we hope this will be the most stable and usable version of OSSEC yet. And we hope you’ll try it out and report any issues.
As for the release after that? Expect big changes to the fundamental philosophy of OSSEC. Expect more insight and context about attacks, with dynamic updates designed to deliver more up-to-date information on a much more frequent basis.
Trend did a great job of outlining our plan for OSSEC in this post. They begin by describing the Symposium, just as I did in my previous post, then go on to lay out a detailed plan for the future. A key theme is making OSSEC easier to use and more relevant for everyone. How do we get to this future? Well, that depends a lot on the amount of help we can get from the very talented members of the community. Remember, OSSEC is a volunteer community project and we welcome the assistance of anyone who wants to pitch in.
They are also hosting my presentations if you care to have a look. You probably won’t get as much out of them as you would have at the Symposium, since my slides tend to be simple, but the general ideas are there.
Day 1 details my experiences applying OSSEC over the last several years and can be found here.
Day 2 outlines what I think OSSEC can be–an extension of the great community that already exists. Access it here.
Ready to pitch in? Let us know…
If you missed the first OSSEC Symposium, you missed a great opportunity to meet fellow OSSEC users and developers, partake in great food and drink and immerse yourself in a day-and-a-half of pure OSSEC geekiness!
I arrived a bit early to Trend Micro’s corporate headquarters on Thursday and was warmly greeted by the receptionist, Vic and J.B. The weather was a beautiful and sunny 72 degrees and the Trend office was framed by a lovely California vista of rolling hills. We chatted casually as we dined on delicious sandwiches and waited for the Symposium to start.
J.B. led introductions and introduced presenters. My first keynote presentation (to be posted shortly) on my experiences using OSSEC was warmly received and I even managed to get a few chuckles with my airport travel shtick. :) We discussed our pain points deploying, using and administering OSSEC, and of course we had many of the same things to say. The day ended with a wonderful Italian dinner with my new friends.
On day two we started with breakfast and proceeded to talk about some of the larger deployments OSSEC has seen (more than 4,000 hosts reporting to a single manager!). I gave a brief demo of my integration work with ELSA (to be finished shortly) and then we had some yummy Japanese food. After lunch, I had my chance to present my second keynote–my vision for OSSEC. That is to say, what I would do with the project if I had unlimited time, cooperation and talent. A central theme of my second presentation was not just technology, but how valuable and crucial the friendly community aspect is. Combine these two and you start to develop a shared vision.
The day ended with discussions on the practical aspects of how to move forward: these included bug fixes, rule tuning, agent deployment improvements and roles.
Trend has shown their renewed commitment to the project and has publicly promised to keep it free. I can only hope that we join them as community members to make OSSEC better for everyone.
Please join me at the first OSSEC Symposium, sponsored by Trend Micro. This is a forum for the OSSEC community to come together and discuss all things OSSEC. We’ll not only talk about what makes OSSEC so effective, but what we can improve upon, with the specific goal of laying out a road map for the next year. The cost is free and Trend is even throwing in the food, so this is a great way to get some free CPEs for your certifications.
I have also been asked to present, and I’ll be talking about my experience using and developing for OSSEC, along with my vision for taking it to the next level. I hope to see you there!
When I first read about ELSA, I knew it was going to be a game changer. From the very beginning, this log collection and analysis application addressed many of the problems plaguing adoption of open-source log front-ends in the enterprise. It had scalability (pretty much unlimited scalability), it integrated with Active Directory, it had reporting, and it was designed to handle thousands of logs per second without breaking a sweat. In other words, it was what many expensive SIEMs promised to be but failed to deliver.
With an upcoming log infrastructure expansion project, I decided to give it a trial run on an old laptop. If things went well there, I could consider piloting the application for production usage. So, with a fair bit of trepidation, I followed the instructions in the README, and two simple commands later it was furiously downloading and installing all sorts of stuff. The installation took over an hour on this poor little laptop, since many elements of ELSA were being compiled. Honestly, I didn’t expect things to go well. With all of those Perl modules and Ubuntu packages being installed, something had to fail and stop the entire chain of events. I was a bit surprised to see that it all just worked. At the end of the installation I opened a browser and was greeted with this nice, clean screen.
Since I already had the firewall logging to this box, it was simply a matter of disabling that application so ELSA could grab the port and start receiving syslogs. A few minutes later, I was able to run a basic query by simply putting in the IP of the firewall.
I expected some sort of hourglass. After all, this was a five-year-old laptop, not a server. But the query results came back pretty much immediately (199 milliseconds, to be exact).
Performing additional queries proved somewhat more difficult, simply because I had not yet wrapped my mind around the search syntax. I tried 192.168, thinking I would see everything on that segment, but got no results; the documentation had mentioned that wildcards were not an option. After a bit more reading and experimenting, I got the hang of it. Still, it has taken me out of my comfort zone of grep-style regular-expression searching, and that will take some getting used to.
By clicking on the ‘Report On’ button and then selecting ‘Hour’ I had an instant visual of the level of log activity from this one host. From there I could save or export the results in PDF, Excel, CSV or HTML. Doing a line count across a day’s worth of logs for one IP using grep would have taken several minutes at least, never mind plotting it. The chart is somewhat Splunk-like, but without the five thousand dollars to get started or the threat of being locked out of your log history should you go over your limit.
Poking around a bit more, I began to see where ELSA really shines. The use of plugins lets you leverage community intelligence and do things like see whether an IP in the logs is known to be malicious. Can I get an AMEN!?
So, what would I change? I am starting to get some ideas, but really, they are just cosmetic things that don’t seem all that important right now. One thing is clear: ELSA was designed by someone who understands the needs of log analysts. We need a no-nonsense, clean and stable interface, fast results, and enough enterprise-level features to justify it in the workplace, and ELSA fits the bill perfectly. Listen up, log pros: this is the one to watch.
What are the ethical ramifications of waging war via computer? Does war even have to be declared? Where are the boundaries in the virtual world? What happens when machines begin to think for themselves? These are the questions I explore in an article I wrote for the most recent issue of ISSA Journal.
As always, I welcome your feedback. Feel free to challenge my assertions. All constructive comments will be let through moderation.
What should one do when discovering a vulnerability in a medical device? What if, by disclosing the vulnerability, you could put someone’s life at risk? These are the questions I explore in an article I wrote for the most recent issue of ISSA Journal.
As always, I welcome your feedback. Feel free to challenge my assertions. All constructive comments will be let through moderation.
Halloween is a special time of year. It’s that one day where we confuse our children by telling them to not only take candy from strangers, but to go out and beg for it while dressed in an overpriced polyester superhero costume. It’s also the time of year for pumpkin carving.
This is something I have wanted to do for a while now. Enter the OSSEC-O-Lantern. This is my first attempt at carving something other than a predictable toothy smile into a pumpkin. I thought the OSSEC logo would be perfect, so I gave it a shot.
This is what the pumpkin looks like before it is lit. For the center “eye,” I cut up a lens from a cheap pair of kids’ sunglasses.
And here is what it looks like lit up. Notice my feeble attempt at getting the OSSEC logo shading right.