Security Lessons from the Links
A couple of weeks ago I was enjoying a round of golf with my friend Mike and the conversation went something like this:
ME: Have you seen the new Brand X “Gigantor” driver? They are saying this thing will add 20 yards to your drives! I think I’m going to trade in my driver for one of these.
MIKE: Why? So you can hit it 20 yards farther into the woods?
That hurt, but Mike wasn’t finished.
MIKE: Look, man. You are talking about spending $400 on a club you will use on at most 14 holes per round. You’d be a LOT better off spending that money on some lessons and fixing the flaws in your swing.
ME: What’s wrong with my swing?
MIKE: Well, you do hold the right end of the club, but beyond that, pretty much everything needs some work. You are missing a lot of the fundamentals of a good, consistent swing pattern. Fix that and you may not hit the ball 20 yards farther off the tee, but you’ll drop 20 strokes off your score.
It was hard to argue with his logic. I was really after a lower score but had fallen prey to the “shiny new object syndrome”. I wanted a quick fix rather than putting in the work needed on my swing fundamentals.
As I thought about this more, I saw a parallel between my golf game and the way a lot of organizations approach information security. In the penetration tests and security program assessments I’ve performed throughout my career, I’ve seen a lot of companies chase the shiny new security technology, thinking it would cure all of their ills, all the while failing to address the weaknesses in the security fundamentals at the heart of their problems.
To a certain extent, I understand why this happens. Security teams often run on lean budgets and have operated for years under the mantra of “do more with less.” For teams that are already exhausted and overworked, it’s very appealing to latch on to any technology that promises an instant cure for the security problems in their environment.
This usually creates two problems. First, security teams end up with a dozen different security technologies that don’t integrate with one another and, therefore, actually add to the administrative burden of security. Second, many new technologies simply mask underlying security problems and major incidents still occur when use cases not covered by the technology intersect with the right threats.
The answer, unfortunately, is about as popular as telling an overweight person they need to go on a diet. Security teams need to ensure that proper fundamentals are in place across the enterprise.
If you are wondering where to start, here are a few recommendations:
- Establish and enforce secure build standards. At a minimum, disable unnecessary services, restrict use of removable media, turn off autorun features, and limit use of administrator credentials on workstations.
- Be disciplined about patch management. This includes not only Windows workstations and servers but UNIX/Linux systems, third-party applications like Adobe Reader and the Java Runtime Environment, and core applications like IIS, Exchange, Apache, Oracle, etc. Where you can’t patch, apply compensating controls like segmentation or whitelisting to manage risk.
- Address application security. Implement the necessary processes and controls to ensure that web-facing and mission-critical applications are free from the OWASP Top Ten security defects.
- Evaluate the controls and security tools you already own. Make sure that the tools and controls already in your arsenal are properly configured and used to full effect.
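To make the first recommendation concrete, a secure build standard only helps if you can verify hosts against it. Here is a minimal sketch of that idea in Python, assuming a host’s settings have already been collected into a dictionary; the baseline items and setting names are hypothetical illustrations, not a real hardening standard.

```python
# Hypothetical secure-build baseline mirroring the list above.
# Setting names and values are illustrative, not from any real standard.
BASELINE = {
    "telnet_service_enabled": False,   # unnecessary services disabled
    "removable_media_allowed": False,  # removable media restricted
    "autorun_enabled": False,          # autorun features turned off
    "local_admin_for_users": False,    # admin credentials limited on workstations
}

def audit(host_settings):
    """Return the baseline settings this host violates (or is missing)."""
    return [
        setting
        for setting, required in BASELINE.items()
        if host_settings.get(setting) != required
    ]

# Example: a workstation with autorun still on and users running as admin.
workstation = {
    "telnet_service_enabled": False,
    "removable_media_allowed": False,
    "autorun_enabled": True,
    "local_admin_for_users": True,
}
print(audit(workstation))  # ['autorun_enabled', 'local_admin_for_users']
```

The point of the sketch is the discipline, not the code: a written baseline you can check hosts against turns “establish and enforce build standards” from a policy statement into something measurable.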
Understand that implementing these fundamentals won’t make your systems, applications, and networks bulletproof, but they will reduce the available attack surface and better position your organization to evaluate and address residual risks.