Guys, you need to patch your stuff.
In April, I had the opportunity to talk to IT pros at four different trade shows. That’s right, four in one month! I traveled to Germany and the U.K., talking to the private and public sectors as well as to military and government organizations.
One problem to rule them all: Patching.
It can’t be that hard!
I’ve heard so many things. It started with:
“Honestly, I don’t know what software is installed on all the desktops.”
Ouch. That statement hurts. Unfortunately, it’s a problem that’s not always easy to solve.
A couple of years ago, I was working for a computer game developer, and back then, the environment was completely locked down. That setup comes with advantages.
The end users weren’t local admins, so there was no sprawl, no crazy software deployed everywhere.
As soon as a baseline was established, the IT team knew exactly what was installed, where, and who was using it. Only a few packages were maintained, and you didn’t need a crystal ball to predict the behavior of business applications when a new patch arrived.
A few years later, I was working as a technical support manager for a different company, and they were operating on the principle of selective mistrust. I was a local admin, but my team was given zero permissions.
My current employer, on the other hand, operates on “basic trust” after training all employees. This system comes with advantages, too.
I don’t want to start a discussion about which policy is best; in the end, it depends on the situation.
But clearly, the third option comes with an essential requirement: an active software inventory. A one-time baseline isn’t enough, because the environment keeps changing.
If you kick off a search for a software inventory solution, you’ll find options for every budget. It starts with hand-knitted PowerShell scripts that feed a CSV, and ends with all-powerful, cloud-based asset management solutions that even support procurement.
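And the entry point really can be that small. Here’s a minimal sketch of the hand-knitted-script idea, written in Python rather than PowerShell for illustration, assuming a Windows box: it reads the usual Uninstall registry keys and dumps name, version, and publisher to a CSV. Treat it as a starting point, not a product.

```python
# Minimal software inventory sketch (Windows): read the standard Uninstall
# registry keys and write DisplayName/DisplayVersion/Publisher to a CSV.
import csv
import winreg

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def read_value(key, name):
    # Return a registry value, or an empty string if the value is missing.
    try:
        return str(winreg.QueryValueEx(key, name)[0])
    except OSError:
        return ""

def installed_software():
    # Walk both Uninstall hives and yield (name, version, publisher) tuples.
    for path in UNINSTALL_KEYS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue  # the WOW6432Node key does not exist on 32-bit systems
        with root:
            for i in range(winreg.QueryInfoKey(root)[0]):
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                    name = read_value(sub, "DisplayName")
                    if name:
                        yield (name,
                               read_value(sub, "DisplayVersion"),
                               read_value(sub, "Publisher"))

with open("inventory.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "Version", "Publisher"])
    writer.writerows(sorted(set(installed_software())))
```

Run it per machine (or push it out with your management tool of choice) and you at least have something better than a shrug when someone asks what’s installed.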
Okay, so the point is valid, but there are different solutions available depending on the situation. I need to know what software is in use before I can even consider a patching strategy.
Let’s move on.
“There’s always a new version of X and Y. It’s difficult to keep up with the changes.”
Nope, it’s not.
Many websites exist for precisely this purpose. Also, I would assume that every IT pro spends some time on general IT news each day (that should be part of the job anyway), and these websites usually report the most critical updates. We all know the usual suspects.
An alternative could be to sign up for newsletters to have the news delivered to you. (Well, that’s not an alternative for me as I’m at war with newsletters, but your mileage may vary.)
Even for this task, various software options exist: freeware tools, desktop security clients that may include a patch management feature, or purpose-built solutions.
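And if newsletters aren’t your thing either, a few lines of script can do the delivery for you. Here’s a rough sketch that polls a security advisory RSS feed and prints anything it hasn’t seen before; the feed URL is a placeholder, and feedparser is just one convenient library for the job, so swap in whatever your vendors actually publish.

```python
# Tiny advisory watcher: poll an RSS/Atom security feed and print entries
# we have not seen before. The feed URL below is a placeholder.
import feedparser  # pip install feedparser

FEED_URL = "https://example.com/security-advisories.rss"  # placeholder URL
SEEN_FILE = "seen_advisories.txt"

def load_seen():
    # Previously reported entry IDs, one per line.
    try:
        with open(SEEN_FILE, encoding="utf-8") as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()

def main():
    seen = load_seen()
    feed = feedparser.parse(FEED_URL)
    new_ids = []
    for entry in feed.entries:
        entry_id = entry.get("id") or entry.get("link") or entry.get("title", "")
        if entry_id and entry_id not in seen:
            print(f"NEW: {entry.get('title', '(no title)')} -> {entry.get('link', '')}")
            new_ids.append(entry_id)
    # Remember what we already reported so the next run stays quiet.
    with open(SEEN_FILE, "a", encoding="utf-8") as f:
        for entry_id in new_ids:
            f.write(entry_id + "\n")

if __name__ == "__main__":
    main()
```

Drop something like that into a scheduled task and the “I can’t keep up” excuse gets a lot thinner.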
So, keeping up to date doesn’t require much time at all. Let’s move on.
“I can’t keep up with the testing.”
My first thought was, “Really?” But yes, sometimes this still can be a problem—a serious one.
Good news first: it used to be worse.
Yes, testing was a nightmare back in the days of the one-trick ponies, when we used approximately a million desktop applications for each task and half of them required an obscure framework.
Today, looking at my desktop, I’m writing this in what is probably the most widely used word processing program in the world, but apart from its siblings from the same office suite and an SSH client, it’s pretty much the only desktop application I still use.
Everything else is web-based, so all I need is a browser. I could very well write this text in a browser too, but sometimes I’m old school. Wait, did I write “sometimes”? Anyway...
But, of course, distributed applications with “fat clients” and local components do still exist.
Local authorities, in particular, sometimes rely on software that would probably crumble to dust at the slightest breeze and only runs under very controlled conditions.
Testing is required. Sorry, no workaround available.
Most organizations keep a copy of the production environment on a VM with current snapshots, but in my experience, not every organization follows a test protocol. That’s an invitation for surprises, guys.
Yes, creating a test protocol does take some time, but it saves even more time in the not-so-long run.
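A test protocol doesn’t have to be a fifty-page document, either. It can start as a scripted smoke test you run against the patched snapshot before anything touches production. Here’s a bare-bones sketch; the hosts, ports, and health URL are made-up examples, so substitute whatever your business application actually needs to answer.

```python
# Bare-bones post-patch smoke test: after patching the test VM, confirm that
# the critical services still respond. Hosts, ports, and URLs are examples only.
import socket
import urllib.request

TCP_CHECKS = [("testvm01", 3389), ("testvm01", 445)]   # example: RDP and SMB
HTTP_CHECKS = ["http://testvm01:8080/health"]          # hypothetical health endpoint

def tcp_open(host, port, timeout=5):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_ok(url, timeout=5):
    # True if the URL answers with HTTP 200.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

failures = []
for host, port in TCP_CHECKS:
    if not tcp_open(host, port):
        failures.append(f"TCP {host}:{port} not reachable")
for url in HTTP_CHECKS:
    if not http_ok(url):
        failures.append(f"HTTP check failed: {url}")

if failures:
    print("Patch smoke test FAILED:")
    for item in failures:
        print(" -", item)
else:
    print("Patch smoke test passed - document the result and schedule the rollout.")
```

Ten minutes to write, seconds to run, and every patch cycle gets the same checks instead of whatever someone remembers to click through that day.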
Time-saving, yes. Here comes another statement I’ve heard:
“I have no time.”
Oh, come on. Sure, whatever the position, lots of things in IT require attention, and some have a higher priority than others. We all have days where things are exploding, and we don’t even know where to start.
But what’s more important than a real, existing security problem? And whose head will be served on a platter if a successful breach could have been avoided with just ten minutes of routine tasks a day?
I’m sure the ticket Kevin from accounting wrote this morning can wait a couple more minutes.
Take your time and spend it meaningfully, plan and stick to it, and most importantly: document each action. It makes life easier even outside an emergency.
Patching is so important.
I’m always surprised by security breaches based on vulnerabilities that had fixes released six months ago. No, not surprised—I’m sad.
Cybersecurity is a complex topic, and anticipating attack vectors can quickly become a full-time job.
But keeping software up to date isn’t exactly rocket science.
Even if the organization is short on resources and purchasing software to automate pretty much all of the above isn’t possible right now, creating a plan and using free tools can at least start to mitigate the risks.
Even the Pareto principle isn’t rocket science: covering the 20 percent of the work that removes 80 percent of the risk is a perfectly good place to start.
What else did I hear at the trade shows? Ah, this one was funny:
“I don’t know if anyone is taking care of patching.”
The peak of the drama! It was probably just a developer.
This post originally appeared on LinkedIn.