Top 15 Disaster Recovery Steps To Avoid Disastrous Data Loss

When it comes to disaster recovery and business continuity planning, the most common areas of concern are ransomware and fire or natural disaster events.  That’s understandable, because that’s where most of the marketing and hype around disaster recovery is aimed.  One of the main reasons is that ransomware keeps getting trickier and more clever in how it is distributed, and therefore more concerning.  However, it might surprise you that ransomware and natural disaster events combined account for only roughly 10% of data loss incidents as a whole.

[Figure: Data loss statistics by cause]

So what’s the other 90%?  Well, hardware failure and user error account for 76% of all data loss, according to Unitrends.  Think about that for a moment.  How often do you see an advertisement about avoiding hardware failure or about training your end users?  Compare that with how often you hear or see something about the dangers of ransomware.

The truth of the matter is that the two leading causes of data loss just aren’t that exciting or motivating.  Ransomware is much scarier, and most people feel less in control when it comes to avoiding it.  Don’t get me wrong, ransomware is a serious threat and you absolutely should have a plan to protect your organization against it.  You just need to worry about protecting yourself from hardware failure and user error just as much, if not more so.

So How Do I Protect My Data?

Let’s look at each segment of the data loss pie chart independently.  The single largest risk to your organization is:

Hardware Failure

The following is a checklist of steps you can take immediately to protect against data loss from a hardware failure:

  • Complete a hardware inventory and assess its age. – While this seems like common sense, most organizations don’t even have a list of all their hardware and when it was purchased.  When asked, most of the organizations we speak with say they believe their computers and servers are fairly new.  However, the average age of the systems when we inspect them is a little over 4 years.  That’s important because best practices dictate that personal computers and laptops be replaced every 5 years at most, and servers and network equipment every 3-5 years.  (For a simple way to automate this check, see the sketch after this list.)
  • Monitor systems for indicators of trouble. – In just about every hardware failure there are warning signs that lead up to the eventual failure, if you know what to look for and are actively monitoring for them.  There are multiple software options that make this monitoring automatic and easy, even if you don’t contract with a managed services provider.  We have previously published a guide to those – Click Here For That Post.
  • Establish a hardware replacement lifecycle. – We all have great intentions when it comes to replacing hardware.  The problem is that most organizations put this on the back burner until it’s too late, and it normally comes up only after a complete failure and potential data loss.  Get on a plan to replace your entire organization’s hardware on a schedule, so that you can plan for new computers every budget year.  If you want an idea of what the replacement cycle should be for the different types of equipment in your operations, check out this previous blog post.
  • Have an emergency plan. – Ultimately we want to avoid a hardware failure, but you need a plan in case one happens unexpectedly.  Even brand new hardware can fail, so having a plan to work through it is critical.  Advanced hardware replacement services are a great option at a reasonable expense.  Most vendors offer some form of this; they all call it something different, but the service is typically very similar.  For a yearly charge, they promise to have replacement hardware onsite, and in some cases installed by their techs, within a contracted window – anywhere from as little as 2 hours up to the next business day.
  • Back it up, back it up, back it up! – It seems obvious, and when asked, most business owners and managers will say they have backups.  However, upon closer inspection, it is rare that all of the critical data that owners and managers think is being backed up actually is.  People store their own files locally and assume they are protected, and there are always inevitable changes to databases, file storage, or newly added applications that leave data unprotected.  The key to avoiding this is to review it regularly – and by regularly, I mean MONTHLY.  You should be auditing what is being backed up, from where, and what the retention policies are for everything.
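
To make the inventory and lifecycle steps concrete, here is a minimal Python sketch that flags equipment past its recommended replacement age.  The CSV file name, its columns (device, type, purchase_date), and the thresholds are illustrative assumptions, not a standard – adapt them to however you track your inventory.

```python
# Sketch: flag hardware that is past its recommended replacement age.
# Assumes a CSV named "inventory.csv" with columns device, type, and
# purchase_date (ISO format, e.g. 2019-06-01) - all illustrative.
import csv
from datetime import date

# Outer bounds from the guidance above: PCs and laptops 5 years,
# servers and network equipment 3-5 years (5 used as the limit here).
MAX_AGE_YEARS = {"pc": 5, "laptop": 5, "server": 5, "network": 5}

def age_in_years(purchased: date, today: date) -> float:
    return (today - purchased).days / 365.25

def overdue_devices(path: str = "inventory.csv"):
    today = date.today()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            purchased = date.fromisoformat(row["purchase_date"])
            limit = MAX_AGE_YEARS.get(row["type"].lower(), 5)
            age = age_in_years(purchased, today)
            if age >= limit:
                yield row["device"], round(age, 1), limit

if __name__ == "__main__":
    for device, age, limit in overdue_devices():
        print(f"{device}: {age} years old (replacement due at {limit})")
```

Run something like this once a month and the output becomes your shopping list for the next budget cycle.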

User Error 

Outside of hardware failure, user error is the second leading cause of data loss.  Bet you didn’t factor that in when thinking about the biggest threats to your data integrity, did you?  Here are some common scenarios that involve user error, and steps you can take to avoid them:

  • Accidental deletion of files – Over 85% of our data recovery requests come from users who have accidentally deleted or inadvertently overwritten a file.  It’s easy to do if you are in a hurry and not paying attention to which files or folders you are modifying.  The best way to avoid issues here goes back to what I mentioned earlier – backups!  Making sure that you are saving files and documents in a folder that has some type of backup and retention is critical.  Not only do you need to make sure it is backed up, you also need the ability to keep more than one revision.  In many instances, files are deleted or overwritten before the user even realizes it, and you need multiple revisions to go back to (see the snapshot sketch after this list).  This will save your bacon!
  • Inappropriate versions of applications – I get it, it’s easy to use personal versions of software because we want to try them out without committing to a purchase.  The problem is when you end up 3 months or more down the road still using a personal version of Box or OneDrive with a lot of data stored in it.  You compound the issue further if multiple users in the organization share the same personal account.  One user can inadvertently or purposely delete or overwrite all of the data, and with personal versions of the software you are almost certainly going to be limited in what backups or revisions are available (as well as support from the vendor).  Rule of thumb – don’t use personal versions of software with multiple users!
  • End-user security awareness training – This one applies to multiple categories, yet very few businesses actually invest in training their end users on best practices within their environments.  This includes which applications should be used, how those applications are used, where data should be stored, how it should be stored, and what they should and should not be using when it comes to Cloud and other apps.  Best practices dictate that you conduct a formal training with your staff on these topics at least annually.  Ideally you are conducting this type of training monthly, to help your staff identify the latest threats and learn to recognize them.
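
As promised above, here is a hedged Python sketch of the “multiple revisions” idea: it keeps timestamped copies of a folder so an overwritten or deleted file can be pulled back from an earlier snapshot.  The folder paths and retention count are assumptions for illustration – real backup products do this far more efficiently, with deduplication and offsite copies.

```python
# Sketch: keep timestamped snapshots of a folder so deleted or overwritten
# files can be recovered from an earlier revision. Paths and the retention
# count are illustrative, not a recommendation for any specific product.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("C:/Shared/Documents")  # folder to protect (assumed)
DEST = Path("D:/Snapshots")           # where revisions live (assumed)
KEEP = 30                             # number of revisions to retain

def snapshot() -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = DEST / stamp
    shutil.copytree(SOURCE, target)   # full copy; real tools deduplicate
    # Prune the oldest snapshots beyond the retention count.
    revisions = sorted(p for p in DEST.iterdir() if p.is_dir())
    for old in revisions[:-KEEP]:
        shutil.rmtree(old)
    return target

if __name__ == "__main__":
    print(f"Snapshot written to {snapshot()}")
```

The point is the revision history, not the copying itself – one backup copy that gets overwritten every night will not save you from a file that was corrupted three days ago.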

Software Failures

Fortunately, software failures are not common in today’s environments.  However, to make sure you are protected as well as possible, follow these guidelines when it comes to software and protecting your data:

  • Once again – back it up – Notice a common theme going on here?  I can’t stress enough that in order to avoid just about all of these issues, a solid, reliable, and predictable backup is essential.
  • Be cautious when upgrading – We all want the latest and greatest, especially when it gives your software more security or bug fixes.  However, you need to be cautious and fully understand what changes will occur when you upgrade versions.  All too often I see organizations upgrade software only to find that the database upgrade has problems or changes the way the application interacts with it.  Have a detailed conversation with your software vendors about how an upgrade affects the actual data the application uses.  Then get a scope of work in writing from the vendor that details what will change, what to expect, and what their guarantee is on the upgrade process.  This may seem like overkill, but it ensures that if there are problems after the upgrade, you can push back on the vendor to fix them.  Finally, revert back to step one on this list: NEVER, and I mean NEVER, perform a software upgrade on your line-of-business applications without first making sure you have a good backup to revert to (a simple pre-upgrade check is sketched after this list).  Sometimes upgrades completely change the structure of the application and the libraries it uses; if that is the case, you also want a full system (image) level backup you can revert to.
  • Use the correct versions of software – Again, this may seem like common sense, but I have seen multiple organizations try to cut costs by using the “free” or home-user versions of software.  Not only does this most likely violate the vendor’s license, it also puts you at risk of data loss.  If your application fails or your data is lost or damaged, you will be very limited in what support you receive from the vendor if you are on a free or home-user version.  Additionally, make sure you get a multi-user licensed version if multiple users will access the software.  I have seen organizations lose years’ worth of data by sharing one login across their entire user base instead of purchasing the multi-user version.  A proper license not only keeps you compliant, it lets you set permissions so that one user cannot delete all of the data (purposely or inadvertently).
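
Since the upgrade advice above boils down to “never upgrade without a fresh backup,” here is a small sketch of a pre-upgrade gate.  It refuses to continue unless a backup success marker was written within the last 24 hours; the marker-file convention, its path, and the freshness window are all assumptions – substitute whatever signal your backup tool actually provides.

```python
# Sketch: refuse to start an upgrade unless a recent backup exists.
# Assumes the backup job touches a marker file on success; the path
# and the 24-hour freshness window are illustrative conventions.
import sys
import time
from pathlib import Path

BACKUP_MARKER = Path("/var/backups/last_success")  # assumed marker file
MAX_AGE_SECONDS = 24 * 60 * 60                     # require a backup < 24h old

def backup_is_fresh() -> bool:
    if not BACKUP_MARKER.exists():
        return False
    age = time.time() - BACKUP_MARKER.stat().st_mtime
    return age < MAX_AGE_SECONDS

if __name__ == "__main__":
    if not backup_is_fresh():
        sys.exit("No backup in the last 24 hours - aborting the upgrade.")
    print("Recent backup found - safe to proceed with the upgrade.")
```

Wire a check like this into whatever script or runbook kicks off the upgrade, and the “oops, we forgot the backup” scenario becomes much harder to hit.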

Viruses and Malware

This is the area most businesses think of first when it comes to protecting their data.  While it accounts for only 7% of overall data loss incidents, it is one that strikes fear in business owners and I.T. managers alike.  The challenge of mitigating this risk is constantly changing and is not going away any time soon.  However, having a strategy to deal with it when it happens (notice I did not say IF) is crucial to getting your operations back to normal as quickly as possible.

  • Backups – I think you may have heard this one before… but you need solid and reliable backups!  If you want any chance of recovering from a virus, malware, or particularly a ransomware attack, you must have backups you can revert to.  The biggest key to backups (not just in this scenario but in all of them) is that you have to test them regularly – read this as at least monthly – to make sure you can actually restore your data (a minimal restore-test sketch follows this list).  In addition, you need to monitor daily what is backed up and what errors or issues crop up, because even with the best backup solutions out there, there are occasional failures.
  • Defense in depth – This is not a new concept, but you should never allow your security approach to be single-minded.  So what do we mean by this?  You should have multiple layers of security that provide the following:
    • Anti-virus software – Yes, you still need AV software, and no, there is no AV software that is 100% effective.  The software manufacturers do their best to stay up to date, but zero-day vulnerabilities leave a hole that unfortunately can’t always be closed.  However, more and more antivirus vendors now offer behavior-based solutions that don’t just rely on a static list of definitions.  These solutions analyze what is going on inside your system to predict what is questionable and stop it.  For example, they can compare the rate of file changes against normal activity and decide whether malicious software is modifying files and actually encrypting them.
    • DNS filtering – By using a service that automatically filters where content is coming from and analyzes whether it is a reliable source, you add another layer that helps catch the fake sites that enable malicious activity.
    • Operating system maintenance – Keeping your operating system up to date is crucial to limiting your exposure to vulnerabilities.  Additionally, you should audit permissions on your operating system to make sure you do not leave unintended permissions that allow system-level changes.
    • Scheduled password rotations – I get it, it’s a lot easier to reuse the same passwords across multiple sites, software, and systems.  However, it is one of the biggest security risks we see on a daily basis.  Passwords should be complex (uppercase, lowercase, numbers, and special characters, at least 8 characters long), they should be rotated at least every 90 days, and your screens should lock automatically after at most 15 minutes of inactivity.  (A tiny policy-check sketch also follows this list.)
    • End-user security awareness training – This once again crosses over into the user error category.  If you are conducting ongoing monthly training on how to recognize malicious emails and phishing attempts, you will limit your exposure to ransomware and malware.
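
Here is the restore-test sketch promised in the backups bullet above.  It reuses the timestamped-snapshot layout from the earlier sketch (an assumption, not a universal convention) and checks a “canary” file – a known file that should never change – against a recorded checksum.  The paths and expected hash are placeholders.

```python
# Sketch: a monthly restore test using a "canary" file - a file that should
# never change. Pull it from the newest snapshot and verify its checksum.
# The snapshot layout, paths, and expected hash are all assumptions.
import hashlib
from pathlib import Path

SNAPSHOTS = Path("D:/Snapshots")             # snapshot root (assumed)
CANARY = "restore-test/canary.txt"           # relative path of the test file
EXPECTED_SHA256 = "replace-with-known-hash"  # recorded when the canary was made

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test() -> bool:
    snapshots = sorted(p for p in SNAPSHOTS.iterdir() if p.is_dir())
    if not snapshots:
        return False                         # no snapshots at all is a failure
    restored = snapshots[-1] / CANARY
    return restored.exists() and sha256_of(restored) == EXPECTED_SHA256

if __name__ == "__main__":
    print("Restore test PASSED" if restore_test() else "Restore test FAILED")
```

A passing canary doesn’t prove every file is recoverable, but a failing one tells you immediately that your backups are not doing what you think they are.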
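
The password rules above are also concrete enough to check automatically.  Here is a minimal sketch that validates a password against them; the rules mirror the bullet above, and the sample passwords are just illustrations.

```python
# Sketch: validate a password against the policy described above -
# uppercase, lowercase, number, special character, at least 8 characters.
import string

def failed_rules(password: str) -> list[str]:
    """Return the rules a password fails; an empty list means it passes."""
    failures = []
    if len(password) < 8:
        failures.append("at least 8 characters")
    if not any(c.isupper() for c in password):
        failures.append("an uppercase letter")
    if not any(c.islower() for c in password):
        failures.append("a lowercase letter")
    if not any(c.isdigit() for c in password):
        failures.append("a number")
    if not any(c in string.punctuation for c in password):
        failures.append("a special character")
    return failures

if __name__ == "__main__":
    for pw in ("hunter2", "Tr0ub4dor&3"):
        missing = failed_rules(pw)
        print(pw, "OK" if not missing else f"missing: {', '.join(missing)}")
```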

Natural Disaster

Finally, coming in at just 3% of overall data loss incidents are natural disasters such as fires, floods, tornadoes, and even power outages.  When people mention business continuity and disaster recovery, chances are these are the immediate images that come to mind.  But in reality they are a very small percentage of the actual risks you may have to deal with.  Nonetheless, here are some key items to help keep your data safe in the event of a natural disaster:

  • Backups… well, do we really need to go over this any more?
  • Environmental monitors – For as little as a couple of hundred dollars, you can monitor multiple aspects of your server and/or data room.  These devices track temperature, humidity levels, water presence, and smoke, and if an alarm gets tripped, they will email a warning to whomever you designate (a minimal alerting sketch follows this list).  IT Watchdog is one of the brands we have used for years and have had great luck with.
  • Utilize Cloud-based image-level backups – As part of your ongoing backup and disaster recovery, you may want to consider a Cloud-based image-level backup.  These are not cheap, but they do provide the ultimate in resiliency by allowing you to boot critical servers on Cloud-based infrastructure in the event you lose your building or data infrastructure.  This could be a lifesaver for your business in those circumstances.
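
To show the alerting idea from the environmental-monitor bullet, here is a hedged Python sketch that polls a sensor and emails a warning when the server room gets too hot.  The JSON endpoint, field name, threshold, addresses, and mail relay are all assumptions – check your device’s actual interface (many expose readings over HTTP or SNMP).

```python
# Sketch: poll an environmental monitor and email an alert when the server
# room gets too hot. The endpoint, field name, threshold, and mail settings
# are all assumptions - consult your device's documentation.
import json
import smtplib
from email.message import EmailMessage
from urllib.request import urlopen

SENSOR_URL = "http://192.168.1.50/status.json"  # hypothetical device endpoint
MAX_TEMP_F = 80.0                               # alert threshold (assumed)
ALERT_TO = "ops@example.com"                    # who gets the warning

def check_and_alert() -> None:
    with urlopen(SENSOR_URL, timeout=10) as resp:
        reading = json.load(resp)
    temp = float(reading["temperature_f"])      # field name is an assumption
    if temp > MAX_TEMP_F:
        msg = EmailMessage()
        msg["Subject"] = f"Server room temperature alert: {temp}F"
        msg["From"] = "monitor@example.com"
        msg["To"] = ALERT_TO
        msg.set_content(f"Temperature is {temp}F; threshold is {MAX_TEMP_F}F.")
        with smtplib.SMTP("localhost") as smtp:  # assumed local mail relay
            smtp.send_message(msg)

if __name__ == "__main__":
    check_and_alert()
```

Dedicated hardware monitors do this out of the box; a script like this is only worth it if you already have a sensor that exposes readings but no built-in alerting.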

In Conclusion

While most people think that ransomware is the biggest threat to their business, the data suggests a more ominous tale.  The good news is that, for the most part, you can control your exposure to the biggest risks of data loss.  By making absolutely certain you have TESTED and reliable backups, predicting and planning your equipment replacement, and keeping your end users trained and up to speed, you give yourself the best chance of keeping operations ongoing and unaffected.

The first step in all of this is to fairly and truthfully analyze where you are in this whole process.  Burying your head in the sand and hoping these things are taken care of is one of the biggest reasons organizations end up losing data.

If you are not sure where you stand on any of this and want to get control ASAP – we have a solution to help at an affordable one-time price.