Category: Disaster Recovery

Hardware vs. Software Backup and DR

Backup
Image courtesy of Stuart Miles at FreeDigitalPhotos.net

There have been a lot of changes to disaster recovery since I started my career in IT years ago.  Back then, the hardware stored the data and the software moved backups to tape.  It was a simple, if somewhat stilted, arrangement.  It also took forever, as anyone who ran "fulls" on the weekend can tell you.  An all-weekend backup window can really put a damper on things, especially when the tapes need changing.  Of course, that was before "the cloud".

Now, we have many of those functions converging.  Hardware is becoming “smart” and can now make copies of itself.  Software is becoming smart as well, with the ability to search through catalogs of backup files to show multiple instances of files, or different versions.  So – how do you fit these into your environment?

Hardware Snapshots

Smart hardware platforms and arrays have sprung up almost everywhere.  From the old days of JBOD (Just a Bunch of Disks) to today's intelligent, application-aware arrays, the mechanisms controlling storage are trying to streamline the functions that plague the storage admin.  These days, storage appliances can quiesce data on their volumes, take snapshots of those volumes, and often replicate them between like appliances or, via third-party APIs, to other storage such as the cloud.

There are many advantages to this approach.  Since these appliances place data using information lifecycle management (ILM) strategies, the appliance usually knows exactly what data resides where.  Data can be snapped quickly, often in just milliseconds.  Hardware-based replication to other storage for DR or backup purposes is much faster than traditional backup: typically only changed data is sent, and the hardware reconstructs full snapshots in the background.  It is a very nice solution for hot or warm backup sites.
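
If your array exposes an API, the whole quiesce-snapshot-replicate cycle can be scripted. Below is a minimal sketch assuming a hypothetical REST API; the endpoint paths, volume name, and replication target are illustrative only, so substitute your vendor's actual interface.

```python
# Minimal sketch of a hardware-snapshot workflow driven over a
# HYPOTHETICAL appliance REST API. Endpoint paths, payload fields, and
# the appliance hostname are illustrative only -- consult your vendor's
# actual API documentation.
import requests

APPLIANCE = "https://array01.example.com/api/v1"   # hypothetical endpoint
VOLUME = "vol_finance"                              # hypothetical volume name
DR_TARGET = "array02"                               # hypothetical replication peer

def snapshot_and_replicate(session: requests.Session) -> None:
    # 1. Ask the appliance to quiesce I/O on the volume so the snapshot
    #    is application-consistent.
    session.post(f"{APPLIANCE}/volumes/{VOLUME}/quiesce")

    # 2. Take the snapshot -- on most arrays this is a metadata operation
    #    that completes in milliseconds.
    snap = session.post(
        f"{APPLIANCE}/volumes/{VOLUME}/snapshots",
        json={"name": "nightly"},
    ).json()

    # 3. Resume normal I/O immediately; replication happens out of band.
    session.post(f"{APPLIANCE}/volumes/{VOLUME}/unquiesce")

    # 4. Replicate only the changed blocks to the DR appliance and let the
    #    remote side reconstruct a full point-in-time copy.
    session.post(
        f"{APPLIANCE}/snapshots/{snap['id']}/replicate",
        json={"target": DR_TARGET, "mode": "changed-blocks-only"},
    )

if __name__ == "__main__":
    with requests.Session() as s:
        snapshot_and_replicate(s)
```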

Software Backup

Software solutions traditionally take longer to run backups.  It takes time to traverse, or "walk," the filesystems involved, which is slower than SAN- or NAS-based snapshots.  On the other hand, software can back up storage that is not attached to a hardware appliance: individual machines and drives that are not hosted on a SAN or NAS, and even critical desktops and laptops.

Software also shines in its ability to collect information about the files it is backing up.  File attributes are gathered and organized into a searchable catalog, which is invaluable when only a single file or email needs to be restored.  Storage and backup admins can search these catalogs quickly.  If you haven't read my article "Is Backup Software Dead?", it goes into a bit more detail on these advantages.
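
To make the catalog idea concrete, here is a minimal sketch of what backup software does behind the scenes: walk a filesystem, record each file's attributes, and make them searchable. Real products layer versioning, deduplication, and media tracking on top of this; the paths used here are assumptions.

```python
# A minimal sketch of a backup catalog: walk a filesystem and record
# file attributes in a searchable database.
import os
import sqlite3
import time

def build_catalog(root: str, db_path: str = "catalog.db") -> None:
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS files (
               path TEXT, size INTEGER, mtime REAL, backed_up REAL)"""
    )
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                st = os.stat(full)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            con.execute(
                "INSERT INTO files VALUES (?, ?, ?, ?)",
                (full, st.st_size, st.st_mtime, now),
            )
    con.commit()
    con.close()

def find(db_path: str, pattern: str):
    # Search the catalog the way an admin would: "which backups contain
    # a file whose name matches this pattern?"
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT path, size, mtime FROM files WHERE path LIKE ?",
        (f"%{pattern}%",),
    ).fetchall()
    con.close()
    return rows

if __name__ == "__main__":
    build_catalog("/home")              # adjust the root to your environment
    for row in find("catalog.db", "budget.xlsx"):
        print(row)
```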

Appliances

Backup appliances are often hybrids of the two approaches.  They are hardware devices that store file and catalog information locally, keep a copy of the latest backup locally, and often offer the ability to store older backups off-site.  Appliances cannot match the speed of SAN- or NAS-based snapshots, but they do speed up software-based backups and offload the processing that has traditionally fallen to a server running backup software.

Summary

Backups are a part of life in the IT shop.  Between accidentally deleted files, ransomware, and just plain disasters, you would be crazy not to take them.  How you take them, though, keeps changing.  As new technologies come out, the face of backup and disaster recovery changes with them.  Make sure that you are taking advantage of the new technology being offered.


Is Backup Software Dead?

Is Backup Software Dead?
Image courtesy of Simon Howden at FreeDigitalPhotos.net

Is backup software dead?  Everywhere I look, I see increased functionality within storage appliances and operating systems.  Appliances will back themselves up, operating systems now offer quiescing and basic backup support, and the cloud is making backup targets stupid easy.  Should I buy dedicated backup software, or can my hardware, hypervisor, or operating system handle this?

As a storage professional, I will never discourage anyone from taking backups. As a matter of fact, I personally believe that more is better.  I am sure that many of us have heard the popular saying 'Two is one and one is none.'  Anyone who has mounted a blank backup that "worked last time" understands the wisdom of multiple backups.  But how do you balance that wisdom against the cost of additional backup methods?  While there is no one answer that works for everyone, discussions help us formulate plans.

Many hardware solutions provide backup

I'm a big fan of taking administrative tasks off-line.  As a corollary, the closer I can get the backup to where the data lives, the faster the backup will run and the SMALLER the impact on production systems.  It stands to reason: if a snapshot executes on the storage appliance in milliseconds, isn't that better than a full backup through system agents that may take hours over the weekend?

To take advantage of this, many storage vendors have added snapshot and replication support to their hardware.  In essence, this makes a copy of your data volume and moves it within your environment.  Yes, this usually only works between products in the same product family.  Yes, the stack must support quiescing.  But many OS vendors now build the ability to quiesce resident data painlessly into their operating systems.  Well, painlessly once you get it set up.  What was once the realm of large database shops and financial trading houses now ships with many OSes.
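
On a Linux host the same idea can be exercised by hand: freeze the filesystem, take a snapshot, thaw. The sketch below assumes an LVM-backed volume; the volume group, logical volume, mount point, and snapshot size are assumptions for illustration.

```python
# Minimal sketch of quiesce-then-snapshot on a Linux host using LVM.
# The volume group (vg0), logical volume (data), mount point, and
# snapshot size are assumptions -- adjust for your environment.
# Must be run with root privileges.
import subprocess

MOUNT_POINT = "/mnt/data"          # assumed mount point of the LV
ORIGIN_LV = "/dev/vg0/data"        # assumed origin logical volume
SNAP_NAME = "data_snap"

def run(cmd: list[str]) -> None:
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

def quiesced_snapshot() -> None:
    # Freeze the filesystem so in-flight writes are flushed and the
    # snapshot is consistent. Keep the frozen window as short as possible.
    run(["fsfreeze", "--freeze", MOUNT_POINT])
    try:
        # LVM snapshots are copy-on-write; 5G here is the changed-block
        # budget, not a full copy of the volume.
        run(["lvcreate", "--snapshot", "--name", SNAP_NAME,
             "--size", "5G", ORIGIN_LV])
    finally:
        # Always thaw, even if the snapshot fails.
        run(["fsfreeze", "--unfreeze", MOUNT_POINT])

if __name__ == "__main__":
    quiesced_snapshot()
```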

This seems easy enough, right?  Your storage appliance and OS will do most of the difficult work.  But what about support for your hypervisor?  Maybe those legacy apps don’t support some sort of OS quiescing?  Or what about those that don’t even have a dedicated storage appliance?

Backup Software

While it will never be as fast as a dedicated storage appliance's snapshots, backup software does have a place.  Many places, in fact.

Backup software's most important function is arguably as a broker.  The software acts as the middleman between your data (the source) and wherever you would like a copy of that data to live (the target), and it provides far more flexibility than the traditional "baked-in" solutions from hardware manufacturers.  Of course, this is a simplistic description, and many backup packages have lots of gizmos and what-nots to make a backup administrator's life easier.  But the main function is moving data.

Software works well with dissimilar hardware.  Want to back up data across many different manufacturers?  Software can do it.  Want to move it between disk, tape, and the cloud?  Removable media?  Software shines here.  Want to work with legacy applications or operating systems that don't support quiescing or other data-integrity features?  Software does this too, and gives you the flexibility to customize it to your environment.
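
As a rough illustration of the broker idea, the sketch below reads a source directory once and hands each file to several dissimilar targets. The paths are assumptions, and the cloud target is a stub standing in for whatever SDK or gateway your provider actually supplies.

```python
# A minimal sketch of the "broker" idea: one piece of software reads a
# source once and fans the data out to dissimilar targets. The targets
# here (a local disk path and a stubbed cloud uploader) are placeholders
# for whatever your environment actually uses.
import shutil
from pathlib import Path

class DiskTarget:
    def __init__(self, dest_dir: str):
        self.dest = Path(dest_dir)
        self.dest.mkdir(parents=True, exist_ok=True)

    def store(self, src: Path) -> None:
        shutil.copy2(src, self.dest / src.name)   # preserves timestamps

class CloudTarget:
    def __init__(self, bucket: str):
        self.bucket = bucket                       # hypothetical bucket name

    def store(self, src: Path) -> None:
        # Placeholder: swap in your provider's SDK upload call here.
        print(f"would upload {src} to bucket {self.bucket}")

def broker(source_dir: str, targets) -> None:
    # Walk the source once, hand each file to every configured target.
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            for target in targets:
                target.store(src)

if __name__ == "__main__":
    broker("/data/projects",                       # assumed source path
           [DiskTarget("/backup/projects"), CloudTarget("dr-copies")])
```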

What works for you

I see a place for both hardware and software in a backup strategy.  Of course, I'm also the guy who still sees tape as the most economical means to store and archive large amounts of data.  The key point is to do what works for you.  I've worked with large organizations that had more data than could reasonably be backed up through software.  In those cases, snaps and replication were a great fit.  But those same organizations had legacy apps that needed databases backed up hot and live, with log files backed up as well to ensure transactional integrity.  Software to the rescue.

My point is that there are many tools in your toolbelt to use.  But, technology always changes. Does your hardware provide all the things you need to recover your data in an emergency? With the amazing growth of data, do you see software still being a viable backup method into the future?  How do budgets and cost affect your decision?  Please share your thoughts!

 


Your Redundancy Roadmap

Computer error
Image courtesy of iosphere at FreeDigitalPhotos.net

We've all been there.  The power goes out, or someone digs up some fiber and you lose connectivity.  You either can't get your work done, or else the phone starts ringing with users who can't.  All of us in the IT industry work hard to make sure that the applications and workspaces we support are up and available to the users who need them, when they need them.  What are the basic steps we take to make sure our systems are up when they need to be up?  How do we balance availability and uptime with the IT budget?  Where does redundancy figure in our disaster recovery planning?  After all, as they said in the movie The Right Stuff, "No bucks, no Buck Rogers!"

Full power redundancy

Those fancy servers of ours go nowhere unless there is power to them.  This means dual power supplies in the physical boxes.  If one fails, the remaining power supply needs to be large enough to run the server or appliance.  In addition, these power supplies need to be on separate electrical circuits.  It does little good to have two power supplies if a single failed circuit or UPS will take down both.  Speaking of Uninterruptible Power Supplies (UPS), a truly redundant system should have failover paths for these as well.  There don't necessarily need to be two, but there should be a clear path to power in the event that a UPS fails.

Physical connections

Your servers or applications don't do users any good if no one can reach them.  That means providing multiple paths from users to each workload: multiple NICs in each enclosure, cabled to multiple switches, with multiple paths to the core.  Multiple demarcs leading to disparate connectivity paths are also important.  Of course, these extra paths get expensive, so use your best judgment about the return on investment for each option.

Virtual machines

You have your physical hosts covered and multiple paths to the data.  Now we need to work on system redundancy.  Solutions range from failover clusters with no application downtime to high-availability setups that may see a brief outage but will automatically restart a failed server on a new virtual machine.  These are two different ways to address the Recovery Time Objective (RTO) in your disaster recovery planning.  As with most things, the smaller the downtime window, the larger the price tag.

Outsource it

And of course, there is always the option to outsource.  Having a company host your servers, or going with a cloud solution, is a viable route to redundancy.  Whether these services host all of your computing infrastructure or just form part of your failover plan, they are tools in your redundancy tool chest.  Large cloud providers spread the cost of massive redundancy across many clients, making it very affordable.

Double up on things

So, we have gone over a few of the most common things IT staff use to ensure consistent connectivity in their environment.  Obviously, the specific needs of your environment are best known to you, and any decisions will need to be weighed against your budget and management's risk appetite.  This article is a jumping-off point for redundancy planning of your network.  What are you doing in your environment?


The Case for Data Protection – Tuesday's Ransomware Attack

hacker, malware, ransomware
Image courtesy of photouta at FreeDigitalPhotos.net

Tuesday's second reported attack of NSA-derived ransomware should not surprise any systems administrator or IT staff. These attacks are happening with increasing frequency, and with the release of the "Vault 7" documents serving as a how-to for hackers, they will only increase. Google "hacker culture," "Vault 7," or "script kiddies" if you want background. Suffice it to say that dangers like this are a growing concern and need to be addressed in your data protection plan.

Data Protection Plans

Getting back to the basics of data protection, today's article will discuss how backups, as part of your data protection program, can help with ransomware attacks. Backups may bring up visions of hurricanes or tornadoes, but the discipline goes well beyond that. Data protection also means, well, protecting your data from all the threats out there, including accidentally deleted files, not-so-accidentally deleted files, and even ransomed files.

So, you may be asking, how does data protection actually protect me from ransomware? To put it simply, ransomware doesn't remove your data and files the way a tornado, hard drive crash, or hurricane does. It removes YOUR ACCESS to that data.  Time and mathematics, instead of wind, rain, and lightning, are denying access to your data. The files are still there, but you can't use them to do what you need to do.

Backups

This is the case for backups. We previously discussed several use cases for snapshots in an earlier article, but in this instance any backup will do, as long as it was taken BEFORE the systems became infected with ransomware.

To this point, you should have a backup SCHEDULE. That means you don't just keep the latest copy of your backup; you keep staggered copies. One of the most famous rotation schemes is Grandfather-Father-Son. While the details of your backup schedule are beyond this article, suffice it to say that you should have at least one month of good backups to restore from. Many of the backup appliances on the market today take care of this for you, and with compression and deduplication the amount of data that can be kept on-site or remotely is truly astounding.
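
For readers who like to see the rotation spelled out, here is a minimal sketch of Grandfather-Father-Son retention. The counts are illustrative; your appliance or backup software almost certainly implements something equivalent internally.

```python
# A minimal sketch of grandfather-father-son retention: given the dates
# of existing backups, decide which ones to keep. The counts (7 daily,
# 4 weekly, 12 monthly) are illustrative; tune them to your own schedule.
from datetime import date, timedelta

def gfs_keep(backup_dates, daily=7, weekly=4, monthly=12):
    dates = sorted(set(backup_dates), reverse=True)   # newest first
    keep = set()
    keep.update(dates[:daily])                        # sons: most recent dailies
    sundays = [d for d in dates if d.weekday() == 6]
    keep.update(sundays[:weekly])                     # fathers: weekly (Sundays)
    firsts = [d for d in dates if d.day == 1]
    keep.update(firsts[:monthly])                     # grandfathers: monthlies
    return sorted(keep)

if __name__ == "__main__":
    # Pretend we have a nightly backup for each of the last 200 days.
    today = date.today()
    history = [today - timedelta(days=i) for i in range(200)]
    for d in gfs_keep(history):
        print(d.isoformat())
```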

This solution is not perfect, but it beats paying someone to release the data you generated in the first place. Or maybe not – maybe the hackers will sell you an enterprise license? Good data protection policies cover many ways to keep your data YOUR data, and that includes making sure you can access it.

In the grand scheme of things, this rash of WannaCry-type ransomware attacks will continue. Security companies are working rapidly to blunt them, but if your data protection isn't cutting the mustard, these attacks will cripple your ability to support the other departments of your company. It is time to have a discussion with management about your data protection strategy and how these attacks affect it. Like they say, "Life is tough, but it's tougher if you are stupid."


Snapshots – Everyday Uses and Hacks

Storage Snapshot
Image courtesy of ddpavumba at FreeDigitalPhotos.net

Snapshots are an amazing storage technology.  The ability to take an instant "picture" of a data volume is a tool that can be used in a variety of ways.  It makes your job easier and more manageable, and it can help secure your environment.

Different vendors implement snapshots in various ways, but the general theory remains the same: an almost instantaneous copy of data that can be moved and manipulated by a system administrator.  The theory is nice, but how can we USE this functionality?  Can it make our jobs easier and protect our systems from the everyday issues we see "in the wild"?

Among the organizations I work with, we see many innovative uses of snapshot technology: amazing examples of real-world IT shops making their jobs faster, easier, and much less stressful.  In other words, they use "business hacks" to make their snapshots work for them.  Below are five real-world ways to use snapshots that are relevant to your everyday workload.

Snapshots in your DR strategy

The first thing that pops into most people's minds is backups and disaster recovery.  Snapshots produce an exact copy of virtual machines or data volumes, stored within the storage appliance.  Most vendors allow these snapshots to be replicated or moved to another appliance, which lets you use an appliance in another location as a disaster recovery site.  Alternatively, you can mount the snapshots as volumes and let your backup server incorporate these exact replicas into your existing backup or disaster recovery plan.

There are several advantages to this approach.  The data in a snapshot is an exact point-in-time replica, so it is easy to manage RPO and RTO.  This approach also takes the backup load off your production servers; the network and storage are still involved in transferring the data, but the transfers happen out-of-band, which reduces system slowdowns and lag.  Many vendors now include cloud storage APIs in their software and appliances, so you can back up your snapshots directly to the cloud.

Update “insurance” snapshots

We've all done it: installed that patch from a system or software vendor and broken the box.  Perhaps "breaks" is a strong word; it temporarily overwhelms our system with new features and benefits.  While snapshots can't make the process of ironing out an ornery system update any easier, they can provide you with insurance.

By taking a snapshot before you update a system, you have an exact copy that you know works.  Suppose you cannot straighten out all the goodness that is Big-Name-Accounting-Package 5.0 before Monday 8 a.m. rolls around.  Now you have the ability to fail back to your old system while you continue to sort out the misbehaving upgrade; it's almost a form of version control, for those of you familiar with the software development world.  This nifty trick also works on desktops: if you are using VDI, make copies of your desktop images and apply the same concept.  It may not save you time getting to the next version, but it will certainly save your bacon as far as system uptime and help-desk calls are concerned.
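
Here is what that "insurance" looks like on a host whose data sits on a ZFS dataset; the dataset name is an assumption, and most arrays and hypervisors expose an equivalent snapshot/rollback pair through their own tooling.

```python
# Minimal sketch of "update insurance" on a host whose data lives on a
# ZFS dataset. The dataset name is an assumption. Requires root privileges.
import subprocess

DATASET = "tank/accounting"        # assumed ZFS dataset for the application
SNAP = f"{DATASET}@pre-upgrade"

def snapshot_before_upgrade() -> None:
    # Cheap, near-instant insurance taken right before the vendor patch.
    subprocess.run(["zfs", "snapshot", SNAP], check=True)

def roll_back_failed_upgrade() -> None:
    # If Monday 8 a.m. arrives and the upgrade still isn't straightened
    # out, fall back to the known-good state. Anything written to this
    # dataset after the snapshot is discarded, so plan for that.
    subprocess.run(["zfs", "rollback", SNAP], check=True)

if __name__ == "__main__":
    snapshot_before_upgrade()
    # ...run the upgrade, test it...
    # roll_back_failed_upgrade()   # only if the upgrade can't be saved
```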

Gold copy snapshots

If you are making snapshots of servers before you upgrade, you are probably already doing this, but we will mention it anyway.  Snapshots are amazing tools for creating new servers, virtual machines, or desktops.

Once you have installed an operating system and all the patches and utilities you routinely use, take a snapshot.  This new, untouched, pure-as-the-driven-snow system becomes the starting point for every new server or desktop you implement.  It is often referred to as the "gold copy," a term borrowed from software development for code that is ready to ship to customers.

This gold copy has your standard system configuration already in place: drive mappings, config files, it is all in there.  Sure, you may still edit things like networking and licensing, but you have a solid starting point.  If you need to make changes in the future, just make them and save a new snapshot.  This may not seem like much, but anyone who has built a new system from scratch will tell you it is a genuine lifesaver.
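
As a quick sketch of how cheap this can be, the example below clones a gold-copy snapshot into a new dataset, again assuming ZFS-style snapshots; dataset names are illustrative.

```python
# Minimal sketch of stamping out new systems from a "gold copy",
# assuming a ZFS-backed datastore; dataset names are illustrative. A
# clone is writable and initially shares all blocks with the snapshot,
# so new machines are created in seconds and consume almost no space.
import subprocess

GOLD_SNAPSHOT = "tank/gold-image@baseline"   # assumed gold copy snapshot

def new_system_from_gold(name: str) -> str:
    clone = f"tank/systems/{name}"
    subprocess.run(["zfs", "clone", GOLD_SNAPSHOT, clone], check=True)
    # Post-clone per-machine edits (hostname, network, licensing) still
    # happen inside the guest; the clone only provides the starting disk.
    return clone

if __name__ == "__main__":
    print(new_system_from_gold("web01"))
```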

This concept applies to both virtual machines and stand-alone servers or desktops.  Several customers we work with will use an application to “ghost” images from storage appliances to a new non-virtualized server or desktop.  Mount the snapshot you would like to use as your system image, then transfer it over to your new hardware using the disk image utility of your choice.  Of course, this works best in a virtualized environment, but it is also a valuable tool for the not-yet-virtualized.  By the way, why aren’t you virtualized yet?

Instant data set snapshots

We regularly hear from customers asking how to generate test data for new systems testing.  In several cases, systems administration is tasked with creating data sets that the consultants or systems specialists can use to ensure the systems are working as anticipated.

Instead of treating this as a problem, use the best test data there is: an exact copy of your current, live data.  There is no need to fabricate new data sets.  By creating a snapshot of your current databases, you can test systems against what was, moments ago, hot and live data, with no negative impact if that copy is corrupted or destroyed.  You can even create multiple copies to use across multiple tests.

Getting around malware with snapshots

Today's data environment can be a pretty scary place.  Look no further than the headlines to see stories about malware and ransomware wreaking havoc on organizations.  If the recent exploits of the bad guys are any indication, things are getting much larger in scope.  The WannaCry attack is still fresh in everyone's minds and is rumored to have affected over 230,000 machines worldwide.  It is safe to say that there are external threats to your data that can be remediated with snapshots.

Malware, virus, spyware
Image courtesy of Stuart Miles at FreeDigitalPhotos.net

A schedule of snapshots on your storage appliance is the solution.  Whether or not this is part of your disaster recovery planning, set up a schedule.  The concept is similar to the "update insurance" we discussed above.

By keeping a number of snapshots over time, we can go back to earlier snapshots and examine them for malware, and perhaps extract data from a snapshot taken before the encryption kicked in.  Of course, some data may still be lost.  It is up to management to decide whether to pay faceless hackers for your data or try to recover it via backups and snapshots.
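
One practical way to "explore" a snapshot is simply to mount it read-only and look for files modified after the suspected infection time. The sketch below assumes a mount point and a timestamp; both are placeholders.

```python
# Minimal sketch of triaging a mounted, read-only snapshot after a
# ransomware scare: list files modified after the suspected infection
# time so you can judge which snapshot predates the encryption. The
# mount path and timestamp are assumptions.
import os
from datetime import datetime

SNAPSHOT_MOUNT = "/mnt/snap-nightly"               # assumed snapshot mount point
SUSPECTED_INFECTION = datetime(2024, 5, 12, 3, 0)  # assumed time of compromise

def files_touched_since(root: str, cutoff: datetime):
    cutoff_ts = cutoff.timestamp()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > cutoff_ts:
                    yield path
            except OSError:
                continue   # unreadable entry; skip it

if __name__ == "__main__":
    touched = list(files_touched_since(SNAPSHOT_MOUNT, SUSPECTED_INFECTION))
    print(f"{len(touched)} files modified after the suspected infection")
    for p in touched[:20]:
        print(p)
```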

Snapshots have been in the storage technology tool bag for a while.  The technology has matured so that most storage array vendors are offering this functionality.  Over years of working with clients, we have discovered many innovative ways that people are using snapshots.  In this article, I have shared what I have seen, but I am interested in what you are doing with your snapshots.  Feel free to share and let everyone know how they can use snapshots within their storage appliance.

 


How Do I Connect to My Storage Appliance?

Fiber Channel attached Storage Appliance
Image courtesy of cookie__cutter at FreeDigitalPhotos.net

In this article, we are examining the third question asked in our original article, The Beginners Guide to what to know before you shop for a Storage Appliance.  That question, in a nutshell, is "How do I intend to connect my storage so that all of my applications can get to it?"  Answering it calls for a good look at your current environment.  Based on what you find, we will determine whether you should connect over your existing network or through dedicated connection technologies, and we will touch on some less common methods as well.

Using Existing network infrastructure

Is your network stable?  Every network administrator or sysadmin knows who the problem children are on their network.  Do you have segments or switches that are already congested or causing delays?  Adding storage traffic will only exacerbate the problem.  On the flip side of that coin, a well-running network makes adding storage easy and inexpensive.

In addition, the speed of your existing network comes into play.  Whatever your current storage needs, I would recommend against attaching storage at speeds of less than 1 Gigabit Ethernet.  As 10 GigE becomes more affordable and more pervasive, it is never a bad idea to increase bandwidth to your storage.  Fortunately, many manufacturers allow upgrades through field-replaceable units; speak with the vendor about this capability in the units you are investigating.

Most storage appliances support a variety of connection protocols.  For storage area network (SAN) use, it is important that the unit support iSCSI, which handles most externally mounted volumes, or LUNs (Logical Unit Numbers).  For network attached storage (NAS), NFS is a popular way of attaching shared storage for virtualization and *nix computing.  A given appliance may support all of these protocols or only some of them, and SMB/CIFS should be on the list for full functionality in a Microsoft network.
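
For reference, attaching these protocols from a Linux host looks roughly like the sketch below. The portal address, target IQN, export path, and mount point are assumptions for your environment.

```python
# A minimal sketch of attaching the two most common protocols from a
# Linux host: an iSCSI LUN (SAN) and an NFS export (NAS). Requires root
# and the open-iscsi / nfs-utils packages.
import subprocess

PORTAL = "192.168.10.50"                                   # assumed array address
IQN = "iqn.2024-01.com.example:array01.vol_finance"        # assumed target IQN
NFS_EXPORT = "192.168.10.50:/exports/vmstore"              # assumed NFS export

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

def attach_iscsi_lun() -> None:
    # Discover targets on the portal, then log in to the one we want.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    run(["iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login"])
    # The LUN now appears as a local block device (e.g. /dev/sdX).

def mount_nfs_share(mount_point: str = "/mnt/vmstore") -> None:
    run(["mkdir", "-p", mount_point])
    run(["mount", "-t", "nfs", NFS_EXPORT, mount_point])

if __name__ == "__main__":
    attach_iscsi_lun()
    mount_nfs_share()
```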

Using Dedicated connection technologies

There are situations where the use of the existing network may not be advisable.  If the network is older or slow, putting the data needs of shared storage on the network will just exacerbate an already slow situation.  In this case, there are dedicated connection technologies that may come to the rescue.

Ethernet connectivity is still a very viable alternative, using dedicated switches and VLANs.  VLANs are Virtual Local Area Networks that allow for the logical partitioning of ports within a switch to create virtual switches and LANs.  This lets you segregate data traffic and dedicate resources to the various ports that may be passing your data traffic.

Fiber Channel (FC) is a mature, well-established connection technology.  FC uses glass fibers to connect physical servers to physical storage using light.  While it is a bit more expensive than traditional Ethernet switching, it does have advantages: there is tremendous software and hardware support for the protocol because it is very stable and was developed specifically for storage, and it delivers data consistently with very low overhead.  Fiber Channel switches can connect servers to storage in a switched fabric, but it is also common practice to directly connect servers using FC Host Bus Adapters (HBAs; think of an HBA as the Fiber Channel version of a network card), which cuts out the expense of a Fiber Channel switch for smaller deployments.

Exotic Connection methods

In addition to the well-established protocols of Fiber Channel and iSCSI, there are other ways to connect storage.  Some storage appliances allow connection to servers via specialized technologies like InfiniBand or SAS ports, and eSATA is also available.  These options range from the super fast (InfiniBand, which is also expensive) to the fairly common and slow.  "Exotic" connection technologies serve special cases and are outside the scope of this article; they will limit your field of vendors, but they won't disqualify you from finding a storage appliance.

Considerations of Connectivity

In addition to the connection methods discussed above, there are other connectivity possibilities to consider.  Bonded connections are one.  Bonding makes multiple physical paths (read: cables or ports) appear as one logical path to the data; in essence, two 1 Gb Ethernet connections become one logical 2 Gb connection.  With many servers and users hitting the storage at once, a single path will be quickly overwhelmed, and bonding allows several ports to send data simultaneously.  Bonding also helps with failover.
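
A quick back-of-the-envelope check shows why bonding (and failover headroom) matters; all of the numbers below are assumptions to plug your own figures into.

```python
# A back-of-the-envelope sketch: aggregate bandwidth under normal
# operation and after a single link failure, compared against the
# demand you expect. All numbers are assumptions.
LINKS = 2                    # bonded 1 GbE ports on the appliance
LINK_GBPS = 1.0              # per-link speed in gigabits per second
CLIENTS = 40                 # servers/VMs hitting the storage at peak
PER_CLIENT_GBPS = 0.04       # assumed average demand per client

demand = CLIENTS * PER_CLIENT_GBPS
normal_capacity = LINKS * LINK_GBPS
degraded_capacity = (LINKS - 1) * LINK_GBPS   # one cable, NIC, or port fails

print(f"peak demand:              {demand:.2f} Gbps")
print(f"capacity (all links):     {normal_capacity:.2f} Gbps")
print(f"capacity (one link down): {degraded_capacity:.2f} Gbps")
print("survives a link failure" if degraded_capacity >= demand
      else "a single link failure will saturate the remaining path")
```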

Another connectivity consideration is failover.  It may not happen often, but if a cable, NIC, or port fails on the storage appliance or on the network side, every server using that storage suddenly loses access to its data, or all of your virtual machines come down at once.  You have placed all of your proverbial eggs in one proverbial basket.  Failover mitigates this risk.

This is often accomplished through the use of multiple controllers, or "heads."  Two (or more) controllers provide disparate paths to the data: one head can crash and you still have access, one power supply can fail and you still have access.  Manufacturers vary in how they support this functionality, so research it carefully.  Make sure the storage appliance will run on one power supply.  Verify that the controller heads support failover.  Implement bonded connections.

Summary

In this article, we have discussed the final question raised in our original article about finding the best storage appliance for your environment.  We have gone over attaching shared storage to your existing network, attaching the NAS or SAN via new dedicated connectivity, and attaching via special, non-standard, or exotic connection modes.  Many vendors support these differing connectivity methods.  Specialized connectivity will limit the number of storage appliances you have to choose from, but most users who need it know that from the start and can plan accordingly.

 

 

 


A Quick Discussion on Disaster Recovery Planning

I got a call from a customer not long ago asking me questions about Disaster Recovery planning. Now we’ve all developed DR plans, and a quick search on the interwebs will get even the novice started on the basics, but there were three things that we went over that dredged up old memories and I thought I would share them here. Those things are prioritizing your servers (or functions), doing the math, and asking somebody to help.

A Place for Everything and Everything has a Place

When I worked as a systems engineer at a large company, we developed a divisional DR plan. After a lot of busy work thinking we needed to recover everything now, and not getting that to happen without a project budget we could denominate in gold bars, we recognized that not every server or business function needed to be up immediately. There was a method to recovering systems, and we decided to group everything into three classes with different recovery point and recovery time objectives (RPO and RTO).

The most important systems were classified as Class "A" and were the first to be recovered. These were the business-critical boxes that needed to be up ASAP: systems that directly supported the business lines and the areas of the business visible to the customer.

When the Class "A" recoveries were completed, or at least well underway, we could focus on the Class "B" systems. These were not as critical as the Class "A" systems but were still important to the business: internal systems the business needed to run that were not immediately visible to the public, or systems that could wait until the Class "A" boxes were up.

The last class was the Class "C" systems. These needed to come up eventually, but they had the longest RTO and the largest RPO of the bunch: the "nice to have" systems. By categorizing our recovery, we could get the important stuff done first and then work on the rest.
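
A small sketch makes the classification concrete: record each system's class and objectives, then recover in priority order. The systems and the RTO/RPO numbers below are examples only; yours come from the business, not from IT.

```python
# A minimal sketch of the tiering idea: record each system's class and
# objectives, then recover in priority order. All values are examples.
RECOVERY_TIERS = {
    "A": {"rto_hours": 4,  "rpo_hours": 1},    # customer-facing, up ASAP
    "B": {"rto_hours": 24, "rpo_hours": 12},   # internal but necessary
    "C": {"rto_hours": 72, "rpo_hours": 24},   # nice to have
}

SYSTEMS = [
    {"name": "web storefront", "tier": "A"},
    {"name": "email",          "tier": "A"},   # trust me, it's an "A"
    {"name": "hr system",      "tier": "B"},
    {"name": "intranet wiki",  "tier": "C"},
]

def recovery_order(systems):
    # Class "A" sorts before "B" before "C", so recovery follows priority.
    return sorted(systems, key=lambda s: s["tier"])

for system in recovery_order(SYSTEMS):
    objectives = RECOVERY_TIERS[system["tier"]]
    print(f'{system["tier"]}: {system["name"]:15s} '
          f'RTO {objectives["rto_hours"]}h  RPO {objectives["rpo_hours"]}h')
```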

Do the Math is a simple concept

Just like your professors told you back in school, do and show your work. Go ahead and run those storage studies so you know how much you will be recovering. Do a growth test and see what the "delta" (information change) is on your local systems on a daily and weekly basis, then plan around that. Run the numbers, build a spreadsheet. Are your WAN connections big enough to replicate your data within your time allowance? How often can you make snapshots with the available space on the SAN or NAS? Do you have enough hardware to recover what you plan to recover at the site? Something as simple as: can you even read your tapes? (Boy, am I dating myself!)
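
As a worked example of "doing the math," the sketch below checks whether a nightly delta fits through the WAN inside the replication window. Every number is a placeholder for the results of your own growth study.

```python
# "Do the math" in its simplest form: can last night's delta actually
# cross the WAN inside the replication window? The numbers below are
# placeholders -- plug in the results of your own growth study.
DAILY_DELTA_GB = 350          # measured change per day
WAN_MBPS = 200                # usable WAN bandwidth in megabits per second
EFFICIENCY = 0.7              # protocol overhead, contention, retries
WINDOW_HOURS = 8              # overnight replication window

usable_mbps = WAN_MBPS * EFFICIENCY
# GB -> megabits (decimal), divided by throughput, converted to hours.
transfer_hours = (DAILY_DELTA_GB * 8 * 1000) / usable_mbps / 3600

print(f"estimated transfer time: {transfer_hours:.1f} h "
      f"(window is {WINDOW_HOURS} h)")
if transfer_hours > WINDOW_HOURS:
    print("the delta does not fit -- bigger pipe, smaller delta, or ship disks")
```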

Another "do the math" concept is to activate your plan with a limited scope. Nothing shows you where your plan is weak like trying to recover a small amount of data. It doesn't have to be a full-on test: activate your plan for a single server. Send someone over to the DR site and have them try to recover last night's email server, or the HR system, or even just a small portion of a system. Pick 100 records to restore, just enough to tell you where your plan needs more work and where you can improve.

If your company is big enough to have one, invite the audit department to tag along. Nothing impresses the audit folks and regulators (if you are in that line of business) like testing your plan and working to improve it. Nothing is perfect the first time around or even the seventh, so do the math to improve it.

Ask somebody. But not just anybody

The Beatles were on to something there. There are people in your organization who can help you out. When we classified the systems in the organization, we didn't just pick those classes at random. We asked for help. The IT department sent out questionnaires to department heads and had them rank the systems we had identified by importance and impact. We also asked about any systems we might have missed that were not on the list. You would be amazed at the systems WE thought were important versus the systems the BUSINESS thought were important.

Remember that your DR plan is not an end product. It is designed to let the IT assets of your company help recover the business lines of your company.  Of course, information is vital to your company, but how long will you be in business if the widgets don’t get made? If Accounting needs the company chat system to be up first, then the chat system needs to be up first. And no matter what anyone says, email is a Class “A” system. If management doesn’t believe that, turn it off for an hour and see how the phones light up.

Nothing that I have said in this article is rocket science, it is just a few lessons learned from building a plan, and then working to test it. Technology changes, and thus the tools used to implement your plan over time will vary, but the fundamentals of prioritizing your servers, doing the math, and involving the business lines for help still remain pertinent today.
