
Hardware v. Software Backup and DR

Image courtesy of Stuart Miles

There have been a lot of changes to disaster recovery since I started my career in IT years ago.  Back then, the hardware stored the data and the software moved backups to tape.  It was a simple, if somewhat stilted, environment.  It also took forever, as anyone who ran "fulls" on the weekend can tell you.  An all-weekend backup window can really put a damper on things, especially when tapes need to be changed.  Of course, that was before "the cloud".

Now, we have many of those functions converging.  Hardware is becoming “smart” and can now make copies of itself.  Software is becoming smart as well, with the ability to search through catalogs of backup files to show multiple instances of files, or different versions.  So – how do you fit these into your environment?

Hardware Snapshots

Smart hardware platforms and arrays have sprung up almost everywhere.  From the old days of JBOD (Just a Bunch of Disks) to intelligent, aware arrays, the mechanisms controlling storage are streamlining the functions that plague the storage admin.  These days, storage appliances can quiesce data on their volumes, take snapshots of those volumes, and oftentimes replicate those volumes between like appliances or, via third-party APIs, to other storage, like the cloud.

There are many advantages to this approach.  Since these appliances place data using ILM strategies, the appliance usually knows what data resides where.  Data can be snapped quickly, often in just milliseconds.  Hardware-based replication to other storage for DR or backup purposes is much faster than traditional backup.  This is often accomplished by shipping just the changed data and letting the hardware assemble full snapshots in the background.  A very nice solution for hot or warm backup sites.
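To make that idea concrete, here is a minimal Python sketch of the changed-block concept behind this kind of replication: hash fixed-size blocks, then ship only the blocks whose hashes differ from the last pass.  The block size, the hashing scheme, and the send_to_replica transport are illustrative assumptions, not any vendor's actual implementation.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # assume 4 MiB blocks purely for illustration

def block_hashes(path):
    """Return one digest per fixed-size block of a volume image or large file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(path, previous_hashes):
    """Yield only the blocks whose digest differs from the last pass --
    the essence of shipping 'just changed data' to a replica."""
    with open(path, "rb") as f:
        for index, old_digest in enumerate(previous_hashes):
            block = f.read(BLOCK_SIZE)
            if hashlib.sha256(block).hexdigest() != old_digest:
                yield index, block

# Usage sketch: take a baseline, let the volume change, then ship only the deltas.
# baseline = block_hashes("/data/volume.img")
# for index, data in changed_blocks("/data/volume.img", baseline):
#     send_to_replica(index, data)   # hypothetical transport function
```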

Software Backup

Software solutions traditionally take longer for backups.  It takes time to traverse, or "walk," the filesystems involved, which is slower than SAN- or NAS-based snapshots.  On the other hand, software can back up storage that is not associated with a hardware appliance.  This includes individual machines and drives that may not be hosted on a SAN or NAS, and even critical desktops and laptops.

Software is also a great solution for its ability to collect information on the files it is backing up.  All the file attributes are collected and organized into a catalog that is user searchable, in the event that only one file or email needs to be restored.  Catalogs are well organized and searchable by storage and backup admins.  If you haven't read my article entitled "Is Backup Software Dead?", it goes into a bit more detail on these advantages.
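As a rough illustration of what a catalog buys you, here is a small Python sketch that walks a backup set, records file attributes in SQLite, and lets you search for every cataloged version of a file by name.  It is a toy model of the idea, not any particular product's catalog format.

```python
import os
import sqlite3
import time

def build_catalog(backup_root, db_path="catalog.db"):
    """Walk a backup set and record every file's attributes in a searchable catalog."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS files
                   (path TEXT, name TEXT, size INTEGER, mtime REAL, backed_up REAL)""")
    run_time = time.time()
    for dirpath, _, filenames in os.walk(backup_root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            con.execute("INSERT INTO files VALUES (?, ?, ?, ?, ?)",
                        (full, name, st.st_size, st.st_mtime, run_time))
    con.commit()
    return con

def find_versions(con, filename):
    """List every cataloged instance of a file -- e.g. all copies of a lost spreadsheet."""
    query = ("SELECT path, size, datetime(mtime, 'unixepoch'), datetime(backed_up, 'unixepoch') "
             "FROM files WHERE name LIKE ? ORDER BY backed_up")
    return con.execute(query, (filename,)).fetchall()
```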


Backup Appliances

Appliances are often hybrids of both types of backup.  They consist of a hardware appliance that stores file and catalog information locally, keeps a copy of the latest backup locally, and oftentimes offers the ability to store older backups off-site.  Appliances do not match the speed of SAN- or NAS-based snapshots, but they speed up software-based backups and offload the computing load that has traditionally been reserved for a server running backup software.


Backups are a part of life in the IT shop.  Between accidental deletion of files, ransomware, and just plain disasters, you would be crazy not to do them.  How you do them, though, changes constantly.  As new technologies come out, the face of backup and disaster recovery changes with them.  Make sure you are taking advantage of all the new technology that is being offered.


Data Security – lessons from Equifax and HBO

Image courtesy of David Castillo Dominici

You can ask Equifax, Ashley Madison, or HBO how data security worked out for them.

With the latest high-profile attacks at large companies that hold customer data, anyone that stores sensitive information within their computer systems should take a look at their data security policies.  While this article won’t help you develop a detailed data security plan, I hope that it will start to explain several of the things that you should know and address.

For starters, let's talk nomenclature.  Digital data has three states.  These states are generally agreed upon; what each encompasses, however, is often debated.  The three states are data-in-use, data-in-motion, and data-at-rest.  Addressing all three states will cover all of your data and its potential exposure.


Data-in-use

Data-in-use is data that resides in a processing device (like your computer) and is actively held in memory.  Most of the security risks for data-in-use arise when people have physical access to the computers using the data, when systems are behind on updates or malware protection, or when network accounts and security are lax.

Solutions to these issues are fairly straightforward.  Keep computing resources physically secure.  Make sure anti-virus and anti-malware software is up to date.  When operating system and application updates and patches come out, install them.  Good systems administration is also key: keep user accounts secure, disable or delete old or unused accounts, and educate users to recognize potential data leaks caused by poor security practices.  Limit access to data to trusted resources using mechanisms like CHAP or ACLs.  Keep tight control over other means of access, like vendor portals and outside APIs that you may use.


Data-in-motion

Data that is moving from storage to processing is considered data-in-motion.  This usually means the WAN or the local network, where your data travels on its journey between at-rest and in-use.  It also includes transport to and from the cloud, where data may be moving over very public networks on its way between in-use and at-rest.

The most common way to defend against data-in-motion snooping is encryption.  Always encrypt data-in-motion.  Always.  Many vendors provide virtual private network (VPN) solutions or WAN acceleration appliances that include encryption as part of the package.  These are traditionally for WAN use and encrypt the entire communications channel.  There are also solutions for local LAN traffic; check out IPsec if you haven't already.  It may also be worth your while to encrypt the data itself, not just the traffic tunnels.  This can become expensive, either in real dollars or in computation, so it may not be a fit for your organization.
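For a concrete example of encrypting data-in-motion at the application level, here is a short Python sketch that wraps a plain socket in TLS using the standard library's ssl module.  The host name and payload are placeholders.

```python
import socket
import ssl

def send_encrypted(host, port, payload: bytes):
    """Wrap a plain TCP connection in TLS so the payload never crosses the wire in clear text."""
    context = ssl.create_default_context()           # verifies the server certificate
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            print("Negotiated", tls_sock.version())  # e.g. TLSv1.3
            tls_sock.sendall(payload)

# send_encrypted("backup.example.com", 443, b"nightly catalog export")  # hypothetical host
```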

Limiting physical access to your network is also a must.  Keep your networking gear behind locked doors, and secure any wireless access.  Again, this is all basic Network Security 101.


Data-at-rest

Data that is stored on a device but not actively being used is considered data-at-rest.  This usually means disk, appliance, tape, or other removable media.  Yes, thumb drives and CDs are included in data security plans.  Data security plans often overlook backups and backup media, too.

Securing data-at-rest is again all about physical security and encryption.  Physically secure your storage appliances and tapes and you have solved 90% of the issues with data-at-rest security: if no one can get to your data, then no one can steal it.  For encryption there are also solutions.  There are software applications that will encrypt data, and several operating system vendors have built this functionality into their OSes, though it does tend to slow systems down.
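As a small illustration of software encryption for data-at-rest, here is a Python sketch using the third-party cryptography package's Fernet recipe to encrypt and decrypt a file.  The package and the file name are assumptions for the example; key management is, as always, the hard part.

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

def encrypt_file(path, key):
    """Write an encrypted copy of a file; only holders of the key can read it at rest."""
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(path + ".enc", "wb") as f:
        f.write(ciphertext)

def decrypt_file(enc_path, key):
    """Recover the plaintext, provided you still have the key."""
    with open(enc_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

# key = Fernet.generate_key()       # store this safely -- losing the key means losing the data
# encrypt_file("payroll.csv", key)  # hypothetical file name
```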

Appliances are also a solution for encrypting data.  There are specialty manufacturers whose devices sit between the storage media and the computing resources and encrypt data "on the fly".  These tend to be a bit expensive and are for specialty applications.

Seagate makes a Self-Encrypting Drive (SED).  Special chipsets encrypt everything written to or read from the disk.  These drives tend to be a bit slower than traditional disk (figure a 10% penalty on reads and writes), but they are a nice solution for clients trying to meet data security standards.  The drive does not store the encryption keys, so taking the disks does not compromise the data.  But for heaven's sake, DO NOT FORGET OR LOSE YOUR KEYS.

Summarizing Data Security

In this article, we have discussed data security.  A data security plan must consider each state of data separately.  Security measures may span more than one state, but remember that they are implemented differently depending on the state.  This article is an introduction to basic data security; it is not all-encompassing, and we have only scratched the surface.  Read up, work with your in-house security people, or engage competent data security consultants to get the best security that you can.  Your data may not include government officials looking to "hook up" or the spoilers for next season's Game of Thrones, but you never know.








The case for HCI – a user survey review

Image courtesy of Master isolated images

I recently had the opportunity to review some information collected by TechValidate on an HCI vendor, Scale Computing.  Now, for full disclosure, AR Consultant Group is a partner of Scale Computing.  Scale Computing makes an HCI product marketed under the HC3 product family.  Even so, the data collected by TechValidate is pertinent to HCI solutions across the board.  After reviewing the data and how it was presented, I found it not only easy to research but also a great way to show potential customers, and even the mildly curious, the advantages of HCI.

Many are not familiar with HCI, or HyperConverged Infrastructure.  It is the combination of compute resources, storage, and a hypervisor, without separate licensing costs, in a single preconfigured package.  Certainly, vendors add other things to differentiate themselves, but these three are standard within the hyperconverged solution set.  In this particular instance, Scale Computing targets the SMB community: businesses with 100 to 500 employees and 1 to 5 IT staff.

The Data

First and foremost, the data bowled me over.  Not the actual data itself, but the method with which it is presented.  If you haven't seen the TechValidate package, you certainly should.  It is a great way to present data and customer opinions.  TechValidate surveyed customers after purchase about the product advantages they found, along with other traditional data points, and then presented this data using real-life Scale customers.  The company profiles also back the data up.  It seems like an innovative way to proactively publish data that allows customers and prospective clients to investigate the specific data points that interest them, from people who are actually using the product.

The Results

The graphs provided by TechValidate center around challenges that are solved by the Scale HC3 solution.  Also charted are what benefits the customer perceives from a hyperconverged solution.  First, let’s examine the Operational Challenges data.

Operational Challenges Solved by HCI

Operational challenges solved by hyperconverged technologies from Scale fell primarily into two categories – improvement of processes and reduction of cost or complexity.

The process-improvement challenges appear to revolve around the benefits of virtualization in a preconfigured, clustered setup.  By handling the hardware and software clustering aspects of virtualization through hyperconvergence, these solutions allow for hassle-free improvements to customer processes.  Server hardware clustering and failover, failover of other infrastructure components, and disaster recovery all became much simpler.  In other words, the manufacturer made these benefits easy to implement for the customer's business.

Customer Content verified by TechValidate.

On the cost-and-complexity side, hyperconverged solutions let customers avoid the expense of purchasing everything separately.  They also reduce the time spent administering all of those systems individually, and they simplify support by consolidating it under a single vendor support contract.

By making the IT function more efficient and getting more value for the budget, these solutions address many of the main concerns of SMB staff and management.

Biggest Benefit from Scale Computing HCI

A follow-up survey asked customers of Scale Computing about the actual business benefits they found from implementing HC3.  Again, these fell into two basic categories – Ease of use and improvement to the information technology environment.

Customer References verified by TechValidate.

Ease of use is the largest benefit by a wide margin.  Making the product easy to use increases interest from customers: "Hey, this will work for me."  It also shows a clear benefit to the customer.  Now that tasks are "easier" to do, there is less after-hours and weekend work, more time to pursue other projects, and less effort to train new staff on how to support the system.  Believe me, coming from a guy who carried a weekend pager and supported physical servers, these are huge benefits.

Improvement of environment covers many different benefits that customers found, including improved reliability, scalability, and high availability of business-critical workloads.  While these benefits are available to any company, the ability of a single product to bring them all together is a game changer.  It is now possible to get these benefits from a single package that works in your environment, with a minimum of stress, and that is expandable and less expensive than doing it a la carte.

The Feedback

It is refreshing to see actual, verifiable customer feedback from a third party rather than marketing slicks: data that extols the value of both HCI in general and Scale Computing's implementation of it.  This customer feedback is available in a condensed form, with the ability to dive deeper into the data, so potential customers can research their own industry, geographic location, and company size.  These are real-world data points from customers, not a marketing department.



Is Backup Software Dead?

Image courtesy of Simon Howden

Is backup software dead?  Everywhere I look, I see increased functionality within storage appliances and operating systems.  Appliances will back themselves up, operating systems now have quiescing and basic backup support, and the cloud is making backup targets stupid easy.  Should I buy dedicated backup software, or does my hardware, hypervisor, or operating system handle this?

As a storage professional, I will never discourage anyone from taking backups.  As a matter of fact, I personally believe that more is better.  I am sure that many of us have heard the popular saying, 'Two is one and one is none.'  Anyone who has mounted a blank backup that "worked last time" understands the wisdom of multiple backups.  Balancing this wisdom against the cost of additional methods of backup, what should I do?  While there is no one answer that will work for everyone, discussion helps us formulate plans.

Many hardware solutions provide backup

I'm a big fan of taking administrative tasks off-line.  As a corollary, the closer I can get the backup to where the data lives, the faster the backup will occur and the SMALLER the impact on production systems.  It stands to reason: if a system snapshot takes place on the storage appliance and takes milliseconds to execute, isn't that better than a full backup through system agents that may take hours over the weekend?

To take advantage of this, many storage vendors have added support within their hardware for snapshots and replication.  In essence, this makes a copy of your data volume and moves it within your environment.  Yes, this usually only works between products within the same manufacturer's family.  Yes, the vendors must support quiescing.  But many OS vendors are now building functionality into their operating systems to quiesce resident data painlessly.  Well, painlessly once you get it set up.  What was once the realm of large, intense database houses or financial trading firms now ships with many OSes.
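To show roughly what this looks like outside of an appliance, here is a hedged Python sketch that flushes buffers and takes a copy-on-write LVM snapshot on a Linux host.  It assumes an LVM volume group and logical volume (the names vg0 and data are placeholders), and a simple os.sync() stands in for real application-level quiescing.

```python
import datetime
import os
import subprocess

def lvm_snapshot(vg="vg0", lv="data", size="2G"):
    """Flush buffers and take a copy-on-write LVM snapshot. The snapshot itself
    completes in moments, regardless of how large the volume is."""
    os.sync()  # crude quiesce: flush dirty pages; real applications need their own quiesce hooks
    snap_name = "{}_snap_{:%Y%m%d%H%M}".format(lv, datetime.datetime.now())
    subprocess.run(
        ["lvcreate", "--snapshot", "--name", snap_name, "--size", size,
         "/dev/{}/{}".format(vg, lv)],
        check=True)
    return "/dev/{}/{}".format(vg, snap_name)

# snapshot_dev = lvm_snapshot()   # back up or replicate snapshot_dev at your leisure
```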

This seems easy enough, right?  Your storage appliance and OS will do most of the difficult work.  But what about support for your hypervisor?  Maybe those legacy apps don’t support some sort of OS quiescing?  Or what about those that don’t even have a dedicated storage appliance?

Backup Software

While it will never be as fast as dedicated storage appliance backup, backup software does have a place.  Many places in fact.

Backup software's arguably most important function is as a broker.  The software acts as the middleman between your data (the source) and wherever you would like a copy of the data (the target), and it provides a greater amount of flexibility than the traditional "baked-in" solutions from hardware manufacturers.  Of course, this is a simplistic view, and many backup packages have lots of gizmos and what-nots to make a backup administrator's life easier.  But the main function is moving data.
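Here is a bare-bones Python sketch of that broker idea: a small class that walks a source and hands every file to whatever targets are registered.  The DiskTarget class and the paths are illustrative assumptions; a real product adds cataloging, scheduling, retention, and far more target types.

```python
import pathlib
import shutil

class DiskTarget:
    """One possible target: a directory on any mounted storage (flattens paths for brevity)."""
    def __init__(self, root):
        self.root = pathlib.Path(root)

    def store(self, source_file):
        dest = self.root / source_file.name
        shutil.copy2(source_file, dest)
        return dest

class BackupBroker:
    """The 'middleman': walk the source and hand every file to whatever targets are
    registered -- a disk directory today, a cloud or tape gateway class tomorrow."""
    def __init__(self, targets):
        self.targets = targets

    def run(self, source_dir):
        for source_file in pathlib.Path(source_dir).rglob("*"):
            if source_file.is_file():
                for target in self.targets:
                    target.store(source_file)

# BackupBroker([DiskTarget("/mnt/backup")]).run("/home/shared")   # hypothetical paths
```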

Software works well with dissimilar hardware.  Want to back up data across many different manufacturers?  Software can do it.  Want to move it between disk, tape, and the cloud?  Removable media?  Software shines here.  Want to work with legacy applications or operating systems that may not support quiescing or other data-integrity features?  Software does this and gives you the flexibility to customize it to your environment.

What works for you

I see a place for both hardware and software in a backup strategy.  Of course, I'm also the guy who still sees tape as the most economical means to store and archive large amounts of data.  The key point is to do what works for you.  I've worked with large organizations that had more data than could reasonably be backed up through software; in that case, snaps and replication were a great fit.  But those same organizations had legacy apps that needed databases backed up hot and live, with log files backed up as well to ensure transactional integrity.  Software to the rescue.

My point is that there are many tools in your toolbelt to use.  But, technology always changes. Does your hardware provide all the things you need to recover your data in an emergency? With the amazing growth of data, do you see software still being a viable backup method into the future?  How do budgets and cost affect your decision?  Please share your thoughts!



Your Redundancy Roadmap

Image courtesy of iosphere

We’ve all been there.  The power goes out, or someone digs up some fiber and you lose connectivity. You either can’t get your work done, or else the phone starts ringing with users who can’t. All of us in the IT industry work hard to make sure that the applications and workspaces that we support are up and available to the users that need them, when they need them.  What are the basic steps that we address to make sure our systems are up when they need to be up?  How do we balance availability and uptime with the IT budget? Where does redundancy figure in our disaster recovery planning? After all, as they said in the movie, The Right Stuff, “No bucks, no Buck Rogers!”

Full power redundancy

Those fancy servers of ours go nowhere unless there is power to them.  That means dual power supplies in the physical boxes.  If one fails, the remaining power supply needs to be large enough to run the server or appliance.  In addition, these power supplies need to be on separate electrical circuits; it does little good to have two power supplies if a single failed circuit or UPS will take down both.  Speaking of uninterruptible power supplies (UPS), a truly redundant system should have failover paths for these as well.  There don't necessarily need to be two, but there should be a clear path to power in the event that a UPS fails.

Physical connections

Your servers or applications don't do users any good if no one can access them.  That means multiple paths from users to each workload: from multiple NICs within each enclosure, cabled to multiple switches, with multiple paths to the core, physical paths are important for redundancy.  Multiple demarcs leading to disparate connectivity paths are also important.  Of course, these multiple paths get expensive, so use your best judgment as to the return on investment for these options.

Virtual machines

You have your physical hosts covered and multiple paths to the data.  Now we need to work on system redundancy.  Solutions range from failover clusters with no application downtime to high-availability configurations that may allow a limited amount of downtime but will automatically restart a server on a new virtual machine if the old one fails for some reason.  These are two different ways to address the Recovery Time Objective (RTO) portion of disaster recovery.  As with most things, the smaller the downtime window, the larger the price tag.
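As a toy illustration of the high-availability restart approach, here is a Python sketch of a watchdog that polls a health URL and asks the hypervisor to restart the VM when it stops answering.  The URL and the restart_vm callback are hypothetical stand-ins for whatever monitoring and hypervisor API you actually use.

```python
import time
import urllib.request

def is_healthy(url, timeout=5):
    """A very small health probe: can we reach the workload at all?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def watchdog(url, restart_vm, check_every=30):
    """Poll the service; if it stops answering, ask the hypervisor to restart the VM.
    The polling interval plus boot time is effectively your recovery time window."""
    while True:
        if not is_healthy(url):
            restart_vm()          # hypothetical call into your hypervisor's API
        time.sleep(check_every)

# watchdog("http://app.internal/health", restart_vm=lambda: print("restarting..."))  # placeholders
```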

Outsource it

And of course, there is always the decision to outsource.  Having a company host your servers, or going with a cloud solution, are viable options for redundancy.  Whether you are allowing these services to host all of your computing infrastructure or using them as part of your failover plan, they are tools in your redundancy toolchest.  Large cloud providers can spread the cost of massive redundancy across many clients, making it very affordable to use.

Double up on things

So, we have gone over a few of the most common things that IT staff use to ensure consistent availability in their environments.  Obviously, the specific needs of your environment are best known to you, and any decision will need to be weighed against your budget and management's risk appetite.  This article is designed as a jumping-off point for redundancy planning for your network.  What are you doing in your environment?


Parking Permit Purgatory

Image courtesy of artur84

I had an experience this past week that caused me to take pause and think about the process of overcoming obstacles and formulating solutions.

My wife has her own business, and I am very proud of her success.  In the process of building her business, she put a tremendous amount of mileage on her vehicle.  This vehicle has gone from The City of Brotherly Love to the Rio Grande and all points in between  – several times a year.  This vehicle has a name (Fiona, since you asked) and is almost a part of the family.

As with any tool that is mechanical, it wears out, and that is what this one did.  Now, we expected this, but it brings us to the small issue that we had – and it is probably not what you think.

While Fiona was in the dealership having major repair work done (to the tune of almost 4 weeks), my youngest daughter started her senior year in high school.  We can all remember the excitement of being in your final year of secondary education and having the freedom to drive yourself to school – and of course, out with your friends.  Herein was the problem.  The school system which serves my county will provide a parking permit for one vehicle.  Only one.  As a family we have several vehicles, but the one that is available during school hours was Fiona.  Which was in the shop.  Now we had a rental, kindly provided by the dealership, but it could not be issued a permit.  We did not own it and could not provide current registration for it.  We could not even prove that it was registered within our county.

Why can't my daughter just take the provided transportation to school, you ask?  Heck, it's my tax dollars paying for those buses and salaries anyway!  Well, she is taking advantage of classes offered by a local college campus that provide credit hours toward both her diploma and a college degree, as well as an internship with a local business for one of those classes.  Truly a dilemma, since neither my wife nor I desire spending most of our day "in the street" shuttling our daughter between these three locations.

A Parking Permit – the initial problem

The problem that we encountered was one of communication.  We had a situation that fell outside of the expected norms, like many projects that I routinely see in the course of business.  A problem that, to me, seemed very easy to fix.  Do you have a temporary permit?  Could we use it on this vehicle until my wife got Fiona back from the dealership in good working order?  It seemed an easy fix.

Or not.

“Could we provide a tag receipt for this temporary vehicle, or could we provide the title,” we were asked?  Well, no.  A well known national chain of rental cars had all of that, in their corporate name.  It was a temporary vehicle.  “Could we provide the rental agreement and a tag receipt for my wife’s vehicle?”  Well, we could get a copy of the rental agreement, but it would be in the name of the local dealership that was providing us the car.  And we could provide the latest tag for my wife’s vehicle, but it would expire in the next week or so.  We can’t get a tag for her vehicle when it has no engine in it.  What to do?

Engaging an expert for a solution

Fortunately for my daughter (and, by association, my wife and me), there was a solution.  Eventually, we discovered a person at the school who dealt with exceptions to the standard rules.  This subject matter expert was able to help us navigate parking permit purgatory, and we got a temporary parking permit for our daughter.  And it only took us three weeks.  Perhaps you don't think that this is a big deal, but I mention it to illustrate something that we all encounter in business.

Well defined solutions exist for most of our needs, wants, and desires.  That is why Henry Ford was so successful.  He manufactured vehicles that most people wanted at a reasonable price and within a reasonable time frame.  He made it so anyone could afford a vehicle.  But what about the people that fell outside of the bell curve, the people needing something different?  Well, there are solutions for those people when they research the options, or get an expert involved.  Ford isn’t the only car company out there.  There are companies that manufacture nothing but large trucks, electric cars, or vehicles for where there are no roads.  Soon maybe even cars that can fly.

The point is that these solutions are out there, and knowledgeable consultants can help you make the right decision.  Or, in extreme cases, help you build the right solution.  Not everyone needs a Mars Rover, but when you need one, you need one, right?  And unfortunately, that convertible mustang just won’t do.

How this applies to your business

For most of the things you need, there are simple solutions that are mindless to choose. A loaf of bread or a ream of printer paper.   For the things that you need that are more complex, more specialized, or just more expensive to acquire, it pays to engage an expert – or become one.  Someone who knows the ropes, has been there before, or who can put you in touch with those who do.  Maybe not for those times when you are trying to just “get a parking permit”, but you never know.  I can assure you that it will save you time, money, and above all frustration.


Now for the hard part – P2V Conversion

Image courtesy of hywards

OK, you've researched, you've spent the money, and now you are the proud owner of a hyperconverged system.  Or any truly virtualized system, really.  If not, why not?  So how do you get all those physical servers into virtual servers, in what is commonly referred to as P2V conversion?

Well friend, pull up a chair and let’s discuss how to convert your systems.

There are three ways to get those physical boxes that now crowd your data closet and make it a hot, noisy mess into a virtualized state.  Which method you choose will depend on several factors.  The three are not mutually exclusive, so feel free to use several of these P2V conversion methods in your environment, depending on the specific requirements for each of your physical servers.

Rebuilding the Server

Rebuild the server.  Over the course of time, and after around four dozen service pack updates, there is a lot of trash in a system.  Sometimes it is better just to start fresh.  This method is best if you would like to update the underlying operating system, or if you are starting to see strange system behaviors.  It suits standard services like DNS, domain controllers, or application servers where you have a clear way to transfer just the data and configuration files.  A clean install of the operating system and application services is a great way to breathe fresh life into an old, tired workhorse of a physical server.

Pros:
  • That clean, fresh feeling of a new OS install
  • Existing physical servers stay up and functional while the new server installation occurs

Cons:
  • General installation and configuration issues
  • Time constraints – depending on how many servers you are building, well, you are building new servers

P2V Utilities

Utilities.  There are as many utilities out there to handle P2V conversions as there are stars in the sky, and everyone has a particular favorite.  In essence, these utilities make a disk-image copy of your system and convert it to an ISO image, or even directly into a virtual disk format.  It is the same concept as a bare-metal restore.  These utilities make an exact copy of your application servers, so all the data and application files stay the same, but so do any strange behaviors that may exist within your operating system.  If your server is running well, this may be the choice for you.  (A bare-bones sketch of the imaging step appears after the pros and cons below.)

Unfortunately, these utilities require that your server is offline while the copy is made.  So plan for a long weekend while this gets done, and make sure your users are aware that the IT department is closed for business while it happens.  This is probably not for those highly available services that NEED to be up all of the time, like your 911 systems or the servers that control ATMs.

Pros:
  • Easy, guided conversion of disk images
  • Often converts directly to an ISO image
  • Conversion often possible between virtual disk formats

Cons:
  • Server MUST be offline to perform the conversion
  • Time consuming
  • Application downtime
  • Freeware or inexpensive utilities may not have support contracts available
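For the curious, here is the bare-bones version of what these imaging utilities do under the hood, sketched in Python: a byte-for-byte copy of an offline disk into an image file.  The device and image paths are placeholders, and a real utility adds compression, verification, and format conversion on top.

```python
CHUNK = 64 * 1024 * 1024  # copy in 64 MiB chunks

def image_disk(device="/dev/sdb", image_path="/backup/server01.img"):
    """Byte-for-byte copy of an (unmounted!) disk into an image file that a
    hypervisor can attach or convert into its own virtual disk format."""
    copied = 0
    with open(device, "rb") as src, open(image_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
    return copied

# bytes_copied = image_disk()   # run with root privileges while the server is offline
```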


Dedicated High Availability or Replication Software

Dedicated software exists for those servers that need to be virtualized but can't be down for the hours it may take to use the disk utilities discussed in the section above.  These utilities are pay-for, but they fill a need not addressed by the disk-image utilities.  They often operate like a highly available failover pair: agents are loaded on two servers, one physical server holding the information you wish to virtualize (the "source"), and one virtual server with only an OS and the agent, which acts as the "target".

In this scenario, the utility makes a full "backup" from the source server to the target server.  Changes then propagate from the source to the target on a regular schedule.  When the cutover occurs, the physical server goes down and the virtual server comes up as an exact copy, often down to the same IP addressing.  This cutover can often happen in only minutes.

Pros:
  • Less downtime for those critical servers
  • Exact copies of functional servers, down to the minute
  • Support contracts are available

Cons:
  • Usually a pay-for utility or service – while this may not be an obstacle for most IT shops, large numbers of servers mean large licensing fees
  • Often takes more time and better scheduling than other conversion methods
  • A small window where services are unavailable while the cutover occurs
  • Invasive – new agent software is loaded on the source and target servers

We have discussed the three ways that new hyperconvergence or virtualization shops can convert their physical servers to virtual ones.  Building new servers, using disk-imaging utilities, and using high-availability replication agents all have pros and cons.  Any of these three approaches will move your physical servers to virtual servers and get you the benefits of virtualization.


SSD – How much is too much?

Image courtesy of pixabay

I wrote an article just a few days ago entitled "Where did the 15K disk drive go?"  It was a short piece, quickly done and meant to draw fairly obvious conclusions: when given a choice between faster and fastest, for the same or close to the same money, people will always choose fastest.  Little did I suspect the sheer number of comments and emails I would get from that article.  It appears that everyone has an opinion on storage technology and how storage vendors build out their appliances.  So, in the spirit of keeping the discussion going, I've decided to ask the flip side of most of those comments and emails: "If 15K drives are dead, then how much SSD is too much?"  Let the games begin!

How much SSD?

I heard a Texan once say, "How much is too much?  Well, if it's money in the bank or cows on the ranch, you can never have too much!"  He was talking about things that directly affected his performance as a cattleman and his ability to do his job.  The same can be said for SSD in the ever-changing storage arena.  How much SSD is too much in a storage array or on a server?  I'm not talking about the sheer amount of physical space – that depends on the applications and data repositories involved, plus a little bit for growth.  What I am talking about is a percentage: of 100% of the storage space on a given server or storage appliance, what percentage should be SSD – fast but expensive?

In my opinion, much will depend on a storage study.  How many IOPS does your environment need so that storage is not the bottleneck?  Is there too much latency in your SAN or NAS?  If you don't know the answers to these questions, then a storage study should be your next step (check out my article here).  SSD tends to be the most expensive option on a dollars-per-gigabyte basis, but that ratio is coming down as manufacturing processes improve.  But we all work in the here and now, so as of today, how much SSD is too much in your SAN, NAS, or hyperconverged appliance?
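To put some very rough numbers on it, here is a back-of-the-envelope Python sketch that estimates how many SSDs it takes to cover the IOPS that spinning disk can't, and what share of raw capacity that represents.  Every per-device figure in it is an illustrative assumption; substitute the numbers from your own storage study.

```python
import math

def ssd_mix(required_iops, hdd_count, hdd_iops=150, hdd_tb=8,
            ssd_iops=50_000, ssd_tb=1.92):
    """Back-of-the-envelope storage study: how many SSDs cover the IOPS the
    spinning disks can't, and what share of raw capacity that works out to.
    Every per-device figure here is a rough, illustrative assumption."""
    shortfall = max(0, required_iops - hdd_count * hdd_iops)
    ssd_count = math.ceil(shortfall / ssd_iops)
    ssd_capacity = ssd_count * ssd_tb
    total_capacity = ssd_capacity + hdd_count * hdd_tb
    return ssd_count, 100 * ssd_capacity / total_capacity

# Example: a 60,000 IOPS target on 24 x 8 TB SATA drives
# drives, pct = ssd_mix(60_000, hdd_count=24)
# -> roughly 2 SSDs, i.e. only about 2% of raw capacity needs to be flash
```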

All Flash, or no Flash?

I have seen several examples of SSD ratios, all aided by software in one form or another.  These fall into two camps at either end of the spectrum.

To start, there are storage appliances with no SSD at all.  These are fairly simple, and I don't see them around much.  If all you need is an array of large disks spinning merrily along, and your storage goals are met, do you really need SSD?  I have been in proof-of-concept trials where SSD made no difference in system performance until the programmers changed the application code to make it more parallel.

Then there is the "all flash, all the time" argument.  I am familiar with one storage array vendor that sells an all-flash array with compression and deduplication and claims that, across the environment, the cost per used gigabyte is cheaper than their hybrid array (which does not offer any compression-type functionality).  Of course, with deduplication your mileage may vary, but that makes a compelling argument for all flash.  There are certain industries where milliseconds matter, like stock market trading or transaction processing.  Those industries go all flash without a second thought.

The middle ground?

So now we reach the middle ground, where the argument gets heated.  Hybrid arrays replace the fastest tier of storage with SSD, or use large amounts of SSD as a cache to buffer regular hard drives.  Manufacturers use SSD to take the place of those good ol' 15K drives, as well as some of the 10K drives, too.  The larger and slower SATA drives remain the workhorses for capacity.  Older, slower data goes there to die.  Or at least to drive your backup software crazy.

So, where does all this leave us?  Should we go ahead and use all flash since it is the wave of the future?  Since I will be replacing my array as I outgrow it, should I buy affordable now, and look to put in all-flash when it is the standard?  Assuming that I am not a government agency with black-budget trillions to spend, how much SSD is too much SSD?  Looking forward to your comments.


Where did the 15K disk drive go?

Image courtesy of Suriya Kankliang

Just a few years ago, everyone wanted disk drives that spun at 15,000 RPM, commonly known as "15K disk".  Why did people want these?  Well, the faster the spindle turned, the lower the rotational latency, and the faster the reads and writes to that disk.  Since I never worked at any of the drive manufacturers, I can't really speak to the details, but I do take it on faith.  So when everything on a storage array was spinning disk, why did people want "15K spindles" in the lineup?  And now that SSD has become so popular, why don't I really see them anymore?

Why do I want expensive, small disks?

The reason that everyone wanted 15K disk drives was pretty straightforward.  The disks themselves were fairly small in capacity (600 GB being a standard size) and expensive per gigabyte, but they were FAST.  If there was a target IOPS number for a storage array, it was easier to balance size and speed with a mix of 15K disk, 10K disk, and standard 7.2K SATA drives: speed from the smaller drives and capacity from the slower drives.  While everything was acceptable ON AVERAGE, the laws of physics still applied to the different speeds of disk, so there was a bit of balancing to do.  You could put your fast-access volumes on 15K, but you still needed the SATA drives for the larger storage requirements.  This solution worked, but it was expensive – and a bit "hands-on".

A few manufacturers even started to offer ILM with these systems.  That means "hot" or active data is written to the 15K disk drives, since the write speed on those is theoretically the fastest, and your storage appliance spreads writes across the aggregate of your SAN environment.  Once this data lands on the fastest disk in your SAN or NAS, it stays there for a bit, the logic being that the fastest tier also has the fastest read times and therefore the best performance when you want to recall that data.  These ILM vendors then move the data off of the fastest tier to a slower tier as the data becomes less active, or "ages".  This lets you store older, less-accessed data on the slower and less expensive tiers of storage.  Because the database has to run quickly, but who cares if it takes accounting a week to get the data for their year-end reporting, right?  Remember that the next time you need an expense report reimbursed!
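Conceptually, the aging logic is simple.  Here is a file-level Python sketch of the idea: anything not accessed within a cutoff moves from the fast tier to a slower, cheaper one.  Real arrays do this at the block level with far more intelligence, and the mount points here are placeholders.

```python
import os
import shutil
import time

def demote_cold_files(fast_tier, slow_tier, max_age_days=90):
    """Crude ILM: anything not accessed in max_age_days moves from the fast tier
    to the slower, cheaper tier, leaving hot data on the expensive spindles or flash."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _, filenames in os.walk(fast_tier):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getatime(path) < cutoff:        # last access time
                rel = os.path.relpath(path, fast_tier)
                dest = os.path.join(slow_tier, rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)

# demote_cold_files("/mnt/tier0", "/mnt/tier2", max_age_days=90)   # hypothetical mount points
```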

The next step

Then SSD entered the market – at an affordable price, that is.  Not only could manufacturers use SSD for caching, but the drives were large enough to serve as the fastest tier of data storage in an ILM strategy.  And the form factor of SSDs allows them to be used in existing storage appliance enclosures, JUST LIKE spinning disk.  Now, instead of expensive 15K disks, you could put in units with the same form factor that read and write several hundred times faster than disk.  With the speed and storage capacity of SSD, 15K disk became unnecessary in storage appliances.

But I still see some 15k disk out there…

You will still see 15K disk used in local solutions.  A 15K SAS RAID 5 array is quite speedy when used in a local physical server, and virtualization hosts or database servers will often use 15K spindles as disk targets: they need sizable storage capacity and quick access.  However, the cost of SSD is coming down, which makes it easier to justify installing SSD disks or arrays in physical servers.  Seagate has stopped developing new models of its 15K disk.  Storage technology previously leapt from tape to HDD for large data stores like disaster recovery; now the move from high-speed disk to SSD will likely accelerate, driven by technology that increases access speed, reduces manufacturing costs, and increases storage capacity.  So long, 15K disk, we hardly knew ya!
