The Storage Study – or How do I determine what my environment is using?


In a previous article, we discussed three important questions to answer about YOUR environment before jumping into a storage appliance.  In this article, we will delve deeper into the first question we asked, “How fast and how much storage do you need?”.  This article is designed for the IT generalist – someone who is looking for insight into how to conduct a storage study.

So – how do I tell what I need?  The first step is to do a storage study.  The storage study is done in your environment over a period of around seven days.  Why seven days?  Because that will capture an entire work week of your environment.  And by work week, I mean those weekends that systems guys work and backups run on as well.  Is Saturday a full-backup day?  You want to see what the impact is on your systems.  Perhaps accounting prepares reports for payroll on Wednesdays.  Usually, a seven-day sampling of your storage needs will account for standard practices within your environment without creating capture files that are massive in size.

If you would like to capture more days than seven, break it out into multiple capture files of seven days.  Perhaps doing multiple sampling weeks during significant system events would reveal more details about your environment.  End of the quarter accounting processing?  Start of a new production cycle?  You decide.

The storage study should include several important takeaways, collated for the whole environment and also broken out by host or server.  The four important metrics are IOP/s, Latency, Storage Footprint, and information on new (or “hot”) data.  We will delve a bit deeper into what each of these means below.

Input/output OPerations per Second (IOP/s)

What is an IOP, and what does it mean to my environment?  IOP/s, simply put, are a measure of how many storage operations your host is doing every second.  IOP/s can be misleading, though.  While a single read operation generally takes 1 IOP, a write to disk can use up to 6 IOP/s for the same bit of information.  Why this happens is a bit technical, so your relevant question should be “How do I account for this?”

In addition to the overall IOP/s number, most studies will include a read vs. write percentage.  This is usually written as 65/35, meaning 65% of operations across the study were reads and 35% were writes.  This percentage determines how exactly to account for the IOP/s that were collected.  Of primary importance to the IOP/s study, though, is the IOP/s over time.  This will help determine when the busy parts of the day are.  You should see numbers for absolute peak (meaning that this was the largest IOP/s event during the entire sampling period), and several percentiles.
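To see how the read/write split and the write cost interact, here is a minimal sketch of converting front-end IOP/s into the back-end operations the disks actually serve.  The numbers (5,000 IOP/s, a 65/35 split) are hypothetical, and the write penalty of 6 is an assumption drawn from the “up to 6 IOP/s per write” figure above – it corresponds to a typical RAID 6 layout, and other layouts use different penalties.

```python
# Hypothetical example: translate front-end IOP/s into back-end disk IOP/s.
# Each write costs several back-end operations (the "write penalty");
# 6 here is an assumed RAID 6-style penalty, not a universal constant.

def backend_iops(total_iops, read_fraction, write_penalty):
    """Back-end IOP/s the disks must service for a given front-end load."""
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction)
    return reads + writes * write_penalty

# 5,000 front-end IOP/s, 65/35 read/write split, write penalty of 6
print(backend_iops(5000, 0.65, 6))  # 3250 reads + 1750 * 6 writes = 13750.0
```

This is why a write-heavy 65/35 workload needs a noticeably faster appliance than a 90/10 workload with the same headline IOP/s number.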

The 85th percentile number is what is usually used to determine how to size your system.  You can certainly size your system to accommodate your peak IOP/s, but usually this is more appliance than you really need.  It follows the same logic of building a house that is above a 500 year flood plain.  Sure, your house won’t get flooded out (statistically speaking) for 500 years, but will the house even be standing by then?
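As a rough sketch of how a peak and an 85th percentile fall out of the same data, here is a nearest-rank percentile over a list of per-interval IOP/s samples.  The sample values are made up for illustration; real studies collect far more intervals.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of per-interval IOP/s samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical IOP/s samples from the collection period
samples = [800, 950, 1200, 4200, 1500, 1100, 900, 7000, 1300, 1000]
peak = max(samples)            # the single largest event in the study
p85 = percentile(samples, 85)  # the usual sizing target
print(peak, p85)               # 7000 4200
```

Sizing to the 85th percentile (4,200 here) instead of the peak (7,000) is the “500 year flood plain” trade-off in numbers: you accept that the rare spike will briefly saturate the appliance rather than paying for idle headroom the rest of the year.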

Latency

OK – we know about how many IOP/s our systems are using in the course of our storage study.  Now, how long is it taking those IOP/s to be serviced?  In essence, your systems are issuing commands to your storage, but how long does it take your storage to complete each command?  Is that number acceptable?  Latency is usually measured in milliseconds, and lower is better.

Peak and trending latency are important.  If peak latency reaches 100ms, there is cause to investigate further.  Most applications are not tolerant of high latency.  High latency shows up in database record access times, or the spinning wheel/hourglass of uncommitted data.  It can be a bit tricky to run down exactly where the slowdown is occurring.  Our primary concern with this storage study is confirming that it is NOT happening along the disk I/O path.  Common culprits are slower disks, inadequate system RAM, and older CPUs.

If you start to see this number trending up, or if you see spikes during the day, that is indicative of a problem somewhere in your system.  While your disk storage may not be the bottleneck, we would like to be able to rule it out.  Your planned storage appliance should be sized to accommodate any extra load.
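A simple way to spot those spikes is to flag every collection interval whose latency exceeds a threshold, then line the flagged intervals up against the IOP/s timeline.  The 100ms threshold comes from the article above; the sample values are hypothetical.

```python
# Flag latency samples (in milliseconds) that exceed a threshold so they
# can be correlated with the IOP/s-over-time data from the same study.

THRESHOLD_MS = 100  # the "cause to investigate" level discussed above

def latency_spikes(latencies_ms, threshold=THRESHOLD_MS):
    """Return (interval_index, latency_ms) pairs above the threshold."""
    return [(i, ms) for i, ms in enumerate(latencies_ms) if ms > threshold]

# Hypothetical per-interval latency readings
readings = [4, 6, 5, 180, 7, 5, 120, 6]
print(latency_spikes(readings))  # [(3, 180), (6, 120)]
```

If the flagged intervals coincide with backup windows or batch jobs found in the IOP/s data, the disk I/O path is a likely suspect; if not, look at RAM, CPU, or the application itself.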

Overall Storage footprint

Overall footprint is straightforward.  How much stuff do you have stored on all of your systems?  You will see this represented both by server and for the entire environment that you collected.  This is often represented as a total amount of space in the environment – all the space on all the hard drives.  The amount of used vs. free space is important.  This tells you how much of all that spinning disk you have filled with your data.  A small amount of data on a fast, expensive disk or disks is not cost effective.

If you conduct multiple storage studies, compare the amount of used space from one study to another. This will give you an idea of how quickly your environment is growing.  Most of the storage study tools out there will collect information on each disk individually.  This allows you to drill down to the application level.  Find those greedy disk hogging applications quickly.
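Comparing two studies boils down to a simple percentage.  The used-space figures below are hypothetical, but the calculation is the one you would run between any two week-long studies.

```python
# Weekly growth rate from two consecutive storage studies.
# The 4,000 GB and 4,080 GB used-space totals are hypothetical examples.

def weekly_growth_pct(used_prev_gb, used_curr_gb):
    """Percentage growth in used space between two studies."""
    return (used_curr_gb - used_prev_gb) / used_prev_gb * 100

print(round(weekly_growth_pct(4000, 4080), 2))  # 2.0 -> ~2% growth per week
```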

This metric will help you to determine how much overall space you should put into your storage appliance.

“Hot Data”

Hot data is data that is accessed, changed, or newly written by your systems within the storage study collection period.  In essence, this is the data that your applications used during the study.  All other data sat untouched during this time, but may still be necessary to keep.  Hot data contains clues into how much your overall data needs may be growing every week.

Hot Data also answers the question of how fast your storage needs to be.  Writing data puts more of a strain on a system than reading data.  Hence, we need a faster system the more writing we do.  Hot data also gives us a rough estimate of what new data was written on the system.  This allows us to extrapolate what your storage needs will look like in a quarter or a year given your current rate of growth.

One important aspect that hot data drives is the speed of the storage appliance.  The higher the percentage of hot data the faster the storage appliance needs to be.  The larger the overall amount of hot data, the faster the storage appliance needs to be.  These are important considerations in correctly sizing storage appliances for both size and speed.

Accurate growth rates allow us to properly size the overall storage capacity of a storage appliance.  No one wants to buy too little space right off the bat.  But it is also prudent that we not buy too much storage at the onset of the project.  Storage prices go down every year as capacity goes up.  It is cheaper to buy storage as you need it than to buy it all up front.
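The extrapolation mentioned above can be sketched as simple compound growth.  Both inputs are hypothetical – a weekly growth rate taken from the hot-data figures of your own study would replace the 2% used here.

```python
# Project future used space by compounding a weekly growth rate.
# 4,080 GB current usage and 2% weekly growth are hypothetical inputs;
# substitute the figures from your own storage studies.

def projected_used_gb(current_gb, weekly_growth_rate, weeks):
    """Estimate used space after a number of weeks of compound growth."""
    return current_gb * (1 + weekly_growth_rate) ** weeks

current = 4080   # GB used today
rate = 0.02      # 2% new data per week, per the study

print(round(projected_used_gb(current, rate, 13)))  # ~one quarter: 5278 GB
print(round(projected_used_gb(current, rate, 52)))  # ~one year:   11425 GB
```

Note that 2% per week compounds to far more than double over a year – which is exactly why buying capacity in stages, as prices fall, usually beats buying it all on day one.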


A storage study is the first step in determining your needs in a storage appliance.  This report generates many of the metrics that are required to correctly size a storage appliance.  The numbers generated will give us ideas of how much disk space overall we need, and how fast that disk space needs to be.

We have discussed IOP/s, Latency, Storage Footprint, and information on new (or “hot”) data in this article.  Once you have collected these metrics, analyze them.  Next, let’s see how various applications affect our storage needs.
