AR Consultant Group was interviewed about the new Scale Computing offering combining Hyperconverged Infrastructure and Google Cloud Services.
Read the article from Computer Reseller News:
I recently had the opportunity to review some information collected by TechValidate on an HCI vendor, Scale Computing. Now, for full disclosure, AR Consultant Group is a partner of Scale Computing. Scale Computing makes an HCI product marketed under the product family of HC3. Even so, data collected by TechValidate is pertinent to HCI solutions across the board. After reviewing the data and how it was presented, I found it not only easy to research, but a great way to show potential customers and even those mildly curious the advantages of HCI.
Many are not familiar with HCI, or HyperConverged Infrastructure. It is the combination of compute resources, storage, and hypervisor (without separate hypervisor licensing costs) in a single preconfigured package. Certainly, there are other things that vendors add in order to differentiate themselves, but these three are standard within the hyperconverged solution set. In this particular instance, Scale Computing targets the SMB community: businesses with between 100 and 500 employees and 1 to 5 IT staff.
First and foremost, the data bowled me over. Not the actual data itself, but the method with which the data is presented. If you haven’t seen the TechValidate package, then you certainly should. It is a great way to present data and customer opinions. TechValidate surveyed customers after purchase on what product advantages they found and other traditional data points. They then presented this data using real-life Scale customers. The company profiles also back the data up. Seems like an innovative way to proactively publish data that allows customers and prospective clients to investigate specific data points that interest them – from people that are actually using the product.
The graphs provided by TechValidate center around challenges that are solved by the Scale HC3 solution. Also charted are what benefits the customer perceives from a hyperconverged solution. First, let’s examine the Operational Challenges data.
Operational challenges solved by hyperconverged technologies from Scale fell primarily into two categories – improvement of processes and reduction of cost or complexity.
The improvement-of-process challenges appear to revolve around the benefits of virtualization in a preconfigured clustered setup. By handling the hardware and software clustering aspects of virtualization, hyperconverged solutions allow for hassle-free improvements to customer processes. Server hardware clustering and failover, failover of other infrastructure components, and disaster recovery all became much simpler. In other words, the manufacturer made these benefits easy for customers to implement.
Reducing the cost and complexity of the infrastructure lets customers avoid the expense of purchasing everything separately. It also reduces time spent administering each system individually, and it simplifies support by consolidating everything under a single vendor support contract.
By making the IT function more efficient and getting more value for the budget, this survey addresses many of the main concerns of staff and management of the SMB.
A follow-up survey asked customers of Scale Computing about the actual business benefits they found from implementing HC3. Again, these fell into two basic categories – Ease of use and improvement to the information technology environment.
Ease of use is the largest benefit by a wide margin. Making the product easy to use increases interest from customers: “Hey, this will work for me.” It also delivers a direct benefit to the customer. When tasks are easier, they cut down on after-hours and weekend work and free up time to pursue other projects. It is also easier to train new staff on how to support the system. Believe me, coming from a guy who carried a weekend pager and supported physical servers, these are huge benefits.
Improvement of environment encompasses many different benefits that customers found, including improved reliability, scalability, and high availability of business-critical workloads. While these benefits are available to any company, the ability of a single product to bring them all together is a game changer. It is now possible to get these benefits from a single package that works in your environment, with a minimum of stress, and that is both expandable and less expensive than building it a la carte.
It is refreshing to see actual, verifiable customer feedback from a third party rather than marketing slicks: data that extols the value of both HCI and Scale Computing’s implementation of it. This customer feedback is available in condensed form, with the ability to dive deeper into the data, so potential customers can research by industry, geographical location, and company size. These are real-world data points from customers, not a marketing department.
OK, you’ve researched and spent the money, and now you are the proud owner of a hyperconverged system – or any truly virtualized system, really. (If not, why not?) So how do you get all those physical servers into virtual servers, a process commonly referred to as P2V conversion?
Well friend, pull up a chair and let’s discuss how to convert your systems.
There are three ways to get those physical boxes that now crowd your data closet and make it a hot, noisy mess into a virtualized state. Which method you choose will depend on several factors. The three are not mutually exclusive, so feel free to use several of these P2V conversion methods in your environment, depending on the specific requirements of each physical server.
Rebuild the Server. Over the course of time, and after around four dozen service pack updates, there is a lot of trash in a system. Sometimes it is better just to start fresh. This method is best if you would like to update the underlying operating system, or if you are starting to see strange system behaviors. It works well for standard services like DNS, domain controllers, or application servers for which you have clear ways to transfer just the data and configuration files. A clean install of the operating system and application services is a great way to breathe fresh life into an old, tired workhorse of a physical server.
Utilities. There are as many utilities out there to manage P2V conversions as there are stars in the sky. Everyone has their particular favorite. In essence, these utilities make a disk image copy of your system and convert it to an ISO image, or even into virtual server disk formats. It is the same concept as bare-metal restores. These utilities make an exact copy of your application servers, so all the data and application files stay the same, but so do any strange behaviors that may exist within your Operating System. If your server is operating well, this may be the choice for you.
Unfortunately, these utilities require that your server be offline while the copy is made. So plan for a long weekend while this gets done, and make sure your users are aware that the IT department is closed for business while it happens. This is probably not the approach for highly available services that NEED to be up all of the time, like your 911 systems or the servers that control ATMs.
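How long is that “long weekend”? You can rough it out yourself from the disk size and a sustained transfer rate. A minimal back-of-envelope sketch – the 2 TB server and 100 MB/s gigabit-Ethernet throughput below are hypothetical figures, not measurements from any particular utility:

```python
def image_copy_hours(disk_size_gb: float, throughput_mb_s: float) -> float:
    """Rough time to stream a full disk image at a sustained throughput."""
    seconds = (disk_size_gb * 1024) / throughput_mb_s
    return seconds / 3600

# Hypothetical: a 2 TB server imaged over gigabit Ethernet (~100 MB/s sustained)
print(round(image_copy_hours(2048, 100), 1))  # roughly 5.8 hours, excluding verification
```

Double that for a verify pass, add time for the restore into the virtual environment, and the weekend fills up quickly.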
Dedicated software exists for those servers that need to be virtualized but can’t be down for the hours that the disk utilities discussed above may take. These utilities are paid products, but they fill a need not addressed by the disk image utilities. They often operate like a highly available failover pair: agents are loaded on two servers, one physical server holding the information you wish to virtualize (the “source”) and one virtual server with only an OS and the agent (the “target”).
In this scenario, the utility makes a full “backup” from the source server to the target server. Then changes propagate from the source to the target on a regularly scheduled basis. When the cut-over occurs, the physical server goes down, and the virtual server comes up as an exact copy, often down to the same IP addressing. This cutover can often happen in only minutes.
We have discussed the three ways that new hyperconvergence or virtualization shops can convert their physical servers to virtual servers. Building new servers, using disk imaging utilities, and highly available utility agents all have pros and cons to address. These three conversions move your physical servers to virtual servers and get you the benefits from virtualization.
I wrote an article just a few days ago entitled “Where did the 15k disk drive go?” It was a short piece, quickly done and meant to draw fairly obvious conclusions. When given a choice between faster and fastest, for the same or close money, people will always choose fastest. Little did I suspect the sheer amount of comments and emails that I would get from that article. It appears that everyone has an opinion on storage technology and how storage vendors build out their appliances. So, in the spirit of keeping the discussion going, I’ve decided to ask the flip side of most of the comment and email subjects. “If 15k drives are dead, then how much SSD is too much?” Let the games begin!
I heard a Texan once say, “How much is too much? Well, if it’s money in the bank or cows on the ranch, you can never have too much!” He was talking about things that directly affected his performance as a cattleman and his ability to do his job. The same can be said for SSD in the ever-changing storage arena of business. How much is too much SSD in a storage array or on a server? I’m not talking about the sheer amount of physical space – that depends on the applications and data repositories involved, plus a little bit for growth. What I am talking about is a percentage: of 100% of the storage space on a given server or storage appliance, just what percentage should be SSD – fast but expensive?
In my opinion, much will depend on a storage study. How many IOPS does your environment need so that storage is not the bottleneck? Is there too much latency in your SAN or NAS? If you don’t know the answers to these questions, then a storage study should be your next step. Check out my article here. SSD tends to be the most expensive option on a dollars-per-gigabyte basis, but that cost is coming down as manufacturing processes mature and get more efficient. But we all work in the here and now, so as of today, how much SSD is too much in your SAN, NAS, or hyperconverged appliance?
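If you want a rough starting point before a formal storage study, you can sum peak per-server IOPS (gathered from perfmon or iostat sampling) and add headroom for growth and bursts. A minimal sketch – the server names, IOPS figures, and 30% headroom below are hypothetical:

```python
def required_iops(servers: dict[str, int], headroom: float = 0.3) -> int:
    """Sum per-server peak IOPS and add headroom so storage isn't the bottleneck."""
    base = sum(servers.values())
    return int(base * (1 + headroom))

# Hypothetical SMB workload from a week of perfmon/iostat sampling:
workload = {"sql-server": 1500, "file-server": 300, "exchange": 800, "web": 200}
print(required_iops(workload))  # 2800 x 1.3 = 3640 IOPS target for the array
```

A number like this is what lets you compare an all-spinning array, a hybrid, and an all-flash box on equal footing.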
I have seen several examples of SSD ratios, all aided by software in one form or another. These fall into two camps at either end of the spectrum.
To start, there are storage appliances with no SSD at all. These are fairly simple, and I don’t see them around much. If all you need is an array of large disks spinning merrily along, and your storage goals are met, do you really need SSD? I have been in proof-of-concept trials where SSD made no difference in system performance until the programmers changed the application code to make it more parallel.
Then there is the “all flash, all the time” argument. I am familiar with one storage array vendor that sells an all-flash array with compression and de-duplication and claims that, across the environment, the cost per used gigabyte is cheaper than their hybrid array (which does not offer any compression functionality). Of course, with de-duplication your mileage may vary, but that makes a compelling argument for all flash. There are certain industries where milliseconds matter, like stock market trading or transaction processing. Those industries go all flash without a second thought.
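That vendor’s math is easy to sanity-check for your own environment. A minimal sketch – the prices and the 4:1 data-reduction ratio below are hypothetical, and the whole comparison hinges on the ratio your actual data achieves:

```python
def cost_per_used_gb(raw_gb: float, price: float, dedup_ratio: float) -> float:
    """Effective $/GB once dedup/compression multiplies usable capacity."""
    return price / (raw_gb * dedup_ratio)

# Hypothetical: 10 TB raw flash at $20,000 with 4:1 data reduction...
flash = cost_per_used_gb(10_000, 20_000, 4.0)   # $0.50 per effective GB
# ...versus 10 TB raw hybrid at $8,000 with no data reduction
hybrid = cost_per_used_gb(10_000, 8_000, 1.0)   # $0.80 per GB
print(flash < hybrid)  # True with these numbers; mileage varies with your dedup ratio
```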
So now we reach the middle ground, where the argument gets heated. Hybrid arrays replace the fastest tier of storage with SSD, or use large amounts of SSD as a cache to buffer regular hard drives. Manufacturers use SSD to take the place of those good ol’ 15k drives, as well as some of the 10k drives, too. The larger and slower SATA drives remain the workhorses for storage. Older, slower data goes there to die – or at least to drive your backup software crazy.
So, where does all this leave us? Should we go ahead and use all flash since it is the wave of the future? Since I will be replacing my array as I outgrow it, should I buy affordable now, and look to put in all-flash when it is the standard? Assuming that I am not a government agency with black-budget trillions to spend, how much SSD is too much SSD? Looking forward to your comments.
Just a few years ago, everyone wanted disk drives that spun at 15,000 rpm, commonly known as “15k disk”. Why did people want these? Well, the faster the spindle turned, the shorter the seek times, the lower the latency, and the faster the writes to that disk. Since I never worked at any of the drive manufacturers, I can’t really speak to the truth of this, but I do take it on faith. So when everything on a storage array was spinning disk, why did people want “15k spindles” in the lineup? And since SSD has become so popular, why don’t I really see them anymore?
The reason everyone wanted 15k disk drives was pretty straightforward. The disks themselves were fairly small in capacity (600GB being a standard size) and expensive per gigabyte, but they were FAST. If there was a target IOPS for a storage array, it was easier to balance out size and speed with a ratio of 15k disk, 10k disk, and standard 7.2k SATA drives: speed from the smaller drives and space from the slower drives. While everything was acceptable ON AVERAGE, the laws of physics still applied to the different speeds of disk, so there was a bit of balancing that had to happen. You could put your fast-access volumes on 15k, but you still needed the SATA drives for the larger storage requirements. This solution worked, but it was expensive – and a bit “hands-on”.
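The physics behind spindle speed is simple to work out: on average, the head waits half a revolution for the right sector to come around, so rotational latency falls directly with RPM. A quick illustration of why 15k spindles were worth the premium:

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (7200, 10_000, 15_000):
    print(rpm, round(avg_rotational_latency_ms(rpm), 2))
# 7200 rpm -> 4.17 ms, 10k -> 3.0 ms, 15k -> 2.0 ms
```

Halving that 4 ms wait mattered when spinning disk was the only option; an SSD’s sub-millisecond access time made the whole trade-off moot.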
There were even a few manufacturers that started to offer ILM (Information Lifecycle Management) with these systems. This means that “hot” or active data is written to the 15k disk drives, since theoretically the write speed on these is fastest, so your storage appliance writes faster across the aggregate of your SAN environment. Once this data is written to the fastest disk on your SAN or NAS, it stays there for a bit, the logic being that the fastest disk also has the fastest read times, and therefore the best performance when you want to recall that data. These ILM vendors then move the data from the fastest tier of disk to a slower tier as it becomes less active or “ages”. This lets you store older, less-accessed data on the slower and less expensive tiers of storage. Because the database has to run quickly, but who cares if it takes accounting a week to get the data for their year-end reporting, right? Remember that the next time you need an expense report reimbursed!
Then SSD entered the market. At an affordable price, that is. Not only could manufacturers use SSD as caching, but they were large enough that manufacturers could also use them as the fastest tier of data storage in an ILM strategy. And the form factor of SSD disks allows them to be used in the existing storage appliance enclosures – JUST LIKE spinning disk. Now, instead of expensive 15k disks, you could put in units in the same form factor that would read and write several hundred times faster than disk. With the speed and storage capability of SSD, it became unnecessary to use 15K disk in storage appliances for speed.
You will still see 15k disk used in local solutions. A 15k SAS RAID 5 array is quite speedy in a local physical server, and virtualization hosts or database servers will often use 15k spindles as disk targets when they need sizable storage capacity and quick access. However, the cost of SSD is coming down, which makes it easier to justify installing SSD disks or arrays in physical servers. Seagate has stopped development of new models of its 15k disks. Storage technology once leapt from tape to HDD for large data stores like disaster recovery; now the move from high-speed disk to SSD will likely accelerate, driven by technology that increases access speed, reduces manufacturing costs, and grows storage capacity. So long, 15k disk, we hardly knew ya!
By now, everyone realizes the advantages of server virtualization. Flexibility in the face of rapidly changing technology, reduction in administrative effort for busy IT staff, and cost savings from reducing physical machines are just the beginning. As you may have heard, hyperconverged infrastructure solutions offer all of these advantages, plus the added benefit of simplicity in your environment.
This article is targeted towards small to mid-sized business: 50 to 500 employees supported with 1-5 or so staffers in the IT department. These IT shops don’t rely on specialists, but a few really good “jack-of-all-trades”. If you are looking for a way to bring this up with the boss, make sure to see the article written for the senior directors or owners in the business here.
So – there are a lot of different hyperconverged vendors out there and a lot of solutions. If you believe the literature and web demos, they will all do everything you need in your environment. How do you know which is the best and what to look out for?
As with everything in life, the answer is: it depends. No one can answer the question of which is best for you without intimate knowledge of your environment, which probably only you have. What I can do is provide some questions you might want to ask the various solution providers. These may help you determine which solution works best for your organization, and which one management will sign off on.
Here are 5 questions to help you in your inquiry.
Well, not literally, but what does this solution entail? How many servers of MINE will this solution cover, and how much extra capacity will I have? Are there any extras that might later cost me money or maintenance fees? Are installation services needed and possibly included in this solution? Is high availability between hardware units included in this quote? The answers to these questions may not make or break the solution for you, but you should know what you are getting for your money. You need to be able to present this effectively to management so no one gets any unpleasant surprises later. Maybe you only need a barebones system right now. That’s fine, but make sure that you know what is included and what everyone’s expectations are.
There are a few main solutions out there and they all handle this differently. Many manufacturers of these solutions OEM hypervisors, so ask how that affects the cost of your unit(s). Is there the possibility of having to purchase additional software licenses in order to expand? Are all of the management consoles and utilities provided under the license of the hyperconverged product? If not, what isn’t included that I may want, and where can I get it? Do I need to deal with the hyperconverged manufacturer, or do I have to drag another vendor into this? How many vendors are involved in this solution and who do I call if I need support? Are there different tiers in the number of licenses? What do my maintenance costs look like 3 and 5 years out? If my server count grows by 20% per year, what additional costs will I encounter? Most solutions providers will be more than happy to work these numbers for you, and your management will love your forward thinking “strategic planning”.
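That 20%-per-year growth question is easy to rough out yourself before asking a solutions provider to run the numbers. A minimal sketch – the starting count of 20 VMs and the 20% growth rate are hypothetical, just to show the compounding:

```python
import math

def projected_servers(current: int, growth_rate: float, years: int) -> int:
    """Server count after compounding annual growth, rounded up."""
    return math.ceil(current * (1 + growth_rate) ** years)

# Hypothetical shop with 20 VMs growing 20% per year:
for year in (3, 5):
    print(year, projected_servers(20, 0.20, year))
# year 3 -> 35 VMs, year 5 -> 50 VMs
```

Run your real numbers against each vendor’s license tiers and maintenance schedule, and the 3- and 5-year cost comparison practically writes itself.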
Hyperconverged infrastructure solutions are all about making things simple, right? Find out. Get to know how this particular solution works. You don’t need to see the actual code, but it might be nice to know conceptually how everything fits together. Does this solution come with any training? Is training required? Is training an extra cost? Are basic functions like setting up virtual machines, virtual disks, and virtual NICs intuitive? What about more advanced tasks? That pesky application that we have that demands VLAN tagging, how does this solution support that? Can I do every task I need to do from the management interface? How easy is this product to use for non-pre-sales-engineers-that-don’t-work-for-the-manufacturer?
OK – we are looking at this solution because recovery and business continuity are supposedly made much easier with this. Can I stop dropping by the office after hours and on weekends to do silly little server tasks, like rebooting crashed boxes… for payroll… at the end of the month? How does this solution help me with recovery tasks? How does it handle a crashed server? How does the solution handle network failures, disk failures, or whole server failures? Can I SEE it demonstrated live? How will this solution affect my existing backup strategy? Will my current backup solution work, or does this solution include something that replaces it? Does it do native snapshots? How many? Will it replicate those snapshots somewhere automagically? How can my existing DR plan be improved with this solution?
Everyone has a constantly changing environment. How does this solution handle growth and changing needs? What does it take to add 20% capacity to this solution? How much does it cost, and how easy is it to do? Will I have to stop production or do it at 3am? Do I need additional chassis to do this, or can I upgrade the units internally? Will this require downtime? What if I want to start moving things to the edge of my infrastructure? How flexible is this product? Do I have the ability to add more memory, CPU, or plain disk to this solution independent of purchasing the next model? What is the roadmap for this product line – Flash disk, software, and NIC speeds?
Hyperconverged infrastructure promises to be an amazing step in the IT virtualization lifecycle. There are different capabilities and features in all of the various solutions. You just need to ask a few questions to figure out which one is right for you. Not just right for you right now, but right for you in 3 to 5 years. Only after you can answer the questions above will you be able to enjoy the REAL benefits of simplicity that hyperconvergence provides.
Hyperconvergence is the newest IT architecture removing both cost and complexity from virtualization infrastructure. This article assumes you are aware of the advantages of hyperconvergence and how it applies to the business end of your small to medium business. What we are going to discuss is how to ensure that you actually get the TRUE advantages of hyperconvergence, not just the ones the fancy marketing papers promise.
A small to medium business (SMB) doesn’t mean just a tiny kiosk in the mall with a single POS computer. We’re talking about SMB in terms of 50 to 500 employees with an IT staff of up to 5 full- or part-time staffers.
There are a lot of claims out there around hyperconvergence technologies. At the top of the list is reducing costs. It also claims to create a simpler environment for your IT staff, increasing productivity. As the business owner, what questions do you need to ask to ensure that your hard-earned capital is well spent?
Among all the claims, there are 5 things that you need to look for in a hyperconverged solution to ensure that your solution brings everything to your business that it can.
One of the claims of hyperconvergence is simplification of the solution. This is potentially achieved by eliminating the multitude of vendors that are part of a traditional virtualized solution. This solution involves how many vendors? Where do the individual responsibilities of each vendor start and stop? Will you need multiple support contracts, or is everything covered under one master contract? Is there a central support number to call, or is there the possibility of finger-pointing between various manufacturers? In this vein, is the solution the intellectual property of one company, or are there different licensing agreements in place? How could this affect YOUR investment in the event of a manufacturer bankruptcy?
The initial install of the solution is probably correctly sized for your business. What happens if you need to expand that installation? If you need more virtual servers, or to add more users, are there going to be any additional license fees (VMware)? What about yearly maintenance fees – will those grow, too? What if we expand and I want to add virtual servers at another location? Are my licenses “tiered”, or do they get more expensive for additional functionality or when I hit a certain license count? These are not necessarily deal-breakers, but forewarned is forearmed. It sure helps to have a reliable idea of licensing costs when budget time rolls around.
Hyperconverged solutions come in all shapes and sizes. Different solutions exist for a dozen virtualized servers and for several hundred. Whichever you have is not as important as the answer to this question: is the solution expandable? Does it have the ability to cover your business as it grows, without the dreaded “fork-lift upgrade” that means downtime for the profit centers of your business? In addition, if upgrades are possible, do they involve downtime? Can your sales department keep selling while the upgrades occur?
Sure, everyone will be more than happy to install this beast once you have signed on the dotted line, but just how complex is that installation? Can we operate on the existing systems and minimize downtime while the installation occurs? How complex is the switchover to the new systems (Easily migrating VMs or data)? Can your IT staff shadow the installation? Is it easy enough that they can do it themselves with just a bit of guidance? Can your staff expand the system, or will you need outside help?
Now that we have it and everything is running, just how difficult is it to get my IT staff up to speed on the product? Is there additional training that will take my staff off site in order to learn how to use this product? Once I train my staff, am I in danger of losing them to a competitor willing to pay more for those certifications? When we add additional virtual servers to the environment, will my staff be able to do that? How difficult is it, and how long will it take? Since my staff isn’t as large as some of the big-guys, how difficult is it to cross-train?
Hyperconvergence is an amazing leap forward for IT virtualization. Correctly sized, designed, and implemented it promises a lot to the small to medium business. But like most things in life, one size doesn’t necessarily fit all. Spending money wisely requires due diligence. Make sure the business squeezes all of the value that you paid for from this solution. Address the questions around vendors, licensing, systems expandability, installation and ease of use.
Engage with the manufacturers and ask the solutions provider the next step questions addressed in this article. This will ensure that you enjoy the advantages advertised while getting the exact solution to benefit your business NOW.