The case for HCI – a user survey review

I recently had the opportunity to review some information collected by TechValidate on an HCI vendor, Scale Computing. For full disclosure, AR Consultant Group is a partner of Scale Computing, which markets an HCI product under the HC3 product family. Even so, the data TechValidate collected is pertinent to HCI solutions across the board. After reviewing the data and how it was presented, I found it not only easy to research, but also a great way to show potential customers, and even the mildly curious, the advantages of HCI.

Many are not familiar with HCI, or HyperConverged Infrastructure. It is the combination of compute resources, storage, and a hypervisor (without separate hypervisor licensing costs) in a single preconfigured package. Certainly, vendors add other things to differentiate themselves, but these three are standard across the hyperconverged solution set. In this particular instance, Scale Computing targets the SMB community: businesses with between 100 and 500 employees and 1 to 5 IT staff.

The Data

First and foremost, the data bowled me over. Not the actual data itself, but the method with which it is presented. If you haven’t seen the TechValidate package, you certainly should; it is a great way to present data and customer opinions. TechValidate surveyed customers after purchase on what product advantages they found, along with other traditional data points, then presented the results using real-life Scale customers, with company profiles to back the data up. It is an innovative way to proactively publish data that lets customers and prospective clients investigate the specific data points that interest them, from people who are actually using the product.

The Results

The graphs provided by TechValidate center around challenges that are solved by the Scale HC3 solution.  Also charted are what benefits the customer perceives from a hyperconverged solution.  First, let’s examine the Operational Challenges data.

Operational Challenges Solved by HCI

Operational challenges solved by hyperconverged technologies from Scale fell primarily into two categories: improvement of processes and reduction of cost or complexity.

The process-improvement challenges appear to revolve around the benefits of virtualization in a preconfigured clustered setup. By handling the hardware and software clustering aspects of virtualization within the hyperconverged package, these solutions allow for hassle-free improvements to customer processes. Server hardware clustering and failover, failover of other infrastructure components, and disaster recovery all became much simpler. In other words, the manufacturer made these benefits easy for customer businesses to implement.

Customer Content verified by TechValidate.


Reduced cost and complexity means customers avoid the expense of purchasing everything separately. It also reduces time spent administering all the systems separately, and it reduces the complexity of support by consolidating everything under a single vendor support contract.

By making the IT function more efficient and stretching the budget further, the surveyed solution addresses many of the main concerns of SMB staff and management.

Biggest Benefit from Scale Computing HCI

A follow-up survey asked customers of Scale Computing about the actual business benefits they found from implementing HC3. Again, these fell into two basic categories: ease of use and improvement to the information technology environment.

Customer References verified by TechValidate.


Ease of use is the largest benefit by a wide margin. Making the product easy to use increases interest from customers: “Hey, this will work for me.” It also delivers tangible benefits. When tasks are “easier,” they cut down on after-hours and weekend work, free up time to pursue other projects, and make it simpler to train new staff to support the system. Believe me, coming from a guy who carried a weekend pager and supported physical servers, these are huge benefits.

Improvement of environment encompasses many different benefits that customers found, including improved reliability, scalability, and high availability of business-critical workloads. While these benefits are available to any company, the ability of a single product to bring them all together is a game changer. It is now possible to get these benefits from a single package that works in your environment with a minimum of stress, and that is expandable and less expensive than doing it a la carte.

The Feedback

It is refreshing to see actual, verifiable customer feedback from a third party, not marketing slicks: data that extols the value of both HCI and Scale Computing’s implementation of it. This customer feedback is available in condensed form, with the ability to dive deeper into the data, so potential customers can research their own industry, geographical location, and company size. These are real-world data points from customers, not a marketing department.

Is Backup Software Dead?

Is backup software dead? Everywhere I look, I see increased functionality within storage appliances and operating systems. Appliances will back themselves up, operating systems now have quiescing and basic backup support, and the cloud is making backup targets stupid easy. Should I buy dedicated backup software, or does my hardware, hypervisor, or operating system handle this?

As a storage professional, I will never discourage anyone from taking backups. As a matter of fact, I personally believe that more is better. I am sure that many of us have heard the popular saying ‘Two is one and one is none.’ Anyone who has mounted a blank backup that “worked last time” understands the wisdom of multiple backups. But how do we balance this wisdom against the cost of additional backup methods? While there is no one answer that works for everyone, discussion helps us formulate plans.

Many hardware solutions provide backup

I’m a big fan of taking administrative tasks off the production path. As a corollary, the closer I can get backup to where the data lives, the faster the backup will occur and the SMALLER the impact to production systems. It stands to reason: if a system snapshot takes place on the storage appliance and takes milliseconds to execute, isn’t that better than a full backup through system agents that may take hours over the weekend?
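To put rough numbers behind that intuition, here is a quick back-of-envelope calculation. The data size and throughput are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope math: agent-based full backup vs. an array snapshot.
# The figures below are illustrative assumptions, not measured benchmarks.

data_size_gb = 2048          # 2 TB of production data (assumed)
agent_throughput_mb_s = 100  # sustained MB/s through a backup agent (assumed)

hours = (data_size_gb * 1024) / agent_throughput_mb_s / 3600
print(f"Agent-based full backup: ~{hours:.1f} hours")   # roughly 5.8 hours

# A copy-on-write snapshot records pointer metadata only, so it completes
# in milliseconds regardless of volume size.
print("Array snapshot: milliseconds (metadata-only operation)")
```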

To take advantage of this, many storage vendors have added snapshot and replication support within their hardware. In essence, this makes a copy of your data volume and moves it within your environment. Yes, this usually only works on products within the same manufacturing family. Yes, vendors must support quiescing. But many OS vendors are now building functionality into their operating systems to quiesce resident data painlessly. Well, painlessly once you get it set up. What was once the realm of large, intense database houses or financial trading houses now ships with many OSes.
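As a conceptual sketch of how those pieces fit together, consider the following Python-flavored pseudocode. The os_guest and appliance objects are invented stand-ins for whatever your OS and storage vendor actually expose (VSS, array CLIs, and so on), so treat every call here as an assumption:

```python
# Sketch of the quiesce -> snapshot -> resume -> replicate workflow.
# 'os_guest' and 'appliance' are hypothetical stand-ins for vendor APIs.

def protect_volume(os_guest, appliance, volume, replica_target):
    os_guest.quiesce()                     # flush buffers, briefly pause writes
    try:
        snap = appliance.snapshot(volume)  # metadata-only, near-instant
    finally:
        os_guest.resume()                  # production I/O continues right away
    appliance.replicate(snap, replica_target)  # bulk copy happens off-host
    return snap
```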

This seems easy enough, right? Your storage appliance and OS will do most of the difficult work. But what about support for your hypervisor? What about those legacy apps that don’t support any sort of OS quiescing? And what about shops that don’t even have a dedicated storage appliance?

Backup Software

While it will never be as fast as dedicated storage appliance backup, backup software does have a place. Many places, in fact.

Backup software’s arguably most important function is as a broker. The software acts as the middleman between your data (the source) and wherever you would like a copy of the data (the target), and it provides a greater amount of flexibility than the “baked-in” solutions from hardware manufacturers. Of course, this is a simplistic view, and many backup packages have lots of gizmos and what-nots to make a backup administrator’s life easier. But the main function is moving data.
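To make the broker idea concrete, here is a minimal sketch of my own. It is not modeled on any real backup product’s API; it just shows the core move-data loop:

```python
# Toy illustration of backup software as a broker: it only needs a way to
# read the source and write the target, so disk, tape, and cloud targets
# are interchangeable. Not modeled on any real backup product's API.
from typing import BinaryIO

CHUNK = 4 * 1024 * 1024  # 4 MiB transfer chunks (arbitrary choice)

def broker_copy(source: BinaryIO, target: BinaryIO) -> int:
    """Stream data from source to target; returns total bytes copied."""
    copied = 0
    while chunk := source.read(CHUNK):
        target.write(chunk)
        copied += len(chunk)
    return copied

# Usage sketch: the same loop serves any file-like source and target, e.g.
# with open("/dev/sda1", "rb") as src, open("/mnt/nas/sda1.img", "wb") as dst:
#     broker_copy(src, dst)
```

Catalogs, schedules, retention, and deduplication are essentially layered on top of a loop like this one.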

Software works well with dissimilar hardware. Want to back up data across many different manufacturers? Software can do it. Want to move it between disk, tape, and the cloud? Removable media? Software shines here. Want to work with legacy applications or operating systems that may not support quiescing? Software does this and gives you the flexibility to customize it to your environment.

What works for you

I see a place for both hardware and software in a backup strategy. Of course, I’m also the guy that still sees tape as the most economical means to store and archive large amounts of data. The key point is to do what works for you. I’ve worked with large organizations that had more data than could reasonably be backed up through software. In that case, snaps and replication were a great fit. But those same organizations had legacy apps that needed databases backed up hot and live, with log files backed up as well to ensure transactional integrity. Software to the rescue.

My point is that there are many tools in your tool belt. But technology always changes. Does your hardware provide everything you need to recover your data in an emergency? With the amazing growth of data, do you see software remaining a viable backup method into the future? How do budgets and cost affect your decision? Please share your thoughts!

Your Redundancy Roadmap

We’ve all been there. The power goes out, or someone digs up some fiber and you lose connectivity. You either can’t get your work done, or the phone starts ringing with users who can’t. All of us in the IT industry work hard to make sure that the applications and workspaces we support are up and available to the users that need them, when they need them. What basic steps do we take to make sure our systems are up when they need to be up? How do we balance availability and uptime with the IT budget? Where does redundancy figure in our disaster recovery planning? After all, as they said in the movie The Right Stuff, “No bucks, no Buck Rogers!”

Full power redundancy

Those fancy servers of ours go nowhere unless there is power to them. This means dual power supplies in the physical boxes. If one fails, the remaining power supply needs to be large enough to run the server or appliance. In addition, these power supplies need to be on separate electrical circuits; it does little good to have two power supplies if one failing circuit or UPS will take down both. Speaking of Uninterruptible Power Supplies (UPS), a truly redundant system should have failover paths for these as well. There don’t necessarily need to be two, but there should be a clear path to power in the event that a UPS fails.
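If you want to sanity-check your own racks, the arithmetic is simple. Here is a toy example; the wattages are made up, so substitute your own hardware’s specs:

```python
# N+1 power check: can one remaining supply carry the whole load?
# The wattages are made-up examples; substitute your hardware's specs.

psu_capacity_watts = 750   # rating of EACH power supply (assumed)
server_draw_watts = 620    # worst-case draw of the server (assumed)

print("Survives a single PSU failure:",
      server_draw_watts <= psu_capacity_watts)
```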

Physical connections

Your servers or applications don’t do users any good if no one can reach them. That means multiple paths from users to each workload: multiple NICs within each enclosure, cabled to multiple switches, with multiple paths to the core. Multiple demarcs leading to disparate connectivity paths are also important. Of course, these multiple paths get expensive, so use your best judgment as to the return on investment of each option.

Virtual machines

We have our physical hosts covered and multiple paths to the data. Now we need to work on system redundancy. Solutions range from failover clusters with no application downtime to high-availability setups that may incur a limited amount of downtime but will automatically restart workloads on a new virtual machine if the old one fails. These are two different ways to address the Recovery Time Objective (RTO) factor of disaster recovery. As with most things, the smaller the downtime window, the larger the price tag.
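As a conceptual sketch of the high-availability approach (not any vendor’s implementation – every name below is invented), the logic boils down to a heartbeat check and an automatic restart on a surviving host:

```python
# Conceptual sketch of high-availability restart logic. Real cluster managers
# are far more involved; every name below is invented for illustration.
import time

def ha_monitor(vm, hosts, heartbeat_ok, restart_on, interval_s=10):
    """Poll a VM's heartbeat; restart it on a surviving host if it dies."""
    while True:
        if not heartbeat_ok(vm):
            survivor = next(h for h in hosts if h.healthy)
            # The downtime window (detection + reboot) is your effective RTO
            restart_on(survivor, vm)
            return survivor
        time.sleep(interval_s)
```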

Outsource it

And of course, there is always the option to outsource. Having a company host your servers, or going with a cloud solution, are viable approaches to redundancy. Whether you are allowing these services to host all of your computing infrastructure or using them as part of your failover plan, they are tools in your redundancy tool chest. Large cloud providers can spread the cost of massive redundancy across many clients, making it very affordable.

Double up on things

So, we have gone over a few of the most common things IT staff use to ensure consistent connectivity in their environments. Obviously, you know the specific needs of your environment best, and any decisions will need to be weighed against your budget and management’s risk appetite. This article is a jumping-off point for redundancy planning of your network. What are you doing in your environment?

Now for the hard part – P2V Conversion

OK, you’ve researched and spent the money, and now you are the proud owner of a hyperconverged system. Or any truly virtualized system, really. If not, why not? So how do you get all those physical servers into virtual servers, via what is commonly referred to as P2V conversion?

Well friend, pull up a chair and let’s discuss how to convert your systems.

There are three ways to get those physical boxes that crowd your data closet and make it a hot, noisy mess into a virtualized state. Which method you choose will depend on several factors. The three are not mutually exclusive, so feel free to use several of these P2V conversion methods in your environment, depending on the specific requirements of each of your physical servers.

Rebuilding the Server

Rebuild the server. Over the course of time, and after around four dozen service pack updates, there is a lot of trash in a system. Sometimes it is better to just start fresh. This method is best if you would like to update the underlying operating system, or if you are starting to see strange system behaviors. It works best for standard services like DNS, domain controllers, or application servers for which you have clear ways to transfer just the data and configuration files. A clean install of the operating system and application services is a great way to breathe fresh life into an old, tired workhorse of a physical server.
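For the “transfer just the data and configuration files” step, here is a minimal sketch of what that often looks like in practice. The paths and hostname are placeholders, and it assumes rsync is available on both ends:

```python
# Sketch: migrate only data and configuration to a freshly built server.
# The paths and NEW_SERVER hostname are placeholders for illustration.
import subprocess

CONFIG_AND_DATA = ["/etc/myapp/", "/var/lib/myapp/"]  # example paths only
NEW_SERVER = "newhost.example.com"                    # placeholder hostname

for path in CONFIG_AND_DATA:
    # rsync -a preserves permissions and timestamps, and only copies changes,
    # so the transfer can be re-run right before the final cutover
    subprocess.run(["rsync", "-a", path, f"{NEW_SERVER}:{path}"], check=True)
```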

Pros
  • That clean fresh feeling of a new OS install
  • Existing physical servers are up and functional while new server installation occurs
Cons
  • General installation and configuration issues
  • Time constraints – depending on how many servers you are building, well, you are building new servers

P2V Utilities

Utilities. There are as many utilities out there to manage P2V conversions as there are stars in the sky, and everyone has their particular favorite. In essence, these utilities make a disk-image copy of your system and convert it to an ISO image, or even directly into virtual disk formats. It is the same concept as a bare-metal restore. These utilities make an exact copy of your application servers, so all the data and application files stay the same, but so do any strange behaviors that may exist within your operating system. If your server is operating well, this may be the choice for you.

Unfortunately, these utilities require that your server be offline while the copy is made. So plan for a long weekend while this gets done, and make sure your users are aware that the IT department is closed for business while it happens. This is probably not for those highly available services that NEED to be up all of the time, like your 911 systems or the servers that control ATMs.
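For the curious, here is a bare-bones illustration of what such a utility does under the hood. Real P2V tools add driver injection, format conversion, and much more; the paths shown are examples only:

```python
# Bare-bones disk imaging: read the raw (offline!) device in chunks, write
# an image file, and hash as you go so the copy can be verified later.
import hashlib

def image_disk(device_path: str, image_path: str, chunk: int = 1 << 20) -> str:
    """Copy a block device to an image file; returns the image's SHA-256."""
    digest = hashlib.sha256()
    with open(device_path, "rb") as dev, open(image_path, "wb") as img:
        while block := dev.read(chunk):
            img.write(block)
            digest.update(block)
    return digest.hexdigest()

# e.g. image_disk("/dev/sdb", "/backup/server01.img")  # example paths
```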

Pros
  • Easy, guided conversion of disk images
  • Often converted directly to ISO image
  • Conversion often possible between virtual disk formats
Cons
  • Server MUST be offline to perform conversion operation
  • Time consuming
  • Application downtime
  • Freeware or inexpensive utilities may not have support contracts available

Dedicated High Availability or Replication Software

Dedicated software exists for those servers that need to be virtualized but can’t be down for the hours it may take to use the disk utilities discussed above. These are pay-for utilities, but they fill a need the disk-image utilities do not address. They often operate like a highly available failover pair: agents are loaded on two servers, one physical, holding the information you wish to virtualize (the “source”), and one virtual, with only an OS and the agent (the “target”).

In this scenario, the utility makes a full “backup” from the source server to the target server. Changes then propagate from the source to the target on a regular schedule. When the cutover occurs, the physical server goes down and the virtual server comes up as an exact copy, often down to the same IP addressing. This cutover can often happen in only minutes.
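Here is a conceptual sketch of that workflow. The agent API is entirely invented to show the shape of the process, not how any particular product works:

```python
# Conceptual sketch of replication-based P2V: seed a full copy, ship deltas
# on a schedule, then cut over. The agent methods are invented stand-ins.
import time

def replicate_until_cutover(source_agent, target_agent, sync_interval_s=300):
    target_agent.restore(source_agent.full_backup())    # one-time seed copy
    while not source_agent.cutover_requested():
        delta = source_agent.changed_blocks()           # only what changed
        target_agent.apply(delta)
        time.sleep(sync_interval_s)
    source_agent.shut_down()                            # brief outage begins
    target_agent.apply(source_agent.final_delta())      # catch the last writes
    target_agent.bring_up_with_source_identity()        # same IP, now virtual
```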

Pros
  • Less downtime for those critical servers
  • Exact copies of functional servers down to the minute
  • Support contracts are available
Cons
  • Often a pay-for utility or service. While this may not be an obstacle for IT shops, large numbers of servers mean large licensing fees
  • Often requires more time and better scheduling than other conversion methods
  • Small period of time that services are unavailable while cutover occurs
  • Invasive – new agent software loaded on source and target servers

We have discussed the three ways that new hyperconvergence or virtualization shops can convert their physical servers to virtual servers. Building new servers, using disk-imaging utilities, and running highly available replication agents all have pros and cons to weigh. All three move your physical servers to virtual servers and get you the benefits of virtualization.

The SMB Owner’s Guide to Ensuring Your Success with Hyperconvergence

Hyperconvergence is the newest IT architecture removing both cost and complexity from virtualization infrastructure. This article assumes you are aware of the advantages of hyperconvergence and how it applies to the business end of your small to medium business. What we are going to discuss is how to ensure that you get the TRUE advantages of hyperconvergence, not just the ones all those fancy marketing papers promise.

A small to medium business (SMB) doesn’t mean just a tiny kiosk in the mall with a single POS computer. We’re talking about SMB in terms of 50 to 500 employees with an IT staff of up to 5 full- or part-time staffers.

There are a lot of claims out there around hyperconvergence technologies. At the top of the list is reduced cost. A simpler environment for your IT staff, and thus increased productivity, is another. As the business owner, what questions do you need to ask to ensure that your hard-earned capital is well spent?

Among all the claims, there are 5 things to look for in a hyperconverged solution to ensure that it brings your business everything it can.

Vendors in the solution

One of the claims of hyperconvergence is simplification of the solution. This is potentially achieved by eliminating the multitude of vendors that are part of a traditional virtualized solution. This solution involves how many vendors? Where do the individual responsibilities of each vendor start and stop? Will you need multiple support contracts, or is everything covered under one master contract? Is there a central support number to call, or is there the possibility of finger-pointing between various manufacturers? In this vein, is the solution the intellectual property of one company, or are there different licensing agreements in place? How could this affect YOUR investment in the event of a manufacturer bankruptcy?

Licensing

The initial install of the solution is probably correctly sized for your business. What happens if you need to expand that installation? If you need more virtual servers, or to add more users, are there going to be additional license fees (VMware)? What about yearly maintenance fees, will those grow, too? What if we expand and I want to add virtual servers at another location? Are my licenses “tiered,” getting more expensive for additional functionality or when I hit a certain license count? These are not necessarily deal-breakers, but forewarned is forearmed. It sure helps to have a reliable idea of licensing costs when budget time rolls around.
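To see why these questions matter at budget time, here is a toy projection. Every figure in it is invented purely to illustrate the math, not a real vendor quote:

```python
# Toy projection of per-node licensing growth; every figure is invented
# purely to illustrate the budgeting question, not a real vendor quote.

license_per_node = 3500   # hypothetical up-front license cost per node
maintenance_rate = 0.20   # hypothetical 20% annual maintenance fee

for nodes in (3, 6, 12):
    upfront = nodes * license_per_node
    yearly = upfront * maintenance_rate
    print(f"{nodes:>2} nodes: ${upfront:,} up front, ${yearly:,.0f}/year after")
```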

Expandability

Hyperconverged solutions come in all shapes and sizes. Different solutions exist for a dozen virtualized servers and for several hundred. Whichever you have is not as important as the answer to the question: is the solution expandable? Does it have the ability to cover your business as it grows, without the dreaded “fork-lift upgrade” and the downtime that means for the profit centers of your business? And if upgrades are possible, do they involve downtime? Can your sales department sell while the upgrades occur?

Installation

Sure, everyone will be more than happy to install this beast once you have signed on the dotted line, but just how complex is that installation? Can we operate on the existing systems and minimize downtime while the installation occurs? How complex is the switchover to the new systems (easily migrating VMs or data)? Can your IT staff shadow the installation? Is it easy enough that they could do it themselves with just a bit of guidance? Can your staff expand the system, or will you need outside help?

Ease of Use

Now that we have it and everything is running, just how difficult is it to get my IT staff up to speed on the product? Is there additional training that will take my staff off site in order to learn how to use this product? Once I train my staff, am I in danger of losing them to a competitor willing to pay more for those certifications? When we add additional virtual servers to the environment, will my staff be able to do that? How difficult is it, and how long will it take? Since my staff isn’t as large as some of the big-guys, how difficult is it to cross-train?

Summary

Hyperconvergence is an amazing leap forward for IT virtualization. Correctly sized, designed, and implemented, it promises a lot to the small to medium business. But like most things in life, one size doesn’t necessarily fit all. Spending money wisely requires due diligence. Make sure the business squeezes all of the value that you paid for from this solution. Address the questions around vendors, licensing, system expandability, installation, and ease of use.

Engage with the manufacturers and ask the solution providers the next-step questions addressed in this article. This will ensure that you enjoy the advertised advantages while getting the exact solution to benefit your business NOW.
