Decisively or Indecisively Legacy?
Some product lines come with a well-defined roadmap that permits customers to anticipate and plan for upcoming end of service life (EOSL) announcements. This is generally not the case, however, and an edict from a manufacturer that customers must either upgrade or suffer the consequences is usually both unwelcome and unexpected.
Many businesses will simply move to the suggested upgrade product to avoid the hassle. Others will refuse to upgrade for the same reason, leaving them with legacy product that's in the asset mix out of short-term convenience rather than long-term strategy. But there is value in examining the continuing role of EOSL products in the datacenter, and in deciding to retain them when the objective evidence backs that decision.
Hardware is not going to brick itself and stop working at midnight on the EOSL date. The real issue with EOSL is the end of support, rather than the end of service. EOSL products might lose callback and onsite guarantees, and after the EOSL date may only be eligible for unpredictable and costly time-and-materials (T&M) support. Keep in mind that some OEMs may use a fancier name for it and bill it on a subscription rather than a time-of-service model; but if there is no assurance of a person with a part onsite within a certain time, there's not much comfort in that service level.
OEMs might also make it very simple and offer no support whatsoever on some EOSL products, depending on their policy. Other support options exist, of course: the EOSL product might be self-supported by certified techs on your own staff using aftermarket parts, or have full Service Level Agreement coverage under a third-party support contract. If the product is relatively cheap or commoditized, you may have a pool of replacements to draw on for some time after EOSL. Regardless, before you even begin a survey to determine what viability your legacy products have, you must have a clear support coverage solution stated and practiced. Doing nothing until there's no choice is not much of a policy, and it will virtually guarantee that the eventual reactive solution is either expensive or inadequate, if not both.
A legacy survey is a three-phase exercise that might take an hour or six months, depending on your level of access to metadata about existing assets and business processes. It may be trivial or overwhelming, and it highlights the value of good asset and systems management tools. Ideally your company has specific review/PMO policies and follows standards like ITIL/ITSM for handling these kinds of scenarios. But few businesses have a clearly defined process specifically for dealing with EOSL announcements and service level reductions, and those that do may not address all three essential phases of an EOSL survey equally. This is in no way meant to suggest that this blog describes a better practice than the one presented in ITIL or achieved by adhering to ITSM practices; it only reflects the reality that far from all businesses use them, and those that do don't always adhere to them 100%.
What you have, what it does now, what it does best
The legacy survey is neither cryptic nor unintuitive: the key to success is applying objective analysis to resource allocation on existing systems and comparing it to resource allocation under one or more hypothetical replacement scenarios. A virtualized and/or private cloud environment starts two steps ahead, because the first two phases of the survey are also necessary for successful virtualization; a traditional datacenter, where specific hardware resources are dedicated to specific processes in a set-it-and-forget-it manner, may find asset and resource allocation data harder to come by. Of course, if you're using a public cloud then EOSL isn't your concern, but you can be sure your hosting providers are concerned with it.
Phase 1: Identify your assets and their capabilities
The first phase of the survey is identifying your assets across the board, in as much detail as is relevant and possible. If you use an asset management solution such as Kaseya or a home-grown dashboard, this is as simple as running a report or two; it may be trivial information you're already on top of. But it may also be an ugly mess of network configuration details, remote server closets in branch locations, or mysterious hardware you're not entirely sure about. In any such case, getting on top of the asset information can be a daunting task depending on the size and complexity of the datacenter; newsletters and websites dedicated to asset management exist for good reason. For the legacy survey you want to see everything you have available, not merely your legacy products, but if you really can't afford a full asset identification phase, you need to at least identify the legacy systems and the processes they touch.
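To make that concrete, here is a minimal Python sketch of what a phase 1 pass might look like, assuming your asset management tool can export a CSV with hostname, model, and location columns. The file name, column names, and hard-coded EOSL model list are all hypothetical; in practice the EOSL list would come from vendor lifecycle bulletins, and your export will have its own schema.

```python
import csv
from collections import defaultdict

# Hypothetical EOSL list -- in practice this comes from vendor
# lifecycle bulletins, not a hard-coded set.
EOSL_MODELS = {"ProLiant DL380 G5", "PowerEdge 2950", "CLARiiON CX3"}

def survey_assets(csv_path):
    """Group assets by model and flag those past EOSL.

    Assumes a CSV export with 'hostname', 'model', and 'location'
    columns; adjust the field names to match your own tooling.
    """
    by_model = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            by_model[row["model"]].append(row)

    for model, assets in sorted(by_model.items()):
        flag = "EOSL" if model in EOSL_MODELS else "ok"
        print(f"{model}: {len(assets)} unit(s) [{flag}]")
        for asset in assets:
            print(f"  {asset['hostname']} @ {asset['location']}")
    return by_model

if __name__ == "__main__":
    survey_assets("assets.csv")
```

Even a toy report like this forces the question the phase exists to answer: which models you own, how many of each, and where they live.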
Phase 2: Identify your needs and capacities
The second phase shifts the analysis from identifying hardware to identifying need. In practice this can play out in a variety of ways: you can analyze resource allocation over time, at actual maximum use, at average use, and so on. This may be as simple as running resource allocation reports appropriate to the EOSL product: server usage details for EOSL servers, storage use and capacity for EOSL storage arrays, etc. The important thing is to collect as much hard data and as many actual numbers as possible in order to compare current reality with a future possibility, using the speeds-and-feeds information about the replacement products supplied by the vendor(s). This should be easier, if not trivial, for a virtualized datacenter compared to a traditional one, but if you don't already have a good idea of the resource hogs and repeat offenders in the datacenter, it may be both a difficult and an illuminating undertaking. In either case you're not looking for a real-time ticker on what's happening now; you're trying to determine whether processing and processes are already allocated to the best hardware for the job, and if so, what that looks like by the numbers. Once you have this data you'll have visibility that would be impossible to achieve without completing the first two phases.
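The average-versus-peak distinction matters more than any single number, so a phase 2 summary might look something like the sketch below. The sample values and the rough 95th-percentile calculation are purely illustrative; the real samples would come from whatever monitoring tool you actually run (Kaseya, vCenter, Nagios, and so on).

```python
import statistics

def utilization_profile(samples):
    """Summarize a series of utilization samples (0-100 percent).

    The survey cares about average vs. peak: a box averaging 20%
    that spikes to 95% needs different headroom than one sitting
    flat at 60%.
    """
    return {
        "avg": statistics.mean(samples),
        "p95": sorted(samples)[int(len(samples) * 0.95) - 1],
        "peak": max(samples),
    }

# Hypothetical hourly CPU samples for a legacy database server.
legacy_db_cpu = [18, 22, 19, 25, 31, 88, 92, 40, 26, 21, 20, 19]

profile = utilization_profile(legacy_db_cpu)
print(f"avg {profile['avg']:.1f}%  p95 {profile['p95']}%  "
      f"peak {profile['peak']}%")
```

The same shape of summary works for storage capacity, IOPS, or network throughput; only the unit changes.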
Phase 3: Compare current metadata with potential future replacements
In the third and final phase, bring the data from the first two phases together to determine the objective, demonstrable value of retaining legacy systems versus replacing them with new ones. By completing phases 1 and 2 before analysis begins, you can compare apples to apples and determine adequacy and cost factors using real data relevant to your business. This standardization provides mathematical evidence beyond cost to back a decision, rather than forcing you to trust a hunch. Without it, you're much more vulnerable to marketing hype or peer pressure, neither of which should be trusted when it contradicts objective fact. When evidence is lacking, flipping a coin is a valid means of reaching a decision, but it's better practice to make sure you actually collect and use the available evidence instead. And since every hardware product eventually passes into EOSL, standardizing the collection and analysis of asset and process data is a worthy long-term goal.
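As one possible shape for that comparison, the sketch below reduces a retain-versus-replace decision to annual cost per unit of capacity. Every number, field name, and the choice of cost components here is hypothetical; substitute the figures from your own phase 1 and phase 2 data and the vendor's actual quote, and add whatever cost factors matter to your business (migration labor, rack space, licensing, and so on).

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    annual_support: float   # T&M estimate, third-party SLA, or OEM contract
    annual_power: float     # power and cooling
    capex_amortized: float  # purchase price spread over expected life; 0 if retained
    capacity_units: float   # whatever unit phase 2 measured: IOPS, vCPUs, TB...

    def annual_cost(self):
        return self.annual_support + self.annual_power + self.capex_amortized

    def cost_per_unit(self):
        return self.annual_cost() / self.capacity_units

# Hypothetical numbers for a legacy storage array and its suggested upgrade.
retain = Scenario("keep legacy array", annual_support=12000,
                  annual_power=4000, capex_amortized=0, capacity_units=50)
replace = Scenario("vendor's upgrade", annual_support=6000,
                   annual_power=2500, capex_amortized=15000, capacity_units=120)

for s in (retain, replace):
    print(f"{s.name}: ${s.annual_cost():,.0f}/yr, "
          f"${s.cost_per_unit():,.2f} per capacity unit")
```

Whether the legacy gear wins or loses in a model like this, the point is that the decision rests on your numbers rather than on the OEM's announcement.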