Reasons to Move Away From Tape Backups
Poor backup performance
- Tape systems have difficulty meeting shrinking backup windows
- Not conducive to the remote and off-site user requirements
- Daily and weekly incremental backups exceed the allotted time windows
- Recovery from physical tape is cumbersome
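The backup-window problem above is largely arithmetic: a full backup cannot finish faster than the drive can stream the data. A minimal back-of-envelope sketch, using an illustrative tape throughput figure (not a claim about any specific drive):

```python
def backup_hours(data_tb: float, mb_per_sec: float = 120.0) -> float:
    """Hours needed to stream data_tb terabytes at mb_per_sec MB/s."""
    total_mb = data_tb * 1024 * 1024
    return total_mb / mb_per_sec / 3600

# 10 TB at a sustained 120 MB/s takes roughly 24.3 hours of continuous
# streaming -- already longer than an overnight backup window, before
# accounting for media changes, verification, or drive shoe-shining.
print(round(backup_hours(10), 1))
```

As data volumes grow, this number only moves in one direction, which is why daily and weekly jobs start spilling past their allotted windows.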
Lengthy recovery time
- Difficult to recover in a "user" environment
- Recovery of business-critical data and applications from tape can take many hours, if not days
- Ironically, the "best practice" of using off-site tape repositories adds to recovery time
Increasing operational and capital costs
- Tape deployments are costly and cumbersome to maintain
- Adding capacity to existing tape libraries and upgrading tape drive technology to non-compatible, higher-capacity media is costly
- Ongoing service, support, and warranty is high, especially for aging tape systems
Risk from defective, lost, or misplaced tapes
- Shipping tape cartridges to off-site repositories exposes firms to security breaches and risks due to loss
- Defective media can result in risk of un-recoverable data
- IT organizations must invest in tape encryption and key management systems to alleviate security risks, which adds cost and complexity
- The "human element" involved when handling tape adds to risk of data loss and increases operational costs
Poor storage efficiency
- Explosion in the growth of redundant data with tape backup -- data de-duplication techniques cannot easily be applied
- Tape-based compression is typically only 2:1, resulting in many sets of media; with a generational tape archiving scheme, 1 TB of data typically requires 25 TB of media
- Need to move to another media type to gain additional capacity
- Tape media is not always read- or write-compatible across drive generations
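The storage-efficiency gap comes from deduplication: disk targets can hash fixed-size blocks and store one copy per distinct block, something a streaming tape format cannot do. A minimal sketch of the idea (the data here is synthetic, standing in for two nearly identical backup generations):

```python
import hashlib

def dedupe_ratio(stream: bytes, block_size: int = 4096) -> float:
    """Ratio of logical blocks to unique blocks when the stream is cut
    into fixed-size chunks and one copy is kept per distinct hash."""
    seen = set()
    blocks = 0
    for i in range(0, len(stream), block_size):
        blocks += 1
        seen.add(hashlib.sha256(stream[i:i + block_size]).hexdigest())
    return blocks / len(seen)

# Two full-backup "generations" that differ by a single changed block:
gen1 = bytes(4096) * 256                      # first full backup
gen2 = b"\x01" * 4096 + bytes(4096) * 255     # next full: one block changed
print(dedupe_ratio(gen1 + gen2))              # 256.0
```

Because successive fulls of mostly-unchanged data are overwhelmingly redundant, dedupe ratios far beyond tape's typical 2:1 compression are routine on disk, which is how generational retention avoids the many-multiples-of-media problem described above.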
Disaster Plans Fall Short of Requirements
Recent Events Highlight Shortcomings in Disaster Recovery and Business Continuity Plans
CIOs and individuals responsible for the recovery process have found that many disaster and business continuity plans were partial, outdated, or ineffective. Why was it so difficult to get it right?
Experts say there are five main reasons for this:
- Data collection: How was the data collected for the disaster and business continuity plan in the first place? There was no single source for everything that was needed, particularly when trying to integrate relevant external information such as support dates, power consumption, etc.
- Data inconsistency: How do organizations handle the inherent inconsistencies in data? For example, OS version numbers often conflict; vendors change their product names or renumber versions over time. Normalizing the data (making it adhere to consistent rules and categories) is a cumbersome task, and the accuracy and consistency of the data need to be reassessed at every step.
- Categorization: When CIOs want to categorize the information in the disaster and business continuity plan, they have to create the taxonomy (or hierarchical categorization) for the industry data. This alone is a significant task: there are many ways to slice and dice the universe of technology products, and no standards have been defined within the industry.
- Manageability: Any extensive technology disaster and business continuity plan is a large and complex data store. A spreadsheet is insufficient for storing and managing rich structured data for thousands of products and vendors. The disaster and business continuity plan should be able to track and maintain the complex relationships between technologies and categories (parent/child relationships, one-to-many mappings, and so on). Developing an appropriate, extensible data store is a complex undertaking.
- Maintenance: As soon as organizations have finished the disaster and business continuity plan, they have to start updating it. The Information Technology industry is constantly changing, which means that the DRP / BCP work is never done. If companies go through a massive effort to produce a disaster and business continuity plan for a single business function, the value of that investment is lost if they cannot keep it up to date.
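The normalization step described above is essentially a mapping from inconsistent inventory strings to canonical names, plus a way to flag what the mapping does not recognize. A minimal sketch; the alias table is illustrative, not an authoritative product catalog:

```python
# Illustrative alias table mapping raw inventory strings to canonical
# names. A real table would be far larger and continually maintained.
ALIASES = {
    "win2k8": "Windows Server 2008",
    "windows server 2008 r2": "Windows Server 2008 R2",
    "rhel5": "Red Hat Enterprise Linux 5",
    "redhat el 5": "Red Hat Enterprise Linux 5",
}

def normalize(raw: str) -> str:
    """Return the canonical name, or flag the entry for manual review."""
    key = raw.strip().lower()
    return ALIASES.get(key, f"UNRECOGNIZED: {raw}")

print(normalize("RHEL5"))        # two spellings of the same OS...
print(normalize("Redhat EL 5"))  # ...resolve to one canonical name
```

The unrecognized-entry path matters as much as the happy path: every flagged string is a data-quality task, which is why the reassessment described above never really ends.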
Infrastructure size is increasing - data may overwhelm users
Recent surveys show that nearly two-thirds of companies expect up to 30 percent data and record growth in the next year, and 75 percent of companies now expect to be able to restore large quantities of lost data in less than three hours in the event of data corruption, server crash, or other type of outage. How can you keep up with the increasing volume of data that needs to be backed up while still assuring compliance with regulations and corporate policies, keeping costs under control, and guaranteeing that you can restore the data in a timely fashion?
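The three-hour restore expectation can be turned into a throughput requirement, which makes the gap concrete. A quick sketch, with an illustrative 5 TB restore volume:

```python
def required_mb_per_sec(data_tb: float, hours: float) -> float:
    """Sustained MB/s needed to restore data_tb terabytes within hours."""
    return data_tb * 1024 * 1024 / (hours * 3600)

# Restoring 5 TB inside a 3-hour window demands roughly 485 MB/s
# sustained end-to-end -- before tape mounts, seeks, or off-site
# courier time are even counted.
print(round(required_mb_per_sec(5, 3)))
```

A single tape drive streaming at full speed falls well short of that rate, which is why meeting modern recovery-time expectations from physical tape is so difficult.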
According to industry experts, there will be over 58 million virtual servers in use by the end of 2010. This represents a huge volume of data being added to environments just in terms of operating system copies and related data.
To make life a little more challenging, research shows that 80 percent of every IT dollar will be spent on infrastructure maintenance, limiting budgets available for new technologies to improve the business. Research also shows that over 70 percent of data created today resides outside of the corporate data center, requiring companies to look for ways to manage and protect data in many locations.
If you don't have proper backup processes and technologies in place, you are playing a dangerous game with your company's future - it's not a question of if valuable business data will be lost, but when. And even if you do have a process in place, it might not be able to keep pace with the rising tide of data that is being generated.
Security Policy and Procedure Template
Data security and protection are a priority, and this template is a must-have tool for every CIO and IT department. Over 3,000 enterprises worldwide have acquired this tool, and it is viewed by many as the Industry Standard for Security Management and Compliance.
Infrastructure Policies - World Class Solution That Is Easy To Implement
When a CIO or an IT executive takes over a new job, one of the greatest challenges is to quickly validate the infrastructure that is in place. Would it not be nice to have tools that could be used to quickly put proven, world-class policies in place with minimal effort? That is what the CIO IT Infrastructure Policy Bundle does.
Gain control over your IT realm! Download a collection of Janco's IT infrastructure and policy templates. Each can be modified to align with your needs. This comprehensive collection comes with a variety of highly researched tools that will help you develop a complete guide that fits the unique needs of your organization and provides tools and suggestions for policy communication and enforcement.
Defining Your Optimal IT Infrastructure is a critical task that can no longer wait, given the changes mandated by PCI-DSS, HIPAA, ISO, ITIL, and Sarbanes-Oxley, the changing economic environment, and changes to enterprise operating environments.