Predicting Tech: Is This The True Rush To The Cloud?

A few thoughts on cloud with an hour left at work on a Friday.

Hosted services aren’t new. Virtualization isn’t new. The practice of hosting applications grew out of advances in virtualization technology. Remember, it was mainframes that pioneered virtualized partitioning back in the 1960s – the forerunner of what we know today as logical partitions, or LPARs. The technology eventually moved to the distributed world and allowed a single physical box to host multiple, isolated environments or clients. Thus were born the first hosted applications – what we can consider early cloud solutions.

Today the technology has advanced far beyond those simple examples. Hardware’s virtualized. So are applications. Memory, IO and network connectivity are not only virtualized, but also managed (by the hardware, the operating system or third-party software) to provide real-time redundancy and failover for nearly 100% uptime.

Thus we see old factory buildings and warehouses being repurposed as datacenters. Add in some redundant power, cooling and network connections and anyone can set up and host a cloud server farm. Seems like the rush has arrived, right?

Not so fast. Yes, there’s a bull rush to move everything possible into the cloud. For the public at large, it’s a great way to store and access music, share photos, run hosted applications like Salesforce and Word, and stream video. For a business, it’s a great way to add functionality without added overhead. You don’t need a cross-company hardware upgrade or extra seats to support a new piece of enterprise software. The software is hosted, it runs through a browser and the cloud services provider handles backup, availability and most support (which you’ll want to confirm and evaluate, of course).

Yet for all the hype, the true rush-to-cloud hasn’t yet begun. Remember: when you move any portion of your business or its functionality into the cloud, you’re inevitably going to face bandwidth bottlenecks, massive upload queues, taxed servers, partial data loss or decay, and all the other headaches that come from relying on someone else to deliver functionality that used to reside in-house. Total solutions have not yet arrived, but they’re on the way.

That’s why I argue that the true cloud rush probably won’t come until sometime in late 2015 or early 2016.

What do you think? And why? Sound off in the comments section below.

Want to know more about how to move into the cloud? Contact TxMQ: (716) 636-0070 or [email protected].

Best Practices For Virtualization Optimization

Virtualization is a key technology for many organizations. It delivers higher server utilization, faster deployment and the ability to quickly clone, copy and deploy images. As virtualization grows, businesses are working to optimize its performance and tame the challenges that come with any technology. Organizations that optimize their virtualization see the benefits across the whole business.

One of the main goals of virtualization is to centralize administrative tasks while improving scalability and workload management. IBM identifies five entry points where this optimization can happen:
1. Image Management
2. Patch & Compliance
3. Backup & Restore
4. Cost Management
5. Monitoring and Capacity Planning
1. Image Management – Optimize the Virtual Image Lifecycle: A virtualization environment needs core, baseline images, and those images must be managed properly. Optimizing the environment lets an organization manage them throughout their lifecycle. Creating a virtual image library improves the assessment, reporting, remediation and enforcement of image standards; it makes it easier to find unused images and images that need patching, and it allows more frequent patching to maintain compliance. Other strategies for image management include increasing the ratio of managed images to administrators to decrease IT labor costs, and improving self-service capabilities for direct end-user access.
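
To make that concrete, here’s a minimal sketch of the kind of image-library audit described above, written in Python. The inventory records and field names (last_deployed, patch_current) are hypothetical stand-ins for whatever your image-management tool actually exposes:

```python
from datetime import datetime, timedelta

# Hypothetical inventory; a real image library would supply these
# records through your virtualization manager's API.
IMAGES = [
    {"name": "rhel6-base",  "last_deployed": datetime(2013, 1, 15), "patch_current": False},
    {"name": "win2008-web", "last_deployed": datetime(2013, 7, 2),  "patch_current": True},
    {"name": "ubuntu12-db", "last_deployed": datetime(2012, 11, 9), "patch_current": False},
]

STALE_AFTER = timedelta(days=90)  # assumed retirement threshold

def audit_images(images, now):
    """Flag images that look unused and images out of patch compliance."""
    stale = [i["name"] for i in images if now - i["last_deployed"] > STALE_AFTER]
    unpatched = [i["name"] for i in images if not i["patch_current"]]
    return stale, unpatched

stale, unpatched = audit_images(IMAGES, now=datetime(2013, 9, 1))
print("Candidates for retirement:", stale)  # ['rhel6-base', 'ubuntu12-db']
print("Need patching:", unpatched)          # ['rhel6-base', 'ubuntu12-db']
```
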
2. Patch & Compliance – Optimize Patch Remediation: Automating patch assessment and management increases the first-pass patch success rate, reduces IT workload and helps organizations comply with security standards. Automation also reduces security risk, because it shrinks the window between vulnerability and repair and provides greater visibility through flexible, real-time patch monitoring and reporting from a single management console. A closed-loop design lets admins patch as fast as they can provision by enabling security and operations teams to work together, which helps provide continuous compliance enforcement in rapidly changing virtualized and cloud environments.
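
A rough sketch of the assessment half of that loop: compare each VM’s installed package versions against a required baseline and report the gaps. The baseline, versions and fleet inventory below are all hypothetical; a real tool would pull them from the management console:

```python
# Hypothetical security baseline: package -> required version.
REQUIRED = {"openssl": "1.0.1e", "bash": "4.2-45", "kernel": "2.6.32-358"}

def assess(installed):
    """Return the packages that don't match the required baseline."""
    gaps = {}
    for pkg, want in REQUIRED.items():
        have = installed.get(pkg)
        if have != want:
            gaps[pkg] = (have, want)
    return gaps

# Hypothetical fleet inventory, as a patch scanner might report it.
fleet = {
    "web-01": {"openssl": "1.0.1c", "bash": "4.2-45", "kernel": "2.6.32-358"},
    "db-01":  {"openssl": "1.0.1e", "bash": "4.2-39", "kernel": "2.6.32-279"},
}

for vm, installed in fleet.items():
    gaps = assess(installed)
    print(vm, "compliant" if not gaps else f"needs patches: {gaps}")
```

In a true closed loop, those gaps would feed straight into the patch-deployment tool, with a re-assessment afterward.
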
3. Backup & Restore – Optimize Resilience and Data Protection: Data is growing as fast as, if not faster than, the virtualized and cloud-based environments that hold it, and protecting and managing an organization’s data is key to virtualization optimization. Deduplication, typically paired with incremental backups, is one way to simplify and improve data protection and management: it speeds both backups and restores, and it conserves resources and bandwidth by shrinking the space and time each backup requires. That in turn gives the business lower equipment and management costs.
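
Here’s a toy illustration of block-level deduplication, the core idea behind those savings: hash each block of a file and store only the blocks you haven’t seen before, so unchanged data costs nothing on subsequent backups. A bare-bones sketch, not a real backup tool:

```python
import hashlib

BLOCK_SIZE = 4096
store = {}  # digest -> block bytes; stands in for the backup repository

def backup(path):
    """Deduplicated backup: keep each unique block once and record the
    file as an ordered list of block digests (a 'recipe')."""
    recipe = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:   # new data: store the block itself
                store[digest] = block
            recipe.append(digest)     # repeated data: store only a reference
    return recipe

def restore(recipe):
    """Rebuild a file's bytes from its recipe."""
    return b"".join(store[d] for d in recipe)
```

Backing up a second, lightly changed copy of a file stores only the changed blocks – which is exactly where the bandwidth and storage savings come from.
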
4. Cost Management – Optimize Metering & Billing: While virtualization helps organizations reduce operating costs overall, optimizing it tells organizations where those costs are incurred. Automatic collection of usage data shows how many resources internal users are consuming and gives service providers accurate figures for billing. These analytics help organizations better understand the use and cost of compute, storage and network resources, which improves the overall business by letting organizations charge accurately for services.
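
As a simple illustration, metering and chargeback can start out as basic as aggregating usage records per department and pricing them. The rates and records below are purely hypothetical; real figures would come from your provider’s metering feed:

```python
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "gb_storage": 0.10}  # hypothetical $ per unit

usage_records = [
    {"dept": "marketing",   "cpu_hours": 120, "gb_storage": 500},
    {"dept": "engineering", "cpu_hours": 940, "gb_storage": 2200},
    {"dept": "marketing",   "cpu_hours": 60,  "gb_storage": 150},
]

def chargeback(records):
    """Aggregate metered usage per department and price it for billing."""
    totals = defaultdict(float)
    for rec in records:
        for metric, rate in RATES.items():
            totals[rec["dept"]] += rec[metric] * rate
    return dict(totals)

print(chargeback(usage_records))
# {'marketing': 74.0, 'engineering': 267.0}
```
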
5. Monitoring and Capacity Planning – Optimize Availability with Resource Utilization: By monitoring performance and planning with historical data, organizations can take a proactive approach – fixing issues before users notice them and planning ahead to keep systems and applications optimized. This approach reduces resource consumption by right-sizing virtual machines for different workloads, speeds deployments by spotting bottlenecks, and reduces licensing costs by consolidating virtual machines onto fewer hosts.
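
Right-sizing from historical data can be very simple at first: look at each VM’s observed peak utilization and suggest a size with some headroom above it. The utilization history and headroom factor here are hypothetical; a monitoring tool would supply the real numbers:

```python
# Hypothetical 30-day peaks per VM, as a monitoring tool might report them.
history = {
    "app-01": {"vcpus": 8, "peak_cpu_pct": 22, "mem_gb": 32, "peak_mem_gb": 9},
    "app-02": {"vcpus": 4, "peak_cpu_pct": 85, "mem_gb": 8,  "peak_mem_gb": 7},
}

HEADROOM = 1.25  # keep 25% above the observed peak

def right_size(vm):
    """Suggest vCPU and memory sizes from historical peaks plus headroom."""
    vcpus = max(1, round(vm["vcpus"] * vm["peak_cpu_pct"] / 100 * HEADROOM))
    mem_gb = max(1, round(vm["peak_mem_gb"] * HEADROOM))
    return vcpus, mem_gb

for name, vm in history.items():
    vcpus, mem_gb = right_size(vm)
    print(f"{name}: {vm['vcpus']} vCPU/{vm['mem_gb']} GB -> {vcpus} vCPU/{mem_gb} GB")
```

Heavily over-provisioned VMs (like app-01 above) shrink, freeing capacity to consolidate more machines onto fewer hosts.
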
IBM provides virtualization optimization by basing its cloud services and software on an open cloud architecture. The IBM SmartCloud foundation is designed to help organizations of all sizes quickly build and scale the infrastructure and platform capabilities of their virtualized and cloud environments. It provides the delivery flexibility and choice organizations need to evolve an existing virtualized infrastructure into a cloud, accelerates adoption with integrated systems, and offers immediate access to managed services. IBM’s expertise, open standards and proven infrastructure help an organization achieve new levels of innovation and efficiency.

Take it to the Cloud.

By Wendy Sanacore

Everywhere you turn now, you hear people buzzing about Virtualization and going to the Cloud. So, what is it? And why is it so great for your company?

Here’s a high-level overview of what Virtualization is. Right now, most businesses run each application on its own physical server, each with its own operating system, across several data centers. That gets very expensive once you factor in hardware, data-center facilities, operating-system licenses and ongoing maintenance.

Virtualization provides an alternative to all of these costs by letting you run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer. Different virtual machines can run different operating systems and multiple applications concurrently on the same hardware, safely isolated from one another, with each getting access to the resources it needs.
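
If you’d like to see this in action, the open-source libvirt library can enumerate the virtual machines sharing a single host. A minimal sketch, assuming the libvirt-python bindings and a local QEMU/KVM hypervisor are installed:

```python
import libvirt  # pip install libvirt-python

# Connect to the local hypervisor (assumes QEMU/KVM on this machine).
conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPU(s), {mem_kib // 1024} MiB "
              "of the host's shared memory")
finally:
    conn.close()
```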

With Virtualization, you don’t need to permanently assign servers, storage or network bandwidth to each application. Instead, your hardware resources are dynamically allocated when and where they’re needed within your private cloud. Your highest-priority applications always have the resources they need, without wasting money on excess hardware that’s only used at peak times. Connect this private cloud to a public cloud to create a hybrid cloud, giving your business the flexibility, availability and scalability it needs to thrive.
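
As a toy illustration of that dynamic allocation, here’s a sketch that places each incoming workload on whichever pool has spare capacity, bursting to a (hypothetical) public cloud once the private cloud fills up:

```python
# Capacity figures are made up for illustration.
pools = [
    {"name": "private-cloud", "free_vcpus": 16},
    {"name": "public-cloud",  "free_vcpus": 10_000},  # effectively elastic
]

def place(workload, vcpus_needed):
    """Give the workload to the first pool that can fit it."""
    for pool in pools:
        if pool["free_vcpus"] >= vcpus_needed:
            pool["free_vcpus"] -= vcpus_needed
            return f"{workload} -> {pool['name']}"
    return f"{workload} -> rejected"

for workload, need in [("billing", 8), ("analytics", 6), ("peak-web", 12)]:
    print(place(workload, need))
# billing and analytics fit in the private cloud; peak-web bursts to public
```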

In turn, this yields tremendous savings by reducing the number of servers you run. The ROI has been incredible for companies that have switched to Virtualization: as a trend, most see a return on their investment in as little as three to six months.
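
To see how that payback math can work, here’s a back-of-the-envelope calculation. Every number below is hypothetical; plug in your own:

```python
# Hypothetical consolidation scenario, for illustration only.
servers_before = 20
consolidation_ratio = 10            # VMs hosted per physical machine
cost_per_server_year = 4_000        # power, cooling, maintenance, licenses
virtualization_investment = 30_000  # hypervisor licenses, hosts, migration

hosts_after = -(-servers_before // consolidation_ratio)  # ceiling division
annual_savings = (servers_before - hosts_after) * cost_per_server_year
payback_months = virtualization_investment / annual_savings * 12

print(f"Hosts after consolidation: {hosts_after}")     # 2
print(f"Annual savings: ${annual_savings:,}")          # $72,000
print(f"Payback period: {payback_months:.1f} months")  # 5.0 months
```

Under these assumptions, the payback lands right inside that three-to-six-month window.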