Cloud computing is the Internet-based delivery of storage, applications, and infrastructure. The underlying ideas have been around for many years, but today a company can buy or rent capacity for its daily operations instead of owning it outright. The cost savings from implementing a cloud system can be substantial, and pricing can easily be scaled up or down as needs change.
Uses of Cloud Computing
- Rapid Service
- Secure Service
- Satisfying User Experience
- Lower Costs
- Multi-User Access
- Development Platform
- Virtually Unlimited Storage
In essence, cloud computing is a construct that allows you to access applications that actually reside at a location other than your computer or other Internet-connected device; most often, this will be a distant datacenter.
There are many benefits to this. For instance, think about the last time you bought Microsoft Word and installed it on your organization’s computers. Either you ran around with a CD- or DVD-ROM and installed it on all the computers, or you set up your software distribution servers to automatically install the application on your machines. And every time Microsoft issued a service pack, you had to go around and install that pack, or you had to set up your software distribution servers to distribute it.
The beauty of cloud computing, as shown in the figure below, is that another company hosts your application. They handle the costs of servers, they manage the software updates, and, depending on how you craft your contract, you pay less for the service.
Cloud Components
A cloud computing solution is made up of several elements:
- Clients
- The Datacenter
- Distributed Servers
As shown in the figure below, these components make up the three parts of a cloud computing solution. Each element has a purpose and plays a specific role in delivering a functional cloud-based application.
Three components make up a cloud computing solution.
Clients – In a cloud computing architecture, clients are typically the computers that sit on your desk. But they might also be laptops, tablet computers, mobile phones, or PDAs, all big drivers for cloud computing because of their mobility. Client computers are connected to each other through a local area network (LAN). In any case, clients are the devices that end users interact with to manage their information on the cloud. Clients generally fall into three categories:
- Mobile – Mobile devices include PDAs or smartphones, like a BlackBerry, Windows Mobile smartphone, or an iPhone.
- Thin – Thin clients are computers that do not have internal hard drives; they let the server do all the work and then display the results.
- Thick – This type of client is a regular computer, using a web browser like Firefox or Internet Explorer to connect to the cloud.
Thin clients are an increasingly popular solution because of their price and their lighter environmental footprint. Benefits of using thin clients include:
- Lower hardware costs – Thin clients are cheaper than thick clients because they do not contain as much hardware. They also last longer.
- Lower IT costs – Thin clients are managed at the server and there are fewer points of failure.
- Security – Since the processing takes place on the server and there is no hard drive, there’s less chance of malware invading the device. Also, since thin clients don’t work without a server, there’s less chance of them being physically stolen.
- Data security – Since data is stored on the server, there’s less chance for data to be lost if the client computer crashes or is stolen.
- Less power consumption – Thin clients consume less power than thick clients. This means you'll pay less to power them, and you'll also pay less to air-condition the office (see the rough cost sketch after this list).
- Ease of repair or replacement – If a thin client dies, it’s easy to replace. The box is simply swapped out and the user’s desktop returns exactly as it was before the failure.
- Less noise – Without a spinning hard drive, less heat is generated and quieter fans can be used on the thin client.
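To put a rough number on the power-consumption benefit above, here is a minimal back-of-the-envelope sketch in Python. The wattages, electricity rate, usage hours, and fleet size are all assumed values for illustration, not measurements.

```python
# Rough annual power-cost comparison for thin vs. thick clients.
# Every figure below is an illustrative assumption, not a measured value.

THIN_WATTS = 15            # assumed average draw of a thin client
THICK_WATTS = 150          # assumed average draw of a thick desktop
HOURS_PER_YEAR = 8 * 250   # 8-hour days, ~250 working days
RATE_PER_KWH = 0.12        # assumed electricity price, USD per kWh
FLEET_SIZE = 100           # assumed number of client machines

def annual_cost(watts: float) -> float:
    """Annual electricity cost for one device, in USD."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_PER_KWH

thin_total = FLEET_SIZE * annual_cost(THIN_WATTS)
thick_total = FLEET_SIZE * annual_cost(THICK_WATTS)
print(f"Thin clients:  ${thin_total:,.2f}/year")
print(f"Thick clients: ${thick_total:,.2f}/year")
print(f"Estimated saving: ${thick_total - thin_total:,.2f}/year")
```

Under these assumptions the thin-client fleet costs about a tenth as much to power, before even counting the reduced air-conditioning load.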
Datacenter – The datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room in the basement of your building or a room full of servers on the other side of the world that you access via the Internet.
A growing trend in the IT world is server virtualization. That is, software can be installed that allows multiple virtual servers to run on a single physical machine. In this way, you can have half a dozen virtual servers running on one physical server.
The number of virtual servers that can exist on a physical server depends on the size and speed of the physical server and on what applications will be running on the virtual servers.
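As a back-of-the-envelope illustration of that sizing question, the sketch below estimates how many identical virtual servers fit on one host. All resource figures, the hypervisor reserve, and the CPU overcommit ratio are assumptions chosen for illustration; real capacity planning depends heavily on the workload.

```python
# Estimate how many identical VMs fit on one physical host.
# All resource figures here are illustrative assumptions.

HOST_CORES = 32        # physical CPU cores on the host
HOST_RAM_GB = 256      # physical RAM on the host
HOST_RESERVE = 0.10    # fraction held back for the hypervisor itself

VM_CORES = 2           # vCPUs required per virtual server
VM_RAM_GB = 8          # RAM required per virtual server
CPU_OVERCOMMIT = 4.0   # vCPUs scheduled per physical core (workload-dependent)

usable_vcpus = HOST_CORES * (1 - HOST_RESERVE) * CPU_OVERCOMMIT
usable_ram = HOST_RAM_GB * (1 - HOST_RESERVE)

by_cpu = int(usable_vcpus // VM_CORES)
by_ram = int(usable_ram // VM_RAM_GB)
print(f"CPU allows {by_cpu} VMs, RAM allows {by_ram} VMs")
print(f"Host capacity: {min(by_cpu, by_ram)} VMs (limited by the scarcer resource)")
```

With these particular numbers the host is memory-bound: RAM runs out long before CPU does, which is a common outcome in practice.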
Distributed Servers – Amazon, for example, hosts its cloud solution on servers all over the world. If something were to happen at one site, causing a failure, the service could still be accessed through another site. Also, if the cloud needs more hardware, the provider need not add servers to the existing server room; it can add them at another site and simply make them part of the cloud.
Cloud Infrastructure Deployment – There are several different ways the infrastructure can be deployed. The deployment will depend on the application and on how the provider has chosen to build the cloud solution, and this flexibility is one of the key advantages of using the cloud. Your needs might be so massive that the number of servers required far exceeds your desire or budget to run those in-house. Alternatively, you may need only a sip of processing power, so you don't want to buy and run a dedicated server for the job. The cloud fits both needs.
Grid computing – Grid computing applies the resources of numerous computers in a network to a single problem at the same time, usually a scientific or technical problem.
Full virtualization – Full virtualization is a technique in which a complete installation of one machine (called the server) is run on another machine (called the client). The result is a system in which all software running on the server runs within a virtual machine.
In a fully virtualized deployment, the software running on the server is displayed on the clients. Virtualization is relevant to cloud computing because it is one of the ways in which you will access services on the cloud. That is, the remote datacenter may be delivering your services in a fully virtualized format. Full virtualization has been successful for several purposes:
- Sharing a computer system among multiple users
- Isolating users from each other and from the control program
- Emulating hardware on another machine
Paravirtualization – Paravirtualization allows multiple operating systems to run on a single hardware device at the same time by using system resources, like processors and memory, more efficiently. In full virtualization, the entire system is emulated (BIOS, drive, and so on), but in paravirtualization, the management module operates with an operating system that has been adjusted to work in a virtual machine. Paravirtualization typically performs better than the full virtualization model, simply because in a fully virtualized deployment all elements must be emulated.
Virtualization Technology
Virtualization is the process of converting a physical IT resource into a virtual IT resource. Most types of IT resources can be virtualized, including:
- Servers– A physical server can be abstracted into a virtual server.
- Storage– A physical storage device can be abstracted into a virtual storage device or a virtual disk.
- Network– Physical routers and switches can be abstracted into logical network fabrics, such as VLANs.
- Power– A physical UPS and power distribution units can be abstracted into what are commonly referred to as virtual UPSs.
This section focuses on the creation and deployment of virtual servers through server virtualization technology. The terms virtual server and virtual machine (VM) are used synonymously.
The first step in creating a new virtual server through virtualization software is the allocation of physical IT resources, followed by the installation of an operating system. Virtual servers use their own guest operating systems, which are independent of the operating system in which they were created.
Both the guest operating system and the application software running on the virtual server are unaware of the virtualization process, meaning these virtualized IT resources are installed and executed as if they were running on a separate physical server. This uniformity of execution that allows programs to run on physical systems as they would on virtual systems is a vital characteristic of virtualization. Guest operating systems typically require seamless usage of software products and applications that do not need to be customized, configured, or patched in order to run in a virtualized environment.
Virtualization software runs on a physical server called a host or physical host, whose underlying hardware is made accessible by the virtualization software. The virtualization software functionality encompasses system services that are specifically related to virtual machine management and not normally found on standard operating systems. This is why this software is sometimes referred to as a virtual machine manager or a virtual machine monitor (VMM), but most commonly known as a hypervisor.
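To make the hypervisor's role concrete, here is a minimal sketch that queries one through the libvirt Python bindings. It assumes the `libvirt` package is installed and that a QEMU/KVM host is reachable at the `qemu:///system` URI; adjust the URI for other environments.

```python
# Minimal sketch: ask a hypervisor about its host and virtual machines.
# Assumes the libvirt Python bindings and a local QEMU/KVM host.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    # Basic facts about the physical host underneath the VMM.
    model, mem_mb, cpus, mhz, *_ = conn.getInfo()
    print(f"Host: {cpus} CPUs ({model}, {mhz} MHz), {mem_mb} MB RAM")

    # Enumerate every virtual server the hypervisor knows about.
    for dom in conn.listAllDomains():
        state, max_mem_kb, _, vcpus, _ = dom.info()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"  VM {dom.name()}: {vcpus} vCPUs, {max_mem_kb // 1024} MB, {running}")
finally:
    conn.close()
```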
This section covers the following topics:
- Hardware Independence
- Server Consolidation
- Resource Replication
- Hardware-based and Operating System-based Virtualization
- Virtualization Operation and Management
- Technical and Business Considerations
Hardware Independence
The installation of an operating system’s configuration and application software in a unique IT hardware platform results in many software-hardware dependencies. In a non-virtualized environment, the operating system is configured for specific hardware models and requires reconfiguration if these IT resources need to be modified.
Virtualization is a conversion process that translates unique IT hardware into emulated and standardized software-based copies. Through hardware independence, virtual servers can easily be moved to another virtualization host, automatically resolving multiple hardware-software incompatibility issues. As a result, cloning and manipulating virtual IT resources is much easier than duplicating physical hardware.
Server Consolidation
The coordination function that is provided by the virtualization software allows multiple virtual servers to be created simultaneously on the same virtualization host. Virtualization technology enables different virtual servers to share one physical server. This process is called server consolidation and is commonly used to increase hardware utilization, balance loads, and optimize the use of available IT resources. The resulting flexibility is such that different virtual servers can run different guest operating systems on the same host.
These features directly support common cloud computing features, such as on-demand usage, resource pooling, elasticity, scalability, and resiliency.
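To make the consolidation idea concrete, the sketch below packs a set of virtual servers onto as few identical hosts as possible using a simple first-fit-decreasing heuristic keyed on RAM alone. The VM sizes and host capacity are assumed values; real consolidation tools also weigh CPU, I/O, and affinity rules.

```python
# First-fit-decreasing sketch of server consolidation:
# pack VMs (by RAM demand) onto as few equally sized hosts as possible.
# VM sizes and host capacity are illustrative assumptions.

HOST_RAM_GB = 64
vm_demands_gb = [24, 16, 16, 12, 8, 8, 4, 4, 2]  # per-VM RAM needs

hosts: list[list[int]] = []  # each host is the list of VM sizes placed on it

for vm in sorted(vm_demands_gb, reverse=True):
    for host in hosts:
        if sum(host) + vm <= HOST_RAM_GB:  # first host with room wins
            host.append(vm)
            break
    else:
        hosts.append([vm])  # no room anywhere: bring up a new host

for i, host in enumerate(hosts, 1):
    print(f"Host {i}: VMs {host} -> {sum(host)}/{HOST_RAM_GB} GB used")
print(f"{len(vm_demands_gb)} VMs consolidated onto {len(hosts)} hosts")
```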
Resource Replication
Virtual servers are created as virtual disk images that contain binary file copies of hard disk content. These virtual disk images are accessible to the host's operating system, meaning simple file operations, such as copy, move, and paste, can be used to replicate, migrate, and back up the virtual server (a small file-copy sketch follows this list). This ease of manipulation and replication is one of the most salient features of virtualization technology, as it enables:
- The creation of standardized virtual machine images commonly configured to include virtual hardware capabilities, guest operating systems, and additional application software, for pre-packaging in virtual disk images in support of instantaneous deployment.
- Increased agility in the migration and deployment of a virtual machine’s new instances by being able to rapidly scale out and up.
- The ability to roll back, which is the instantaneous creation of VM snapshots by saving the state of the virtual server’s memory and hard disk image to a host-based file. (Operators can easily revert to these snapshots and restore the virtual machine to its prior state.)
- The support of business continuity with efficient backup and restoration procedures, as well as the creation of multiple instances of critical IT resources and applications.
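Because a virtual server is ultimately just a set of files, even plain file operations can perform crude replication and backup, as described above. The sketch below clones a disk image and keeps timestamped backup copies; the paths and image name are hypothetical, and production setups would use hypervisor-level snapshot APIs on live disks rather than raw copies.

```python
# Crude VM replication/backup via plain file operations.
# Paths and image names are hypothetical; real deployments would snapshot
# a live disk through the hypervisor rather than copy it as a raw file.
import shutil
import time
from pathlib import Path

IMAGE = Path("/var/lib/vms/webserver.img")  # hypothetical virtual disk image
BACKUPS = Path("/var/backups/vms")

def clone_image(src: Path, new_name: str) -> Path:
    """Replicate a powered-off virtual disk image under a new name."""
    dst = src.with_name(new_name)
    shutil.copy2(src, dst)  # copy contents and metadata
    return dst

def snapshot_copy(src: Path) -> Path:
    """Keep a timestamped backup copy for later rollback."""
    BACKUPS.mkdir(parents=True, exist_ok=True)
    dst = BACKUPS / f"{src.stem}-{time.strftime('%Y%m%d-%H%M%S')}{src.suffix}"
    shutil.copy2(src, dst)
    return dst

if __name__ == "__main__":
    print("clone:", clone_image(IMAGE, "webserver-clone.img"))
    print("backup:", snapshot_copy(IMAGE))
```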
Hardware-based and Operating System-based Virtualization
Operating System-based Virtualization
Operating system-based virtualization is the installation of virtualization software in a pre-existing operating system, which is called the host operating system (Figure 1). For example, a user whose workstation has a specific version of Windows installed decides to generate virtual machines. The user installs the virtualization software into the host operating system like any other program and uses this application to generate and operate one or more virtual machines. The user relies on this virtualization software to gain direct access to any of the generated virtual machines. Since the host operating system can provide hardware devices with the necessary support, operating system-based virtualization can rectify hardware compatibility issues even if the hardware driver is unavailable to the virtualization software.
Hardware independence that is enabled by virtualization allows hardware IT resources to be more flexibly used. For example, let’s take a scenario in which the host operating system has the software necessary for controlling five network adapters that are available to the physical computer. The virtualization software can make the five network adapters available to the virtual machine, even if the virtualized operating system is usually incapable of physically housing five network adapters.
Figure 1 – The different logical layers of operating system-based virtualization, in which the virtualization software is first installed into a full host operating system and subsequently used to generate virtual machines.
Virtualization software translates hardware IT resources that require unique software for operation into virtualized IT resources that are compatible with a range of operating systems. Since the host operating system is a complete operating system in itself, many operating system-based services that are available as organizational management and administration tools can be used to manage the virtualization host.
Examples of such services include:
- Backup and Recovery
- Integration to Directory Services
- Security Management
Operating system-based virtualization can introduce demands and issues related to performance overhead, such as:
- The host operating system consumes CPU, memory, and other hardware IT resources.
- Hardware-related calls from guest operating systems need to traverse several layers to and from the hardware, which decreases overall performance.
- Licenses are usually required for host operating systems, in addition to individual licenses for each of their guest operating systems.
A concern with operating system-based virtualization is the processing overhead required to run the virtualization software and host operating systems. Implementing a virtualization layer will negatively affect overall system performance. Estimating, monitoring, and managing the resulting impact can be challenging because it requires expertise in system workloads, software and hardware environments, and sophisticated monitoring tools.
Hardware-based Virtualization
This option represents the installation of virtualization software directly on the virtualization host hardware so as to bypass the host operating system, which would presumably be engaged with operating system-based virtualization (Figure 2). Allowing the virtual machines to interact with hardware without requiring intermediary action from the host operating system generally makes hardware-based virtualization more efficient.
Figure 2 – The different logical layers of hardware-based virtualization, which does not require another host operating system.
Virtualization software is typically referred to as a hypervisor for this type of processing. A hypervisor has a simple user interface that requires a negligible amount of storage space. It exists as a thin layer of software that handles hardware management functions to establish a virtualization management layer. Device drivers and system services are optimized for the provisioning of virtual machines, although many standard operating system functions are not implemented. This type of virtualization system is essentially used to optimize performance overhead inherent to the coordination that enables multiple VMs to interact with the same hardware platform.
One of the main issues of hardware-based virtualization concerns compatibility with hardware devices. The virtualization layer is designed to communicate directly with the host hardware, meaning all of the associated device drivers and support software must be compatible with the hypervisor. Hardware device drivers may not be as available to hypervisor platforms as they are to more commonly used operating systems. Also, host management and administration features may not include the range of advanced functions that are common to operating systems.
Virtualization Operation and Management
Many administrative tasks can be performed more easily using virtual servers as opposed to their physical counterparts. Modern virtualization software provides several advanced management functions that can automate administration tasks and reduce the overall operational burden on virtualized IT resources.
Virtualized IT resource management is often supported by virtualization infrastructure management (VIM) tools that collectively manage virtual IT resources and rely on a centralized management module, otherwise known as a controller, that runs on a dedicated computer. VIMs are commonly encompassed by the resource management system mechanism.
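As a toy illustration of that centralized-controller idea, the sketch below has a controller poll several virtualization hosts and aggregate a simple inventory. The host names and the `host_inventory` helper are hypothetical stand-ins for whatever API a real VIM product exposes.

```python
# Toy VIM-style controller: poll each virtualization host and aggregate
# a VM inventory. Hosts and the per-host query are hypothetical stand-ins.

HOSTS = ["vhost-01.example.com", "vhost-02.example.com"]  # hypothetical hosts

def host_inventory(host: str) -> dict:
    """Placeholder for a real per-host query (e.g. libvirt or an HTTP API)."""
    canned = {  # canned data; a real controller would call the host here
        "vhost-01.example.com": {"running_vms": 12, "free_ram_gb": 48},
        "vhost-02.example.com": {"running_vms": 7, "free_ram_gb": 120},
    }
    return canned[host]

def controller_report(hosts: list[str]) -> None:
    total_vms = 0
    for host in hosts:
        inv = host_inventory(host)
        total_vms += inv["running_vms"]
        print(f"{host}: {inv['running_vms']} VMs, {inv['free_ram_gb']} GB free")
    print(f"Controller view: {total_vms} VMs across {len(hosts)} hosts")

controller_report(HOSTS)
```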
Technical and Business Considerations
Performance Overhead
Virtualization may not be ideal for complex systems that have high workloads with little use for resource sharing and replication. A poorly formulated virtualization plan can result in excessive performance overhead. A common strategy used to rectify the overhead issue is paravirtualization, which presents a software interface to the virtual machines that is not identical to that of the underlying hardware. The software interface has instead been modified to reduce the guest operating system's processing overhead, which is otherwise difficult to manage. A major drawback of this approach is the need to adapt the guest operating system to the paravirtualization API, which can impair the use of standard guest operating systems while decreasing solution portability.
Special Hardware Compatibility
Many hardware vendors that distribute specialized hardware may not have device driver versions that are compatible with virtualization software. Conversely, the software itself may be incompatible with recently released hardware versions. These types of incompatibility issues can be resolved using established commodity hardware platforms and mature virtualization software products.
Portability
The programmatic and management interfaces that establish administration environments for a virtualization program to operate with various virtualization solutions can introduce portability gaps due to incompatibilities. Initiatives such as the Open Virtualization Format (OVF) for the standardization of virtual disk image formats are dedicated to alleviating this concern.