
What is the cornerstone of public cloud computing infrastructure? (Private cloud computing infrastructure)

What are the main technologies of public cloud computing infrastructure?

Cloud computing systems use many technologies, among which the programming model, data management technology, data storage technology, virtualization technology, and cloud computing platform management technology are the most critical.

(1) Programming model

MapReduce is a programming model developed by Google, with implementations available in Java, Python, and C++. It is a simplified distributed programming model and an efficient task-scheduling model for parallel operations on large-scale data sets (larger than 1 TB). The strict programming model makes programming in a cloud computing environment very simple. The idea of the MapReduce model is to decompose the problem to be executed into Map (mapping) and Reduce (reduction) steps: first, the data is cut into independent blocks by the Map program and allocated (scheduled) to a large number of computers for processing, achieving distributed computation; the results are then summarized and output by the Reduce program.
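To make the Map/Reduce decomposition concrete, here is a minimal word-count sketch in plain Python (an illustration of the model, not Google's actual implementation): the map step turns each text block into key/value pairs, the pairs are grouped by key (the "shuffle"), and the reduce step aggregates each group.

```python
from collections import defaultdict

def map_phase(block):
    """Map: turn one block of text into (word, 1) pairs."""
    return [(word, 1) for word in block.split()]

def reduce_phase(word, counts):
    """Reduce: aggregate all counts emitted for one word."""
    return word, sum(counts)

def map_reduce(blocks):
    # Map step: each block is processed independently, so in a real cluster
    # each block could be dispatched to a different machine.
    intermediate = []
    for block in blocks:
        intermediate.extend(map_phase(block))

    # Shuffle step: group the intermediate pairs by key.
    grouped = defaultdict(list)
    for word, count in intermediate:
        grouped[word].append(count)

    # Reduce step: summarize each group and output the result.
    return dict(reduce_phase(word, counts) for word, counts in grouped.items())

blocks = ["the cloud stores data", "the cloud computes data"]
print(map_reduce(blocks))
# {'the': 2, 'cloud': 2, 'stores': 1, 'data': 2, 'computes': 1}
```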

(2) Massive data distributed storage technology

A cloud computing system consists of a large number of servers and serves a large number of users at the same time. It therefore stores data with distributed storage and uses redundant storage to ensure data reliability. The data storage systems widely used in cloud computing are Google's GFS and HDFS, the open-source implementation of GFS developed by the Hadoop team.
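As an illustration of the redundant-storage idea (not the actual GFS or HDFS API), the toy sketch below writes each block to several replica locations and falls back to a surviving copy on read; names such as replica_dirs and the /tmp paths are invented for this example.

```python
import os

def write_block(block_id, data, replica_dirs):
    """Write the same block to every replica location (redundant storage)."""
    for d in replica_dirs:
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, block_id), "wb") as f:
            f.write(data)

def read_block(block_id, replica_dirs):
    """Read the block from the first replica that is still available."""
    for d in replica_dirs:
        try:
            with open(os.path.join(d, block_id), "rb") as f:
                return f.read()
        except OSError:
            continue  # this replica is lost or unreachable; try the next one
    raise IOError(f"all replicas of block {block_id} are lost")

replicas = ["/tmp/node1", "/tmp/node2", "/tmp/node3"]  # stand-ins for three servers
write_block("blk_0001", b"hello cloud", replicas)
print(read_block("blk_0001", replicas))  # b'hello cloud'
```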

(3) Massive data management technology

Cloud computing requires the processing and analysis of distributed, massive data sets, so its data management technology must be able to manage large amounts of data efficiently. The data management technologies in cloud computing systems are mainly Google's BigTable (BT) and HBase, the open-source data management module developed by the Hadoop team.
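BigTable and HBase organize data as a sparse, sorted map indexed by row key, column family:qualifier, and timestamp. The toy sketch below mimics that data model in plain Python dictionaries; it only illustrates the structure and is not the HBase client API.

```python
import time
from collections import defaultdict

class TinyBigTable:
    """Toy BigTable-style table:
    row key -> "family:qualifier" -> list of (timestamp, value) versions."""

    def __init__(self):
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row_key, column, value, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self.rows[row_key][column].append((ts, value))

    def get(self, row_key, column):
        """Return the most recent version of a cell."""
        versions = self.rows[row_key][column]
        return max(versions)[1] if versions else None

table = TinyBigTable()
table.put("com.example.www", "contents:html", "<html>v1</html>", timestamp=1)
table.put("com.example.www", "contents:html", "<html>v2</html>", timestamp=2)
print(table.get("com.example.www", "contents:html"))  # <html>v2</html>
```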

(4) Virtualization technology

Virtualization technology isolates software applications from the underlying hardware. It includes a split mode that divides a single physical resource into multiple virtual resources, and an aggregation mode that integrates multiple physical resources into one virtual resource. By target object, virtualization can be divided into storage virtualization, computing virtualization, network virtualization, and so on; computing virtualization is further divided into system-level virtualization, application-level virtualization, and desktop virtualization.
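The split and aggregation modes can be sketched as two simple resource mappings (a conceptual toy, not a hypervisor API): one physical host is split into several virtual machines, while several physical disks are aggregated into one virtual volume.

```python
def split_host(total_cpus, total_mem_gb, vm_count):
    """Split mode: divide one physical host into vm_count equal virtual machines."""
    return [{"vcpus": total_cpus // vm_count, "mem_gb": total_mem_gb // vm_count}
            for _ in range(vm_count)]

def aggregate_disks(disk_sizes_gb):
    """Aggregation mode: present many physical disks as one virtual volume."""
    return {"virtual_volume_gb": sum(disk_sizes_gb),
            "backing_disks": len(disk_sizes_gb)}

print(split_host(total_cpus=32, total_mem_gb=128, vm_count=4))
# four identical virtual machines, each with 8 vCPUs and 32 GB of memory
print(aggregate_disks([500, 500, 1000]))
# {'virtual_volume_gb': 2000, 'backing_disks': 3}
```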

(5) Cloud computing platform management technology

Cloud computing resources are huge in scale: a large number of servers are distributed across different locations, and hundreds of applications run at the same time. Effectively managing these servers and ensuring that the entire system provides uninterrupted service is a huge challenge.

The platform management technology of a cloud computing system enables a large number of servers to work together, facilitates business deployment and activation, quickly discovers and recovers from system faults, and achieves reliable operation of large-scale systems through automated, intelligent means.
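A core building block of such platform management is automated health checking with recovery. The sketch below is a simplified illustration with invented service names and stubbed check_alive/restart functions; it polls each service and restarts the ones that have failed, which is roughly how "quickly discover and recover system faults" is automated.

```python
import time

def check_alive(service):
    """Placeholder health probe; a real platform would call the service's health endpoint."""
    return service.get("healthy", True)

def restart(service):
    """Placeholder recovery action; a real platform would redeploy or restart the instance."""
    print(f"restarting {service['name']} ...")
    service["healthy"] = True

def monitor(services, interval_s=5, rounds=1):
    """Poll every service; restart any that fail the health check."""
    for _ in range(rounds):
        for service in services:
            if not check_alive(service):
                restart(service)
        time.sleep(interval_s)

services = [{"name": "web-frontend", "healthy": True},
            {"name": "billing-api", "healthy": False}]
monitor(services, interval_s=0, rounds=1)  # prints: restarting billing-api ...
```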

Why distribution is the cornerstone of public cloud computing infrastructure
High-quality hardware equipment and reasonable facility distribution layout are the cornerstones of private cloud computing infrastructure.
Understand everything about your private cloud infrastructure. Not all workloads are suitable for virtualized environments, and likewise not all workloads are suitable for a private cloud environment. When executing a cloud strategy, you will most likely be managing a hybrid environment of physical, virtual, and cloud resources, so you need to carve out a portion of your data center into a pool of shared, virtualized, and scalable resources. Many IT executives plan to put 30-50% or more of their workloads into private cloud environments; however, an environment that manages private cloud resources will still contain physical servers and mainframes, as well as some static virtualized resources. In real estate terms, building a cloud-centric data center in the future is not about tearing it down and rebuilding it, but about overhauling it. To do this, you need to understand the nature of the current workloads, delineate the scope of heterogeneous mixing in the current environment, and understand how requirements change as you move from development to test/QA to production.
Determine the target workloads for your private cloud environment. Current workloads need to be evaluated to determine which are best placed in a private cloud environment. This snapshot will be used to set long-term goals and to determine what percentage of the overall workload should be moved into the private cloud; it will also be used to determine the workloads for the initial cloud deployment.