Quantum runs several specific services, called agents, that act as backends for processing different kinds of requests and providing different kinds of features.
For example, the L3 agent handles routing configuration and provides floating IP support. To make a Quantum setup more durable, you need several agents of each type running on different hosts. However, for that to work, the Quantum server must be able to schedule requests properly to avoid conflicts in network configuration. This assumes that a cloud provider has created the following types of nodes.
Each compute and network node must run a Quantum L2 agent, which provides the basis for creating the L2 networks used by VM instances. A network node usually provides a DHCP server for each of the private networks, which VM instances use to receive their network configuration parameters. To avoid making such a node a single point of failure, one can run multiple copies of the DHCP agent on different nodes, or even set up a few distinct network nodes to ensure high availability of L3 services.
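To make the high-availability idea concrete, here is a minimal Python sketch, not taken from the Quantum code base, of what hosting each private network on more than one DHCP agent could look like; the Agent class, the schedule_dhcp function, and the choice of two copies per network are all assumptions made for illustration.

```python
from collections import defaultdict

class Agent:
    """Stand-in for a DHCP agent running on a particular host (illustrative only)."""
    def __init__(self, host):
        self.host = host
        self.networks = set()

def schedule_dhcp(networks, dhcp_agents, copies_per_network=2):
    """Assign every network to several distinct agents so that losing one
    network node does not leave instances without DHCP."""
    assignments = defaultdict(list)
    for net in networks:
        # Prefer the least loaded agents so the copies spread evenly across nodes.
        least_loaded = sorted(dhcp_agents, key=lambda a: len(a.networks))
        for agent in least_loaded[:copies_per_network]:
            agent.networks.add(net)
            assignments[net].append(agent.host)
    return dict(assignments)

agents = [Agent("network-node-%d" % i) for i in range(1, 4)]
print(schedule_dhcp(["private-a", "private-b", "private-c"], agents))
```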
But this solution has several drawbacks. Among them, the lack of an agent scheduler limits the scalability of Quantum services, as there is no way to distribute the networks and routers to be hosted between different nodes running the corresponding Quantum agents. For example, if there is only one running L3 agent, then the Internet access bandwidth of the whole cluster is limited by the external network interface bandwidth of that network node. The key difference between a typical networking setup in Folsom (Figure 1) and a typical networking setup in Quantum Grizzly (Figure 2) is that it is now possible to use several nodes running DHCP and L3 agents, either to schedule the networks and routers being created across them or simply to improve the reliability of those services by hosting objects redundantly on several nodes.
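As a rough illustration of what an agent scheduler buys here, the sketch below (all names invented; this is not the Grizzly scheduler implementation) assigns each new router to whichever L3 agent currently hosts the fewest routers, so external traffic is spread across every network node instead of funneling through one.

```python
from collections import Counter

def pick_l3_agent(router_id, l3_agents, hosting):
    """Place a router on the least loaded L3 agent (illustrative least-loaded policy)."""
    load = Counter({agent: 0 for agent in l3_agents})
    load.update(hosting.values())      # count routers already placed on each agent
    agent = min(load, key=load.get)
    hosting[router_id] = agent
    return agent

hosting = {}
agents = ["l3-agent@net-node-1", "l3-agent@net-node-2"]
for router in ["router-a", "router-b", "router-c"]:
    print(router, "->", pick_l3_agent(router, agents, hosting))
```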
The specification defines a component object that represents a Quantum agent, a Quantum server, or any advanced service. By design, every component is responsible for reporting its state and location to the Quantum server. In effect, the definition of a component provides a unified way for the Quantum server and the various Quantum components to communicate. Although some components, such as the DHCP agent, can serve a single network without any request scheduling, most components require scheduling to work properly and to avoid configuration conflicts.
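A minimal sketch, assuming nothing about the actual Quantum RPC classes, of what "reporting state and location to the server" amounts to: each component periodically sends a small report, and the server keeps only the latest one per component so that it can tell which agents are still alive when scheduling work.

```python
import time

REPORT_TIMEOUT = 15  # seconds without a report before a component is treated as dead (illustrative value)

class SchedulerState:
    """Server-side bookkeeping of component reports (names invented for this sketch)."""
    def __init__(self):
        self.latest_report = {}  # component id -> (host, timestamp)

    def handle_report(self, component_id, host):
        # In Quantum these reports arrive asynchronously over the message queue;
        # here we simply overwrite the previous entry with the newest one.
        self.latest_report[component_id] = (host, time.time())

    def live_components(self):
        now = time.time()
        return [cid for cid, (_, ts) in self.latest_report.items()
                if now - ts < REPORT_TIMEOUT]

state = SchedulerState()
state.handle_report("dhcp-agent-1", "network-node-1")
state.handle_report("l3-agent-1", "network-node-2")
print(state.live_components())
```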
Heartbeats, messages that every component sends to the Quantum server periodically, are the cornerstone of the scheduling mechanism in Quantum. Heartbeats are sent asynchronously over the message queue and then handled by the Quantum server, which stores the latest heartbeat information. Several network type drivers are supported.
Several vendor-specific mechanism drivers are supported as well. To get code, ask questions, view blueprints, etc., see the Neutron Launchpad Page. See NeutronDevelopment for some rough guides on how to contribute code to Neutron, including how to add your own plugin. Check out NeutronStarterBugs for ideas on easy bugs or starter projects you might tackle. Or just start playing with NeutronDevstack and come up with your own ideas!
OpenStack Quantum
OpenStack, an open source software initiative for building clouds, has a network connectivity project named Quantum (see the project page here).
How does OpenFlow relate to Quantum? The Quantum API will be extensible, allowing plugins to introduce new logical network abstractions, for example to provide more advanced security or QoS policies. Extensions provide an avenue for innovation by allowing new functionality to be exposed. The goal is that API extensions that prove valuable will be adopted by multiple plugins and can then be standardized.
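The following Python sketch illustrates the extension idea described above; the attribute and method names are chosen for illustration and are not guaranteed to match the Quantum plugin interface. A plugin advertises which optional extensions it supports, and callers rely on the extra abstractions only when the plugin lists them.

```python
class BasePlugin:
    """A plugin that implements only the core API (illustrative)."""
    supported_extension_aliases = []

    def create_network(self, name):
        return {"name": name}

class QosCapablePlugin(BasePlugin):
    """A plugin that additionally advertises a hypothetical QoS extension."""
    supported_extension_aliases = ["qos"]

    def create_qos_policy(self, max_kbps):
        return {"type": "qos-policy", "max_kbps": max_kbps}

def create_policy_if_supported(plugin, max_kbps):
    # Callers check the advertised extensions before using the extra abstraction.
    if "qos" in plugin.supported_extension_aliases:
        return plugin.create_qos_policy(max_kbps)
    return None  # extension not available on this plugin

print(create_policy_if_supported(QosCapablePlugin(), 10000))
print(create_policy_if_supported(BasePlugin(), 10000))
```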
A key output of the summit was splitting up the large space of "OpenStack network-related functionality" into more well-defined efforts, which were either specifically targeted "building block" services or "orchestration" services that combine building blocks into higher-level abstractions. Building block services have a well-defined scope. This keeps each building block service simple and gives OpenStack operators the ability to mix and match different technologies for different building blocks (though it is always possible that a single vendor solution implements multiple services).