In the initial version of the resource provider schema in the placement API, we stuck with a simple world-view in which resource providers could be related to each other only via an aggregate relationship. In other words, a resource provider “X” could provide shared resources to a set of other resource providers “S” if and only if “X” was associated with an aggregate “A” that all members of “S” were also associated with.

This relationship works perfectly well for things like shared storage or IP pools. However, certain classes of resource require a parent->child relationship rather than the many-to-many relationship that the aggregate association offers. Two examples where a parent->child relationship is more appropriate are handling VCPU/MEMORY_MB resources on the NUMA nodes of a compute host and handling SRIOV_NET_VF resources for the NICs on a compute host.

In the case of NUMA nodes, the system must be able to track how many VCPU and MEMORY_MB have been allocated from each individual NUMA node on the host. Allocating memory to a guest such that the memory spans address space across two banks of DIMMs attached to different NUMA nodes results in sub-optimal performance, and for certain high-performance guest workloads this penalty is not acceptable.
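The aggregate rule described in the first paragraph can be sketched as a simple membership check. This is only an illustrative model, not the placement API's actual data model; the `ResourceProvider` class and `shares_with` helper here are hypothetical names used for the example.

```python
from dataclasses import dataclass, field


@dataclass
class ResourceProvider:
    """A provider of some inventory, tagged with the aggregates it belongs to."""
    name: str
    inventory: dict = field(default_factory=dict)   # resource class -> capacity
    aggregates: set = field(default_factory=set)    # aggregate identifiers


def shares_with(shared: ResourceProvider, consumer: ResourceProvider) -> bool:
    """X may share resources with S iff they have at least one aggregate in common."""
    return bool(shared.aggregates & consumer.aggregates)


# A shared storage pool associated with aggregate "agg-A".
nfs_pool = ResourceProvider("shared-nfs", {"DISK_GB": 10000}, {"agg-A"})

# Two compute nodes, only one of which is in the same aggregate.
compute1 = ResourceProvider("compute1", {"VCPU": 32, "MEMORY_MB": 65536}, {"agg-A"})
compute2 = ResourceProvider("compute2", {"VCPU": 32, "MEMORY_MB": 65536}, {"agg-B"})

print(shares_with(nfs_pool, compute1))  # True  -> compute1 can consume DISK_GB
print(shares_with(nfs_pool, compute2))  # False -> compute2 cannot
```

The relationship is purely many-to-many: any number of providers can join any number of aggregates, and nothing in it says which provider "contains" which.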
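By contrast, tracking per-NUMA-node VCPU and MEMORY_MB consumption calls for a tree of providers rooted at the compute host. The sketch below is again a simplified, hypothetical model (the `Provider` class and its methods are invented for illustration): each NUMA node is a child provider with its own inventory, and a guest is placed only on a node that can satisfy the whole request, so its memory never spans DIMM banks attached to different nodes.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Provider:
    """A resource provider that may be nested under a parent provider."""
    name: str
    inventory: dict = field(default_factory=dict)   # resource class -> capacity
    used: dict = field(default_factory=dict)        # resource class -> allocated
    parent: Optional["Provider"] = None
    children: list = field(default_factory=list)

    def add_child(self, child: "Provider") -> "Provider":
        child.parent = self
        self.children.append(child)
        return child

    def can_fit(self, request: dict) -> bool:
        """True if this provider alone can satisfy every requested amount."""
        return all(
            self.used.get(rc, 0) + amount <= self.inventory.get(rc, 0)
            for rc, amount in request.items()
        )

    def allocate(self, request: dict) -> bool:
        """Claim the request against this provider only; no spanning siblings."""
        if not self.can_fit(request):
            return False
        for rc, amount in request.items():
            self.used[rc] = self.used.get(rc, 0) + amount
        return True


# The compute host is the root; each NUMA node is a child provider with
# its own VCPU and MEMORY_MB inventory.
host = Provider("compute-host")
numa0 = host.add_child(Provider("numa0", {"VCPU": 8, "MEMORY_MB": 32768}))
numa1 = host.add_child(Provider("numa1", {"VCPU": 8, "MEMORY_MB": 32768}))

# Place a guest on a single NUMA node so its memory stays local to one node.
request = {"VCPU": 4, "MEMORY_MB": 16384}
target = next((n for n in host.children if n.can_fit(request)), None)
print(target.name if target and target.allocate(request) else "no single node fits")
```

The same tree shape applies to SRIOV_NET_VF: each physical NIC would be a child provider whose VF inventory is consumed independently of its siblings.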