Overview of Single Root I/O Virtualization
The single root I/O virtualization (SR-IOV) interface is an extension to the PCI Express (PCIe) specification. SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions. These functions consist of the following types:
• A PCIe Physical Function (PF). This function is the primary function of the
device and advertises the device's SR-IOV capabilities. The PF is associated with the Hyper-V parent partition in a virtualized environment.
• One or more PCIe Virtual Functions (VFs). Each VF is associated with the
device's PF. A VF shares one or more physical resources of the device, such as a memory and a network port, with the PF and other VFs on the device. Each VF is associated with a Hyper-V child partition in a virtualized environment.
Each PF and VF is assigned a unique PCI Express Requester ID (RID) that allows an I/O memory management unit (IOMMU) to differentiate between different traffic streams and apply memory and interrupt translations between the PF and VFs. This allows traffic streams to be delivered directly to the appropriate Hyper-V parent or child partition. As a result, nonprivileged data traffic flows from the PF to VF without affecting other VFs.
SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack. Because the VF is assigned to a child partition, the
network traffic flows directly between the VF and the child partition. As a result, the I/O overhead in the software emulation layer is reduced, and network performance is nearly the same as in nonvirtualized environments.
For more information, see the following topics:
SR-IOV Architecture
SR-IOV Data Paths
SR-IOV Architecture
This section provides a brief overview of the single root I/O virtualization (SR-IOV) interface and its components.
The following figure shows the components of the SR-IOV interface starting with Windows Server Developer Preview.
The SR-IOV interface consists of the following components:
Hyper-V Extensible Switch Module
The extensible switch module configures the NIC switch on the SR-IOV network adapter to provide network connectivity to the Hyper-V child partitions.
Note Hyper-V child partitions are known as virtual machines (VMs).
If the child partitions are connected to a PCI Express (PCIe) Virtual Function (VF), the extensible switch module does not participate in data traffic between the VM and the network adapter. Instead, data traffic is passed directly between the VM and the VF to which it is attached.
For more information about the extensible switch, see Hyper-V Extensible
Switch.
Physical Function (PF)
The PF is a PCI Express (PCIe) function of a network adapter that supports the SR-IOV interface. The PF includes the SR-IOV Extended Capability in the PCIe Configuration space. The capability is used to configure and manage the SR-IOV functionality of the network adapter, such as enabling virtualization and exposing VFs.
For more information, see SR-IOV Physical Function (PF).
PF Miniport Driver
The PF miniport driver is responsible for managing resources on the network adapter that are used by one or more VFs. Because of this, the PF miniport driver is loaded in the management operating system before any resources are allocated for a VF. The PF miniport driver is halted after all resources that were allocated for VFs are freed.
For more information, see Writing SR-IOV PF Miniport Drivers.
Virtual Function (VF)
A VF is a lightweight PCIe function on a network adapter that supports the SR-IOV interface. The VF is associated with the PF on the network adapter, and
represents a virtualized instance of the network adapter. Each VF has its own PCI Configuration space. Each VF also shares one or more physical resources on the network adapter, such as an external network port, with the PF and other VFs.
For more information, see SR-IOV Virtual Functions (VFs).
VF Miniport Driver
The VF miniport driver is installed in the VM to manage the VF. Any operation that is performed by the VF miniport driver must not affect any other VF or the PF on the same network adapter.
For more information, see Writing SR-IOV VF Miniport Drivers.
Network Interface Card (NIC) Switch
The NIC switch is a hardware component of the network adapter that supports the SR-IOV interface. The NIC switch forwards network traffic between the physical port on the adapter and internal virtual ports (VPorts). Each VPort is attached to either the PF or a VF.
For more information, see NIC Switches.
Virtual Ports (VPorts)
A VPort is a data object that represents an internal port on the NIC switch of a
network adapter that supports the SR-IOV interface. Similar to a port on a physical switch, a VPort on the NIC switch delivers packets to and from a PF or VF to which the port is attached.
For more information, see NIC Switches.
Physical Port
The physical port is a hardware component of the network adapter that supports the SR-IOV interface. The physical port provides the interface on the adapter to the external networking medium.
SR-IOV Physical Function (PF)
The Physical Function (PF) is a PCI Express (PCIe) function of a network adapter that supports the single root I/O virtualization (SR-IOV) interface. The PF includes the SR-IOV Extended Capability in the PCIe Configuration space. The capability is used to configure and manage the SR-IOV functionality of the network adapter, such as enabling virtualization and exposing PCIe Virtual Functions (VFs).
The PF is exposed as a virtual network adapter in the management operating system of the Hyper-V parent partition. The PF miniport driver is an NDIS miniport driver that manages the PF in the management operating system. The configuration and provisioning of the VFs, together with other hardware and software resources for the support of VFs, is performed through the PF miniport driver. The PF miniport driver uses the traditional NDIS miniport driver functionality to provide access to the networking I/O resources to the
management operating system. The PF driver is also used as a way to manage the resources allocated on the adapter for the VFs.
The PF supports the SR-IOV Extended Capability structure in its PCIe configuration space. This structure is defined in the PCI-SIG Single Root I/O Virtualization and Sharing 1.1 specification. This structure includes the following members:
TotalVFs
A read-only field that specifies the maximum number of VFs that can be associated with the PF.
NumVFs
A read-write field that specifies the current number of VFs that are available on the SR-IOV network adapter.
SR-IOV Control
A read-write field that specifies various control bits that enable or disable SR-IOV functionality on the network adapter. For example, if the VF Enable bit is set to one, VFs can be associated with the PF on the adapter. If this bit is set to zero, VFs are disabled and not visible on the adapter.
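For illustration, the following is a minimal C sketch of the beginning of the SR-IOV Extended Capability layout as defined by the PCI-SIG Single Root I/O Virtualization and Sharing 1.1 specification. The structure and macro names are chosen here for illustration and are not Windows header symbols; verify the offsets against the specification before relying on them.

    #include <stdint.h>

    /* Sketch of the start of the SR-IOV Extended Capability in PCIe
       configuration space (offsets per the PCI-SIG SR-IOV 1.1 spec).
       Names are illustrative and are not taken from any Windows header. */
    #pragma pack(push, 1)
    typedef struct _SRIOV_EXTENDED_CAPABILITY {
        uint32_t ExtendedCapabilityHeader;  /* 0x00: capability ID 0x0010 */
        uint32_t SriovCapabilities;         /* 0x04 */
        uint16_t SriovControl;              /* 0x08: bit 0 = VF Enable */
        uint16_t SriovStatus;               /* 0x0A */
        uint16_t InitialVFs;                /* 0x0C: read-only */
        uint16_t TotalVFs;                  /* 0x0E: read-only maximum VF count */
        uint16_t NumVFs;                    /* 0x10: read-write, VFs currently enabled */
        uint16_t FunctionDependencyLink;    /* 0x12 */
        uint16_t FirstVFOffset;             /* 0x14 */
        uint16_t VFStride;                  /* 0x16 */
        /* Remaining fields (VF Device ID, page sizes, VF BARs) omitted. */
    } SRIOV_EXTENDED_CAPABILITY;
    #pragma pack(pop)

    /* VF Enable bit in the SR-IOV Control field; setting it to one exposes
       the VFs, setting it to zero disables them. */
    #define SRIOV_CONTROL_VF_ENABLE 0x0001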
The PF also provides the mechanism for the management operating system to communicate with the external physical network. The PF provides network connectivity to all the virtual network adapters that are connected to the Hyper-V extensible switch module. This includes the following:
• Virtual network adapters that provide network connectivity to the Hyper-V
parent partition.
• Virtual network adapters that provide network connectivity to the Hyper-V
child partitions that do not have VFs allocated to them.
The PF miniport driver is responsible for managing resources on the network adapter that are used by one or more VFs. Because of this, the PF miniport driver is loaded in the management operating system before any resources are allocated for a VF. The PF miniport driver is halted after all resources that were allocated for VFs are freed.
SR-IOV Virtual Functions (VFs)
A PCI Express (PCIe) Virtual Function (VF) is a lightweight PCIe function on a network adapter that supports single root I/O virtualization (SR-IOV). The VF is associated with the PCIe Physical Function (PF) on the network adapter, and represents a virtualized instance of the network adapter. Each VF has its own PCI Configuration space. Each VF also shares one or more physical resources on the network adapter, such as an external network port, with the PF and other VFs.
A VF is not a full-fledged PCIe device. However, it provides a basic mechanism for directly transferring data between a Hyper-V child partition and the underlying SR-IOV network adapter. Software resources associated with data transfer are directly available to the VF and are isolated from use by the other VFs or the PF. However, the configuration of most of these resources is performed by the PF miniport driver that runs in the management operating system of the Hyper-V parent partition.
A VF is exposed as a virtual network adapter (VF network adapter) in the guest operating system that runs in a Hyper-V child partition. After the VF is associated with a virtual port (VPort) on the NIC switch of the SR-IOV network adapter, the virtual PCI (VPCI) driver that runs in the VM exposes the VF network adapter. Once exposed, the PnP manager in the guest operating system loads the VF miniport driver.
Note A Hyper-V child partition is also known as a virtual machine (VM).
The VF miniport driver is an NDIS miniport driver that is installed in the VM to
manage the VF. Any operation that is performed by the VF miniport driver must not affect any other VF or the PF on the same network adapter.
The VF miniport driver can function like any PCI device driver. It can read and write to the VF's PCI configuration space. However, access to the virtual PCI device is a privileged operation and is managed by the PF miniport driver in the following way:
• When the VF miniport driver calls NdisMGetBusData to read data from the
PCI configuration space of the VF network adapter, the virtualization stack is notified. This stack runs in the management operating system of the Hyper-V parent partition. When the stack is notified of the read request, it issues an object identifier (OID) method request of OID_SRIOV_READ_VF_CONFIG_SPACE to the PF miniport driver. The data to be read is specified in
an NDIS_SRIOV_READ_VF_CONFIG_SPACE_PARAMETERS structure that is contained in the OID request.
The driver reads the requested data from the VF PCI configuration space and returns the data by completing the OID request. This data is then returned to the VF miniport driver when the call to NdisMGetBusData completes.
• When the VF miniport driver calls NdisMSetBusData to write data to the PCI
configuration space of the VF network adapter, the virtualization stack is notified of the write request. It issues an OID method request of OID_SRIOV_WRITE_VF_CONFIG_SPACE to the PF miniport driver. The data to be written is specified in an NDIS_SRIOV_WRITE_VF_CONFIG_SPACE_PARAMETERS structure that is contained in the OID request.
The driver writes the data to the VF PCI configuration space and returns the status of the request when it completes the OID request. This status is returned to the VF miniport driver after the call to NdisMSetBusData completes. A sketch of how the PF miniport driver might handle these configuration-space requests appears after this list.
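The following is a minimal sketch of how a PF miniport driver might service the read request described above. It assumes that the NDIS_SRIOV_READ_VF_CONFIG_SPACE_PARAMETERS structure carries VFId, Offset, Length, and BufferOffset members, and it uses a hypothetical adapter-specific helper named HwReadVfConfigSpace; verify the member names against the NDIS headers before use.

    #include <ndis.h>

    typedef struct _ADAPTER ADAPTER, *PADAPTER;     /* driver-defined context */

    /* Hypothetical adapter-specific routine that reads the VF PCI
       configuration space through a hardware-dependent mechanism. */
    BOOLEAN HwReadVfConfigSpace(PADAPTER Adapter, ULONG VfId, ULONG Offset,
                                PUCHAR Buffer, ULONG Length);

    /* Sketch: servicing OID_SRIOV_READ_VF_CONFIG_SPACE (a method request)
       in the PF miniport driver. */
    NDIS_STATUS
    PfHandleReadVfConfigSpace(
        _In_ PADAPTER Adapter,
        _Inout_ PNDIS_OID_REQUEST OidRequest)
    {
        PNDIS_SRIOV_READ_VF_CONFIG_SPACE_PARAMETERS params =
            (PNDIS_SRIOV_READ_VF_CONFIG_SPACE_PARAMETERS)
                OidRequest->DATA.METHOD_INFORMATION.InformationBuffer;

        /* Assumption: the read data is returned at BufferOffset bytes from
           the start of the parameter structure. */
        PUCHAR buffer = (PUCHAR)params + params->BufferOffset;

        if (!HwReadVfConfigSpace(Adapter, params->VFId, params->Offset,
                                 buffer, params->Length))
        {
            return NDIS_STATUS_FAILURE;
        }

        OidRequest->DATA.METHOD_INFORMATION.BytesWritten =
            params->BufferOffset + params->Length;
        return NDIS_STATUS_SUCCESS;
    }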
The VF miniport driver may also communicate with the PF miniport driver. This communication path is over a backchannel interface. For more information, see SR-IOV PF/VF Backchannel Communication.
Note The VF miniport driver must be aware that it is running in a virtualized environment so that it can communicate with the PF miniport driver for certain operations. For more information on how the driver does this, see Initializing a VF Miniport Driver.
NIC Switches
A network adapter that supports single root I/O virtualization (SR-IOV) must implement a hardware bridge that forwards network traffic between the physical port on the adapter and internal virtual ports (VPorts). This bridge is known as the NIC switch and is shown in the following figure.
Each NIC switch contains the following components:
• One external, or physical, port that provides network connectivity to the
external physical network.
• One internal port that provides the PCI Express (PCIe) Physical Function (PF)
on the network adapter with access to the external physical network. An internal port is known as a virtual port (VPort).
The PF always has a VPort that is created and assigned to it. This VPort is known as the default VPort, and is referenced by the NDIS_DEFAULT_VPORT_ID identifier.
For more information about VPorts, see Virtual Ports (VPorts).
• One or more VPorts that provide a PCIe Virtual Function (VF) on the network
adapter with access to the external physical network.
Note Additional VPorts can be created and allocated to the PF for network access.
Note Starting with Windows Server Developer Preview, the SR-IOV interface supports only one NIC switch on the network adapter. This switch is known as the default NIC switch, and is referenced by the NDIS_DEFAULT_SWITCH_ID identifier.
The hardware resources for the NIC switch are managed by the PF miniport driver for the SR-IOV network adapter. The driver creates and configures the NIC switch through one of the following methods:
• Static creation based on standardized SR-IOV and NIC switch INF keywords.
For more information on these keywords, see Standardized INF Keywords for SR-IOV.
• Dynamic creation based on object identifier (OID) method requests
of OID_NIC_SWITCH_CREATE_SWITCH. NDIS or the Hyper-V extensible switch module issues these OID requests to create NIC switches on the SR-IOV network adapter.
For more information on how NIC switches are created, configured, and managed, see Managing NIC Switches.
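As a rough sketch of the dynamic case, the following shows how a PF miniport driver's MiniportOidRequest function might dispatch the OID_NIC_SWITCH_CREATE_SWITCH method request. The SwitchId and NumVFs members of NDIS_NIC_SWITCH_PARAMETERS are assumptions based on the WDK documentation, and HwCreateNicSwitch is a hypothetical adapter-specific helper.

    #include <ndis.h>

    typedef struct _ADAPTER ADAPTER, *PADAPTER;     /* driver-defined context */

    /* Hypothetical adapter-specific routine that programs the NIC switch
       hardware and reserves queue pairs and VF resources for it. */
    NDIS_STATUS HwCreateNicSwitch(PADAPTER Adapter, ULONG SwitchId, ULONG NumVFs);

    /* Sketch: dispatching the NIC switch creation OID. */
    NDIS_STATUS
    PfMiniportOidRequest(
        _In_ NDIS_HANDLE MiniportAdapterContext,
        _Inout_ PNDIS_OID_REQUEST OidRequest)
    {
        PADAPTER adapter = (PADAPTER)MiniportAdapterContext;

        if (OidRequest->RequestType == NdisRequestMethod &&
            OidRequest->DATA.METHOD_INFORMATION.Oid == OID_NIC_SWITCH_CREATE_SWITCH)
        {
            PNDIS_NIC_SWITCH_PARAMETERS params =
                (PNDIS_NIC_SWITCH_PARAMETERS)
                    OidRequest->DATA.METHOD_INFORMATION.InformationBuffer;

            /* Create the default NIC switch identified by params->SwitchId
               and set aside resources for params->NumVFs virtual functions. */
            return HwCreateNicSwitch(adapter, params->SwitchId, params->NumVFs);
        }

        /* Other OIDs are handled elsewhere in the driver. */
        return NDIS_STATUS_NOT_SUPPORTED;
    }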
Virtual Ports (VPorts)
A virtual port (VPort) is a data object that represents an internal port on the NIC switch of a network adapter that supports single root I/O virtualization (SR-IOV). Each NIC switch has the following ports for network connectivity:
• One external physical port for connectivity to the external physical network.
• One or more internal VPorts which are connected to the PCI Express Physical
Function (PF) or virtual functions (VFs).
The PF is attached to the Hyper-V parent partition and is exposed as a virtual network adapter in the management operating system that runs in that partition.
A VF is attached to the Hyper-V child partition and is exposed as a virtual network adapter in the guest operating system that runs in that partition.
The NIC switch bridges network traffic from the physical port to one or more VPorts. This provides virtualized access to the underlying physical network interface.
Each VPort has an identifier (VPortId) that is unique for the NIC switch on the network adapter. A default VPort always exists on the default NIC switch and can never be deleted. The default VPort has the VPortId of NDIS_DEFAULT_VPORT_ID.
When the PF miniport driver handles an object identifier (OID) method request of OID_NIC_SWITCH_CREATE_SWITCH, it creates the NIC switch and the default VPort for that switch. The default VPort is always attached to the PF and is always in an operational state.
Nondefault VPorts are created through OID method requests
of OID_NIC_SWITCH_CREATE_VPORT. Only one nondefault VPort can be attached to a VF. Once attached, the nondefault VPort is in an operational state. One or more nondefault VPorts can also be created and attached to the PF. These VPorts are nonoperational when created and can become operational through an OID set request of OID_NIC_SWITCH_VPORT_PARAMETERS.
Note After a VPort becomes operational, it can only become nonoperational when it is deleted through an OID request of OID_NIC_SWITCH_DELETE_VPORT.
Each VPort has one or more hardware queue pairs associated with it for receiving and transmitting packets. The default queue pair on the network adapter is reserved for use by the default VPort. Queue pairs for nondefault VPorts are allocated and assigned when the VPort is created through the OID_NIC_SWITCH_CREATE_VPORT request.
Nondefault VPorts are created and configured through OID method requests of OID_NIC_SWITCH_CREATE_VPORT. The default VPort and nondefault VPorts are reconfigured through OID set requests of OID_NIC_SWITCH_VPORT_PARAMETERS. Each OID request contains an NDIS_NIC_SWITCH_VPORT_PARAMETERS structure that specifies the following configuration parameters (a sketch of how the PF miniport driver might interpret such a request appears after this list):
• The PCIe function to which the VPort is attached.
Each VPort can be attached to either the PF or a VF at any given time. After the VPort is created and attached to a PCIe function, the attachment cannot be dynamically changed to another PCIe function.
Note The default VPort is always attached to the PF on the network adapter.
Starting with Windows Server Developer Preview, only one nondefault VPort can be attached to a VF. However, multiple nondefault VPorts along with the default VPort can be attached to the PF.
• The number of hardware queue pairs that are assigned to a VPort.
Each VPort has a set of hardware queue pairs that are available to it. Each queue pair consists of a separate transmit and receive queue on the network adapter.
Queue pairs are limited resources on the network adapter. The total number of queue pairs reserved for use by the default and nondefault VPorts is specified when the NIC switch is created. This allows the number of queue pairs that are assigned to the default VPort to differ from the nondefault VPorts.
Each nondefault VPort can be configured to have a different number of queue
pairs. This is known as asymmetric allocation of queue pairs. If the NIC does not allow for such an asymmetric allocation, each nondefault VPort is configured to have an equal number of queue pairs. This is known as symmetric allocation of queue pairs. For more information, see Symmetric and Asymmetric Assignment of Queue Pairs.
Note The PF miniport driver reports whether it supports asymmetric allocation of queue pairs during MiniportInitializeEx. For more information, see Initializing a PF Miniport Driver.
The number of queue pairs assigned to a VPort cannot be changed dynamically after the VPort has been created.
Note One or more queue pairs assigned to the nondefault VPorts can be used for receive-side scaling (RSS) by the VF miniport driver that runs in the guest operating system.
• Interrupt moderation parameters for the VPort.
Different interrupt moderation types can be specified for different VPorts. This allows the virtualization stack to control the number of interrupts generated by a particular VPort.
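The following sketch shows how the PF miniport driver might interpret these configuration parameters when it services OID_NIC_SWITCH_CREATE_VPORT. The VPortId, AttachedFunctionId, and NumQueuePairs members of NDIS_NIC_SWITCH_VPORT_PARAMETERS are assumptions based on the WDK documentation, and the Hw* routines are hypothetical adapter-specific helpers.

    #include <ndis.h>

    typedef struct _ADAPTER ADAPTER, *PADAPTER;     /* driver-defined context */

    /* Hypothetical adapter-specific helpers. */
    BOOLEAN HwAllocateVPort(PADAPTER Adapter, ULONG VPortId, ULONG AttachedFunctionId);
    BOOLEAN HwAssignQueuePairs(PADAPTER Adapter, ULONG VPortId, ULONG NumQueuePairs);
    VOID    HwFreeVPort(PADAPTER Adapter, ULONG VPortId);

    /* Sketch: creating a nondefault VPort in response to
       OID_NIC_SWITCH_CREATE_VPORT. */
    NDIS_STATUS
    PfHandleCreateVPort(
        _In_ PADAPTER Adapter,
        _Inout_ PNDIS_OID_REQUEST OidRequest)
    {
        PNDIS_NIC_SWITCH_VPORT_PARAMETERS params =
            (PNDIS_NIC_SWITCH_VPORT_PARAMETERS)
                OidRequest->DATA.METHOD_INFORMATION.InformationBuffer;

        /* The VPort is attached to either the PF or a VF when it is created;
           the attachment cannot be changed afterward. */
        if (!HwAllocateVPort(Adapter, params->VPortId, params->AttachedFunctionId))
        {
            return NDIS_STATUS_RESOURCES;
        }

        /* Reserve the requested transmit/receive queue pairs for this VPort
           from the pool that was set aside when the NIC switch was created. */
        if (!HwAssignQueuePairs(Adapter, params->VPortId, params->NumQueuePairs))
        {
            HwFreeVPort(Adapter, params->VPortId);
            return NDIS_STATUS_RESOURCES;
        }

        return NDIS_STATUS_SUCCESS;
    }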
In addition to configuration parameters, overlying drivers can configure
receive filters for each VPort by issuing OID method requests
of OID_RECEIVE_FILTER_SET_FILTER. The NIC switch performs the specified receive filtering on a VPort basis.
Receive filter parameters for VPorts include packet filtering conditions, such as a list of media access control (MAC) addresses and virtual LAN (VLAN) identifiers. Filters for MAC addresses and VLAN identifiers are always specified together in the NDIS_RECEIVE_FILTER_PARAMETERS structure associated with the OID_RECEIVE_FILTER_SET_FILTER request. The NIC switch filters packets received from either another VPort or from the external physical port. If a packet's destination MAC address and VLAN identifier match a receive filter condition that was set on a VPort, the NIC switch must forward the packet to that VPort.
Multiple MAC address and VLAN identifier pairs may be set on the VPort. If only a MAC address is set, the receive filter specifies that the VPort should receive packets that match both of the following conditions:
• The packet's destination MAC address matches the filter's MAC address.
• The packet has no VLAN tag or (if a VLAN tag is present) has a VLAN identifier of zero.
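To make the matching rule concrete, here is a small, self-contained C sketch of the check that the NIC switch conceptually performs for one MAC-plus-VLAN receive filter on a VPort. The type and function names are illustrative only and do not correspond to any NDIS or hardware interface.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: conceptual receive-filter match for one VPort
       filter. Names do not correspond to any NDIS or hardware interface. */
    typedef struct {
        uint8_t  MacAddress[6];
        uint16_t VlanId;          /* 0 means the filter specifies no VLAN ID */
    } VPORT_RX_FILTER;

    static bool
    FilterMatches(const VPORT_RX_FILTER *filter,
                  const uint8_t destMac[6],
                  bool hasVlanTag,
                  uint16_t vlanId)
    {
        /* The destination MAC address must always match the filter. */
        if (memcmp(destMac, filter->MacAddress, 6) != 0)
            return false;

        /* If only a MAC address was set, accept packets with no VLAN tag
           or with a VLAN identifier of zero. */
        if (filter->VlanId == 0)
            return !hasVlanTag || vlanId == 0;

        /* Otherwise both the MAC address and the VLAN identifier must match. */
        return hasVlanTag && vlanId == filter->VlanId;
    }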
Nondefault VPorts are deleted through OID set requests
of OID_NIC_SWITCH_DELETE_VPORT. The default VPort is only deleted when the
NIC switch is deleted through an OID set request of OID_NIC_SWITCH_DELETE_SWITCH.
SR-IOV Data Paths
This section describes the possible data paths between a network adapter that supports single root I/O virtualization (SR-IOV) and the Hyper-V parent and child partitions.
This section includes the following topics:
Overview of SR-IOV Data Paths
SR-IOV VF Data Path
SR-IOV Synthetic Data Path
SR-IOV VF Failover and Live Migration Support
Overview of SR-IOV Data Paths
When a Hyper-V child partition is started and the guest operating system is running, the virtualization stack starts the Network Virtual Service Client (NetVSC). NetVSC exposes a virtual machine (VM) network adapter by providing a miniport driver edge to the protocol stacks that run in the guest operating system. In addition, NetVSC provides a protocol driver edge that allows it to bind to the
underlying miniport drivers.
NetVSC also communicates with the Hyper-V extensible switch that runs in the management operating system of the Hyper-V parent partition. The extensible switch component operates as a Network Virtual Service Provider (NetVSP). The interface between the NetVSC and NetVSP provides a software data path that is known as the synthetic data path. For more information about this data path, see SR-IOV Synthetic Data Path.
If the physical network adapter supports the single root I/O virtualization (SR-IOV) interface, it can enable one or more PCI Express (PCIe) Virtual Functions (VFs). Each VF can be attached to a Hyper-V child partition. When this happens, the virtualization stack performs the following steps:
1. The virtualization stack exposes a network adapter for the VF in the guest operating system. This causes the PCI driver that runs in the guest operating system to start the VF miniport driver. This driver is provided by the independent hardware vendor (IHV) for the SR-IOV network adapter.
2. After the VF miniport driver is loaded and initialized, NDIS binds the protocol edge of the NetVSC in the guest operating system to the driver.
Note NetVSC only binds to the VF miniport driver. No other protocol stacks in the guest operating system can bind to the VF miniport driver.
After the NetVSC successfully binds to the driver, network traffic in the guest operating system occurs over the VF data path. Packets are sent or received over the underlying VF of the network adapter instead of the synthetic data path.
For more information about the VF data path, see SR-IOV VF Data Path.
The following figure shows the various data paths that are supported over an SR-IOV network adapter.
After the Hyper-V child partition is started and before the VF data path is established, network traffic flows over the synthetic data path. After the VF data path is established, network traffic can revert to the synthetic data path if the following conditions are true:
• The VF becomes unattached to the Hyper-V child partition. For example, the
virtualization stack could detach a VF from one child partition and attach it to another child partition. This might occur when there are more Hyper-V child partitions that are running than there are VF resources on the underlying SR-IOV network adapter.
The process of failing over to the synthetic data path from the VF data path is known as VF failover.
• The Hyper-V child partition is being live migrated to a different host.
For more information about VF failover and live migration, see SR-IOV VF Failover and Live Migration Support.
SR-IOV VF Data Path
If the physical network adapter supports the single root I/O virtualization (SR-IOV) interface, it can enable one or more PCI Express (PCIe) Virtual Functions (VFs). Each VF can be attached to a Hyper-V child partition. When this happens, the virtualization stack performs the following steps:
1. Once resources for the VF are allocated, the virtualization stack exposes a network adapter for the VF in the guest operating system. This causes the PCI driver that runs in the guest operating system to start the VF miniport driver. This driver is provided by the independent hardware vendor (IHV) for the SR-IOV network adapter.
Note Resources for the VF must be allocated by the miniport driver for the PCIe Physical Function (PF) before the VF can be attached to the Hyper-V child partition. VF resources include assigning a virtual port (VPort) on the NIC switch to the VF. For more information, see SR-IOV Virtual Functions.
2. After the VF miniport driver is loaded and initialized, NDIS binds the protocol edge of the Network Virtual Service Client (NetVSC) in the guest operating system to the driver.
Note NetVSC only binds to the VF miniport driver. No other protocol stacks in the guest operating system can bind to the VF miniport driver.
After the NetVSC successfully binds to the driver, network traffic in the guest operating system occurs over the VF data path. Packets are sent or received over the underlying VF of the network adapter instead of the software-based synthetic data path. For more information about the synthetic data path, see SR-IOV Synthetic Data Path.
The following diagram shows the components of the VF data path over an SR-IOV network adapter.
The use of the VF data path provides the following benefits:
• All data packets flow directly between the networking components in the
guest operating system and the VF. This eliminates the overhead of the synthetic data path in which data packets flow between the Hyper-V child and parent partitions.
For more information about the synthetic data path, see SR-IOV Synthetic Data Path.
• The VF data path bypasses any involvement by the management operating
system in packet traffic from a Hyper-V child partition. The VF provides
independent memory space, interrupts, and DMA streams for the child partition to which it is attached. This achieves networking performance that is comparable to that of nonvirtualized environments.
• The routing of packets over the VF data path is performed by the NIC switch
on the SR-IOV network adapter. Packets are sent or received over the external network through the physical port of the adapter. Packets are also forwarded to or from other child partitions to which a VF is attached.
Note Packets to or from child partitions to which no VF is attached are forwarded by the NIC switch to the Hyper-V extensible switch module. This module runs in the Hyper-V parent partition and delivers these packets to the child partition by using the synthetic data path.
SR-IOV Synthetic Data Path
When a Hyper-V child partition is started and the guest operating system is running, the virtualization stack starts the Network Virtual Service Client (NetVSC). NetVSC exposes a virtual machine (VM) network adapter that provides a miniport driver edge to the protocol stacks that run in the guest operating system.
NetVSC also communicates with the Hyper-V extensible switch that runs in the management operating system of the Hyper-V parent partition. The extensible switch component operates as a Network Virtual Service Provider (NetVSP). The interface between the NetVSC and NetVSP provides a software data path that is known as the synthetic data path.
The following diagram shows the components of the synthetic data path over an SR-IOV network adapter.
If the underlying SR-IOV network adapter allocates resources for PCI Express (PCIe) Virtual Functions (VFs), the virtualization stack will attach a VF to a Hyper-V child partition. Once attached, packet traffic within the child partition will occur over the hardware-optimized VF data path instead of the synthetic data path. For more information on the VF data path, see SR-IOV VF Data Path.
The virtualization stack may still enable the synthetic data path for a Hyper-V child partition if one of the following conditions is true:
• The SR-IOV network adapter has insufficient VF resources to accommodate
all of the Hyper-V child partitions that were started. After all VFs on the network adapter are attached to child partitions, the remaining partitions use the synthetic data path.
• A VF was attached to a Hyper-V child partition but becomes detached. For
example, the virtualization stack could detach a VF from one child partition and attach it to another child partition. This might occur when there are more Hyper-V child partitions that are running than there are VF resources on the underlying SR-IOV network adapter.
The process of failing over to the synthetic data path from the VF data path is known as VF failover.
• The Hyper-V child partition is being live migrated to a different host.
Although the synthetic data path over an SR-IOV network adapter is not as efficient as the VF data path, it can still be hardware optimized. For example, if one or more virtual ports (VPorts) are configured and attached to the PCIe Physical Function (PF), the data path can provide the offload capabilities that resemble the virtual machine queue (VMQ) interface. For more information, see Nondefault Virtual Ports and VMQ.
SR-IOV VF Failover and Live Migration Support
After the Hyper-V child partition is started, network traffic flows over the synthetic data path. If the physical network adapter supports the single root I/O virtualization (SR-IOV) interface, it can enable one or more PCI Express (PCIe) Virtual Functions (VFs). Each VF can be attached to a Hyper-V child partition. When this happens, network traffic flows over the hardware-optimized VF data path.
After the VF data path is established, network traffic can revert to the synthetic data path if any of the following conditions is true:
• A VF was attached to a Hyper-V child partition but becomes detached. For
example, the virtualization stack could detach a VF from one child partition and attach it to another child partition. This might occur when there are more Hyper-V child partitions that are running than there are VF resources on the underlying SR-IOV network adapter.
The process of failing over to the synthetic data path from the VF data path is known as VF failover.
• The Hyper-V child partition is being live migrated to a different host.
The following figure shows the various data paths that are supported over an SR-IOV network adapter.
The NetVSC exposes a virtual machine (VM) network adapter which is bound to the VF miniport driver to support the VF data path. During the transition to the synthetic data path, the VF network adapter is surprise removed from the guest operating system, the VF miniport driver is halted, and the Network Virtual Service Client (NetVSC) is unbound from the VF miniport driver.
The transition between the VF and synthetic data paths occurs with minimum
loss of packets and prevents the loss of TCP connections. Before the transition to the synthetic data path is complete, the virtualization stack follows these steps:
1. The virtualization stack moves the media access control (MAC) and virtual LAN (VLAN) filters for the VM network adapter to the default virtual port (VPort) that is attached to the PCIe Physical Function (PF). The VM network adapter is exposed in the guest operating system of the child partition.
After the filters are moved to the default VPort, the synthetic data path is fully operational for network traffic to and from the networking components that run in the guest operating system. The PF miniport driver indicates received packets on the default PF VPort, and the synthetic data path delivers these packets to the guest operating system. Similarly, all transmitted packets from the guest operating system are routed through the synthetic data path and transmitted through the default PF VPort.
For more information about VPorts, see Virtual Ports (VPorts).
2. The virtualization stack deletes the VPort that is attached to the VF by issuing an object identifier (OID) set request
of OID_NIC_SWITCH_DELETE_VPORT to the PF miniport driver. The miniport driver frees any hardware and software resources associated with the VPort and completes the OID request.
For more information, see Deleting a Virtual Port.
3. The virtualization stack requests a PCIe function level reset (FLR) of the VF before its resources are deallocated. The stack does this by issuing an OID set request of OID_SRIOV_RESET_VF to the PF miniport driver. The FLR brings the VF on the SR-IOV network adapter into a quiescent state and clears any pending interrupt events for the VF.
4. After the VF has been reset, the virtualization stack requests a deallocation of the VF resources by issuing an OID set request of OID_NIC_SWITCH_FREE_VF to the PF miniport driver. This causes the miniport driver to free the hardware resources associated with the VF.
For more information about this process, see Virtual Function Teardown Sequence.
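The following sketch summarizes steps 2 through 4 from the PF miniport driver's point of view as it dispatches the teardown OID set requests. The VPortId and VFId members of the parameter structures are assumptions based on the WDK documentation, and the Hw* routines are hypothetical adapter-specific helpers.

    #include <ndis.h>

    typedef struct _ADAPTER ADAPTER, *PADAPTER;     /* driver-defined context */

    /* Hypothetical adapter-specific helpers. */
    VOID HwDeleteVPort(PADAPTER Adapter, ULONG VPortId);
    VOID HwResetVf(PADAPTER Adapter, ULONG VfId);
    VOID HwFreeVf(PADAPTER Adapter, ULONG VfId);

    /* Sketch: PF miniport handling of the teardown OIDs issued by the
       virtualization stack during VF failover or live migration. */
    NDIS_STATUS
    PfHandleTeardownOid(
        _In_ PADAPTER Adapter,
        _Inout_ PNDIS_OID_REQUEST OidRequest)
    {
        PVOID buffer = OidRequest->DATA.SET_INFORMATION.InformationBuffer;

        switch (OidRequest->DATA.SET_INFORMATION.Oid)
        {
        case OID_NIC_SWITCH_DELETE_VPORT:
            /* Step 2: free hardware and software resources for the VPort. */
            HwDeleteVPort(Adapter,
                ((PNDIS_NIC_SWITCH_DELETE_VPORT_PARAMETERS)buffer)->VPortId);
            return NDIS_STATUS_SUCCESS;

        case OID_SRIOV_RESET_VF:
            /* Step 3: function level reset; quiesce the VF and clear any
               pending interrupt events. */
            HwResetVf(Adapter,
                ((PNDIS_SRIOV_RESET_VF_PARAMETERS)buffer)->VFId);
            return NDIS_STATUS_SUCCESS;

        case OID_NIC_SWITCH_FREE_VF:
            /* Step 4: release the hardware resources allocated to the VF. */
            HwFreeVf(Adapter,
                ((PNDIS_NIC_SWITCH_FREE_VF_PARAMETERS)buffer)->VFId);
            return NDIS_STATUS_SUCCESS;

        default:
            return NDIS_STATUS_NOT_SUPPORTED;
        }
    }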