==================================
VFIO - "Virtual Function I/O" [1]_
==================================

Many modern systems now provide DMA and interrupt remapping facilities
to help ensure I/O devices behave within the boundaries they've been
allotted.  This includes x86 hardware with AMD-Vi and Intel VT-d,
POWER systems with Partitionable Endpoints (PEs) and embedded PowerPC
systems such as Freescale PAMU.  The VFIO driver is an IOMMU/device
agnostic framework for exposing direct device access to userspace, in
a secure, IOMMU protected environment.  In other words, this allows
safe [2]_, non-privileged, userspace drivers.

Why do we want that?  Virtual machines often make use of direct device
access ("device assignment") when configured for the highest possible
I/O performance.  From a device and host perspective, this simply
turns the VM into a userspace driver, with the benefits of
significantly reduced latency, higher bandwidth, and direct use of
bare-metal device drivers [3]_.

Some applications, particularly in the high performance computing
field, also benefit from low-overhead, direct device access from
userspace.  Examples include network adapters (often non-TCP/IP based)
and compute accelerators.  Prior to VFIO, these drivers had to either
go through the full development cycle to become a proper upstream
driver, be maintained out of tree, or make use of the UIO framework,
which has no notion of IOMMU protection, limited interrupt support,
and requires root privileges to access things like PCI configuration
space.

The VFIO driver framework intends to unify these, replacing the
KVM PCI-specific device assignment code and providing a more
secure, more featureful userspace driver environment than UIO.

Groups, Devices, and IOMMUs
---------------------------

Devices are the main target of any I/O driver.  Devices typically
create a programming interface made up of I/O access, interrupts,
and DMA.  Without going into the details of each of these, DMA is
by far the most critical aspect for maintaining a secure environment
as allowing a device read-write access to system memory imposes the
greatest risk to the overall system integrity.

To help mitigate this risk, many modern IOMMUs now incorporate
isolation properties into what was, in many cases, an interface only
meant for translation (i.e., solving the addressing problems of devices
with limited address spaces).  With this, devices can now be isolated
from each other and from arbitrary memory access, thus allowing
things like secure direct assignment of devices into virtual machines.

This isolation is not always at the granularity of a single device
though.  Even when an IOMMU is capable of this, properties of devices,
interconnects, and IOMMU topologies can each reduce this isolation.
For instance, an individual device may be part of a larger multi-
function enclosure.  While the IOMMU may be able to distinguish
between devices within the enclosure, the enclosure may not require
transactions between devices to reach the IOMMU.  Examples of this
could be anything from a multi-function PCI device with backdoors
between functions to a non-PCI-ACS (Access Control Services) capable
bridge allowing redirection without reaching the IOMMU.  Topology
can also play a factor in terms of hiding devices.  A PCIe-to-PCI
bridge masks the devices behind it, making transactions appear as if
from the bridge itself.  Obviously IOMMU design plays a major factor
as well.

Therefore, while for the most part an IOMMU may have device level
granularity, any system is susceptible to reduced granularity.  The
IOMMU API therefore supports a notion of IOMMU groups.  A group is
a set of devices which is isolatable from all other devices in the
system.  Groups are therefore the unit of ownership used by VFIO.

While the group is the minimum granularity that must be used to
ensure secure user access, it's not necessarily the preferred
granularity.  In IOMMUs which make use of page tables, it may be
possible to share a set of page tables between different groups,
reducing the overhead both to the platform (reduced TLB thrashing,
reduced duplicate page tables), and to the user (programming only
a single set of translations).  For this reason, VFIO makes use of
a container class, which may hold one or more groups.  A container
is created by simply opening the /dev/vfio/vfio character device.

On its own, the container provides little functionality, with all
but a couple of version and extension query interfaces locked away.
The user needs to add a group into the container for the next level
of functionality.  To do this, the user first needs to identify the
group associated with the desired device.  This can be done using
the sysfs links described in the example below.  By unbinding the
device from the host driver and binding it to a VFIO driver, a new
VFIO group will appear for the group as /dev/vfio/$GROUP, where
$GROUP is the IOMMU group number of which the device is a member.
If the IOMMU group contains multiple devices, each will need to
be bound to a VFIO driver before operations on the VFIO group
are allowed (it's also sufficient to only unbind the device from
host drivers if a VFIO driver is unavailable; this will make the
group available, but not that particular device).  TBD - interface
for disabling driver probing/locking a device.

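As a hedged illustration, the group number can also be resolved
programmatically by following the same sysfs link; the readlink()-based
parsing below is illustrative, not a fixed interface, and error handling
is trimmed::

        char path[PATH_MAX], link[PATH_MAX];
        ssize_t len;
        int group_id = -1;

        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/iommu_group", "0000:06:0d.0");

        len = readlink(path, link, sizeof(link) - 1);
        if (len > 0) {
                link[len] = '\0';
                /* The link target ends in ".../iommu_groups/$GROUP" */
                group_id = atoi(strrchr(link, '/') + 1);
        }
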
Once the group is ready, it may be added to the container by opening
the VFIO group character device (/dev/vfio/$GROUP) and using the
VFIO_GROUP_SET_CONTAINER ioctl, passing the file descriptor of the
previously opened container file.  If desired and if the IOMMU driver
supports sharing the IOMMU context between groups, multiple groups may
be set to the same container.  If a group fails to attach to a container
with existing groups, a new, empty container will need to be used
instead.

With a group (or groups) attached to a container, the remaining
ioctls become available, enabling access to the VFIO IOMMU interfaces.
Additionally, it now becomes possible to get file descriptors for each
device within a group using an ioctl on the VFIO group file descriptor.

The VFIO device API includes ioctls for describing the device, the I/O
regions and their read/write/mmap offsets on the device descriptor, as
well as mechanisms for describing and registering interrupt
notifications.

VFIO Usage Example
------------------

Assume the user wants to access PCI device 0000:06:0d.0::

        $ readlink /sys/bus/pci/devices/0000:06:0d.0/iommu_group
        ../../../../kernel/iommu_groups/26

This device is therefore in IOMMU group 26.  This device is on the
PCI bus, therefore the user will make use of vfio-pci to manage the
group::

        # modprobe vfio-pci

Binding this device to the vfio-pci driver creates the VFIO group
character devices for this group::

        $ lspci -n -s 0000:06:0d.0
        06:0d.0 0401: 1102:0002 (rev 08)
        # echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
        # echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id

Now we need to look at what other devices are in the group to free
it for use by VFIO::

        $ ls -l /sys/bus/pci/devices/0000:06:0d.0/iommu_group/devices
        total 0
        lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:00:1e.0 ->
                ../../../../devices/pci0000:00/0000:00:1e.0
        lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.0 ->
                ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
        lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.1 ->
                ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1

This device is behind a PCIe-to-PCI bridge [4]_, therefore we also
need to add device 0000:06:0d.1 to the group following the same
procedure as above.  Device 0000:00:1e.0 is a bridge that does
not currently have a host driver, therefore it's not required to
bind this device to the vfio-pci driver (vfio-pci does not currently
support PCI bridges).

The final step is to provide the user with access to the group if
unprivileged operation is desired (note that /dev/vfio/vfio provides
no capabilities on its own and is therefore expected to be set to
mode 0666 by the system)::

        # chown user:user /dev/vfio/26

The user now has full access to all the devices and the IOMMU for this
group and can access them as follows::

        int container, group, device, i;
        struct vfio_group_status group_status =
                                        { .argsz = sizeof(group_status) };
        struct vfio_iommu_type1_info iommu_info = { .argsz = sizeof(iommu_info) };
        struct vfio_iommu_type1_dma_map dma_map = { .argsz = sizeof(dma_map) };
        struct vfio_device_info device_info = { .argsz = sizeof(device_info) };

        /* Create a new container */
        container = open("/dev/vfio/vfio", O_RDWR);

        if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) {
                /* Unknown API version */
        }

        if (!ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU)) {
                /* Doesn't support the IOMMU driver we want. */
        }

        /* Open the group */
        group = open("/dev/vfio/26", O_RDWR);

        /* Test the group is viable and available */
        ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);

        if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
                /* Group is not viable (i.e., not all devices bound for vfio) */
        }

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* Get additional IOMMU info */
        ioctl(container, VFIO_IOMMU_GET_INFO, &iommu_info);

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = mmap(NULL, 1024 * 1024, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        /* Test and setup the device */
        ioctl(device, VFIO_DEVICE_GET_INFO, &device_info);

        for (i = 0; i < device_info.num_regions; i++) {
                struct vfio_region_info reg = { .argsz = sizeof(reg) };

                reg.index = i;

                ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

                /* Setup mappings... read/write offsets, mmaps
                 * For PCI devices, config space is a region */
        }
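
        /* A hedged sketch (not part of the original example): a region
         * that advertises VFIO_REGION_INFO_FLAG_MMAP in its flags can be
         * mapped directly through the device file descriptor.  The names
         * "bar0" and "bar" are illustrative.
         */
        struct vfio_region_info bar0 = { .argsz = sizeof(bar0) };
        void *bar;

        bar0.index = VFIO_PCI_BAR0_REGION_INDEX;
        ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &bar0);

        if (bar0.flags & VFIO_REGION_INFO_FLAG_MMAP) {
                bar = mmap(NULL, bar0.size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, device, bar0.offset);
                /* bar now provides direct access to the device's BAR0 */
        }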

        for (i = 0; i < device_info.num_irqs; i++) {
                struct vfio_irq_info irq = { .argsz = sizeof(irq) };

                irq.index = i;

                ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, &irq);

                /* Setup IRQs... eventfds, VFIO_DEVICE_SET_IRQS */
        }
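
        /* A hedged sketch (not part of the original example) of wiring
         * one interrupt to an eventfd via VFIO_DEVICE_SET_IRQS; the
         * trailing __u32 in the buffer carries the eventfd descriptor.
         * "irqfd" is an illustrative name and requires <sys/eventfd.h>.
         */
        struct vfio_irq_set *irq_set;
        char buf[sizeof(*irq_set) + sizeof(__u32)];
        int irqfd = eventfd(0, EFD_CLOEXEC);

        irq_set = (struct vfio_irq_set *)buf;
        irq_set->argsz = sizeof(buf);
        irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                         VFIO_IRQ_SET_ACTION_TRIGGER;
        irq_set->index = 0; /* e.g. INTx for a vfio-pci device */
        irq_set->start = 0;
        irq_set->count = 1;
        *(__u32 *)&irq_set->data = irqfd;

        ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set);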

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);

VFIO User API
-------------

Please see include/linux/vfio.h for complete API documentation.

VFIO bus driver API
-------------------

VFIO bus drivers, such as vfio-pci, make use of only a few interfaces
into VFIO core.  When devices are bound to and unbound from the driver,
the driver should call vfio_add_group_dev() and vfio_del_group_dev()
respectively::

        extern int vfio_add_group_dev(struct device *dev,
                                      const struct vfio_device_ops *ops,
                                      void *device_data);

        extern void *vfio_del_group_dev(struct device *dev);

vfio_add_group_dev() indicates to the core to begin tracking the
iommu_group of the specified dev and register the dev as owned by
a VFIO bus driver.  The driver provides an ops structure for callbacks
similar to a file operations structure::

        struct vfio_device_ops {
                int     (*open)(void *device_data);
                void    (*release)(void *device_data);
                ssize_t (*read)(void *device_data, char __user *buf,
                                size_t count, loff_t *ppos);
                ssize_t (*write)(void *device_data, const char __user *buf,
                                 size_t size, loff_t *ppos);
                long    (*ioctl)(void *device_data, unsigned int cmd,
                                 unsigned long arg);
                int     (*mmap)(void *device_data, struct vm_area_struct *vma);
        };

Each function is passed the device_data that was originally registered
in the vfio_add_group_dev() call above.  This allows the bus driver
an easy place to store its opaque, private data.  The open/release
callbacks are issued when a new file descriptor is created for a
device (via VFIO_GROUP_GET_DEVICE_FD).  The ioctl interface provides
a direct pass through for VFIO_DEVICE_* ioctls.  The read/write/mmap
interfaces implement the device region access defined by the device's
own VFIO_DEVICE_GET_REGION_INFO ioctl.
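
To make the registration flow concrete, here is a minimal, hypothetical
sketch of a bus driver hooking into these interfaces; the vfio_foo_*
names are illustrative assumptions, not an existing driver::

        static int vfio_foo_open(void *device_data)
        {
                /* First file descriptor opened for the device */
                return 0;
        }

        static void vfio_foo_release(void *device_data)
        {
                /* Last file descriptor for the device was closed */
        }

        static const struct vfio_device_ops vfio_foo_ops = {
                .open    = vfio_foo_open,
                .release = vfio_foo_release,
        };

        static int vfio_foo_probe(struct device *dev)
        {
                /* dev doubles as the opaque device_data here */
                return vfio_add_group_dev(dev, &vfio_foo_ops, dev);
        }

        static void vfio_foo_remove(struct device *dev)
        {
                /* Returns the device_data registered at probe time */
                vfio_del_group_dev(dev);
        }

A full driver would also fill in the read/write/mmap/ioctl callbacks to
implement the region and interrupt interfaces described above.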


PPC64 sPAPR implementation note
-------------------------------

This implementation has the following specifics:

1) On older systems (POWER7 with P5IOC2/IODA1) only one IOMMU group per
   container is supported, as an IOMMU table is allocated at boot time,
   one table per IOMMU group, which is a Partitionable Endpoint (PE)
   (a PE is often a PCI domain, but not always).

   Newer systems (POWER8 with IODA2) have an improved hardware design
   which removes this limitation and allows multiple IOMMU groups per
   VFIO container.

2) The hardware supports so-called DMA windows - the PCI address range
   within which DMA transfer is allowed; any attempt to access address
   space outside of the window leads to isolation of the whole PE.

3) PPC64 guests are paravirtualized but not fully emulated. There is an API
   to map/unmap pages for DMA, and it normally maps 1..32 pages per call and
   currently there is no way to reduce the number of calls. In order to make
   things faster, the map/unmap handling has been implemented in real mode,
   which provides excellent performance but has limitations such as the
   inability to do locked pages accounting in real time.

4) According to the sPAPR specification, a Partitionable Endpoint (PE) is an
   I/O subtree that can be treated as a unit for the purposes of partitioning
   and error recovery. A PE may be a single or multi-function IOA (IO
   Adapter), a function of a multi-function IOA, or multiple IOAs (possibly
   including switch and bridge structures above the multiple IOAs). PPC64
   guests detect PCI errors and recover from them via EEH RTAS services,
   which work on the basis of additional ioctl commands.

   So 4 additional ioctls have been added:

        VFIO_IOMMU_SPAPR_TCE_GET_INFO
                returns the size and the start of the DMA window on the PCI bus.

        VFIO_IOMMU_ENABLE
                enables the container. The locked pages accounting
                is done at this point. This lets the user know the size of
                the DMA window first and adjust the rlimit before doing any
                real work.

        VFIO_IOMMU_DISABLE
                disables the container.

        VFIO_EEH_PE_OP
                provides an API for EEH setup, error detection and recovery.

   The code flow from the example above should be slightly changed::

        struct vfio_eeh_pe_op pe_op = { .argsz = sizeof(pe_op), .flags = 0 };

        .....
        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);

        /* Get additional sPAPR IOMMU info */
        struct vfio_iommu_spapr_tce_info spapr_iommu_info = {
                .argsz = sizeof(spapr_iommu_info) };
        ioctl(container, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &spapr_iommu_info);

        if (ioctl(container, VFIO_IOMMU_ENABLE)) {
                /* Cannot enable container, may be low rlimit */
        }

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = mmap(NULL, 1024 * 1024, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        /* Check here that .iova/.size are within the DMA window from
         * spapr_iommu_info */
        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        ....

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);

        /* Make sure EEH is supported */
        ioctl(container, VFIO_CHECK_EXTENSION, VFIO_EEH);

        /* Enable the EEH functionality on the device */
        pe_op.op = VFIO_EEH_PE_ENABLE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* It is suggested that you create an additional data struct to
         * represent the PE, and put child devices belonging to the same
         * IOMMU group into the PE instance for later reference.
         */

        /* Check the PE's state and make sure it's in a functional state */
        pe_op.op = VFIO_EEH_PE_GET_STATE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Save device state using pci_save_state().
         * EEH should be enabled on the specified device.
         */


        ....

        /* Inject an EEH error, which is expected to be triggered by a
         * 32-bit config space load.
         */
        pe_op.op = VFIO_EEH_PE_INJECT_ERR;
        pe_op.err.type = EEH_ERR_TYPE_32;
        pe_op.err.func = EEH_ERR_FUNC_LD_CFG_ADDR;
        pe_op.err.addr = 0ul;
        pe_op.err.mask = 0ul;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        ....

        /* When 0xFF's are returned from reading the PCI config space
         * or IO BARs of the PCI device, check the PE's state to see
         * whether it has been frozen.
         */
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Wait for pending PCI transactions to complete, and don't
         * produce any more PCI traffic from/to the affected PE until
         * recovery is finished.
         */

        /* Enable IO for the affected PE and collect logs. Usually, the
         * standard part of PCI config space and the AER registers are
         * dumped as logs for further analysis.
         */
        pe_op.op = VFIO_EEH_PE_UNFREEZE_IO;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /*
         * Issue PE reset: hot or fundamental reset. Usually, a hot reset
         * is enough. However, the firmware of some PCI adapters requires
         * a fundamental reset.
         */
        pe_op.op = VFIO_EEH_PE_RESET_HOT;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);
        pe_op.op = VFIO_EEH_PE_RESET_DEACTIVATE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Configure the PCI bridges for the affected PE */
        pe_op.op = VFIO_EEH_PE_CONFIGURE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Restore the state we saved at initialization time;
         * pci_restore_state() is good enough as an example.
         */

        /* Hopefully, the error is recovered successfully. Now you can
         * resume PCI traffic to/from the affected PE.
         */

        ....

5) There is a v2 of the sPAPR TCE IOMMU. It deprecates VFIO_IOMMU_ENABLE/
   VFIO_IOMMU_DISABLE and implements 2 new ioctls:
   VFIO_IOMMU_SPAPR_REGISTER_MEMORY and VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY
   (which are unsupported in the v1 IOMMU).

   PPC64 paravirtualized guests generate a lot of map/unmap requests,
   and the handling of those includes pinning/unpinning pages and updating
   the mm::locked_vm counter to make sure we do not exceed the rlimit.
   The v2 IOMMU splits accounting and pinning into separate operations:

   - The VFIO_IOMMU_SPAPR_REGISTER_MEMORY/VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY
     ioctls receive a userspace address and the size of the block to be
     pinned. Bisecting is not supported, and VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY
     is expected to be called with the exact address and size used for
     registering the memory block. Userspace is not expected to call these
     often. The ranges are stored in a linked list in the VFIO container.

   - The VFIO_IOMMU_MAP_DMA/VFIO_IOMMU_UNMAP_DMA ioctls only update the
     actual IOMMU table and do no pinning; instead they check that the
     userspace address comes from a pre-registered range.

   This separation helps in optimizing DMA for guests, as the sketch
   below illustrates.

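   A minimal sketch of the v2 flow, reusing dma_map from the example
   above (what follows is illustrative, not a complete program)::

        struct vfio_iommu_spapr_register_memory reg = {
                .argsz = sizeof(reg),
        };

        /* Pin and account the backing memory once, up front */
        reg.vaddr = dma_map.vaddr;
        reg.size = dma_map.size;
        ioctl(container, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);

        /* MAP_DMA now only updates the TCE table; no pinning is done */
        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Unregister with the exact address/size used for registration */
        ioctl(container, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg);
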
6) The sPAPR specification allows guests to have additional DMA windows on
   a PCI bus with a variable page size. Two ioctls have been added to support
   this: VFIO_IOMMU_SPAPR_TCE_CREATE and VFIO_IOMMU_SPAPR_TCE_REMOVE.
   The platform has to support the functionality or an error will be
   returned to userspace. The existing hardware supports up to 2 DMA windows:
   one is 2GB long, uses 4K pages and is called the "default 32bit window";
   the other can be as big as the entire RAM and can use a different page
   size. The second window is optional - guests create it at run time if
   the guest driver supports 64bit DMA.

   VFIO_IOMMU_SPAPR_TCE_CREATE receives a page shift, a DMA window size and
   a number of TCE table levels (needed if a TCE table is going to be big
   and the kernel may not be able to allocate enough physically contiguous
   memory). It creates a new window in the available slot and returns the bus
   address where the new window starts. Due to hardware limitations, userspace
   cannot choose the location of DMA windows.

   VFIO_IOMMU_SPAPR_TCE_REMOVE receives the bus start address of the window
   and removes it, as the sketch below illustrates.

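   A minimal sketch of creating and then removing a second window; the
   page shift, window size and number of levels are illustrative values,
   not requirements::

        struct vfio_iommu_spapr_tce_create create = {
                .argsz = sizeof(create),
                .page_shift = 16,          /* 64K pages, assumed */
                .window_size = 1ULL << 33, /* 8GB window, assumed */
                .levels = 1,
        };
        struct vfio_iommu_spapr_tce_remove remove = {
                .argsz = sizeof(remove),
        };

        if (!ioctl(container, VFIO_IOMMU_SPAPR_TCE_CREATE, &create)) {
                /* create.start_addr is the bus address of the new window */

                remove.start_addr = create.start_addr;
                ioctl(container, VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);
        }
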
-------------------------------------------------------------------------------

.. [1] VFIO was originally an acronym for "Virtual Function I/O" in its
   initial implementation by Tom Lyon while at Cisco.  We've since
   outgrown the acronym, but it's catchy.

.. [2] "safe" also depends upon a device being "well behaved".  It's
   possible for multi-function devices to have backdoors between
   functions and even for single function devices to have alternative
   access to things like PCI config space through MMIO registers.  To
   guard against the former we can include additional precautions in the
   IOMMU driver to group multi-function PCI devices together
   (iommu=group_mf).  The latter we can't prevent, but the IOMMU should
   still provide isolation.  For PCI, SR-IOV Virtual Functions are the
   best indicator of "well behaved", as these are designed for
   virtualization usage models.

.. [3] As always there are trade-offs to virtual machine device
   assignment that are beyond the scope of VFIO.  It's expected that
   future IOMMU technologies will reduce some, but maybe not all, of
   these trade-offs.

.. [4] In this case the device is below a PCI bridge, so transactions
   from either function of the device are indistinguishable to the IOMMU::

        -[0000:00]-+-1e.0-[06]--+-0d.0
                                \-0d.1

        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
