path: root/lib/libvmmapi/vmmapi.c
Commit message  [Author, Date, Files, Lines changed]
* MFC r256645.  [neel, 2013-10-22, 1 file, -0/+1]
    Add a new capability, VM_CAP_ENABLE_INVPCID, that can be enabled to expose
    the 'invpcid' instruction to the guest. Currently bhyve will try to enable
    this capability unconditionally if it is available.

    Consolidate code in bhyve to set the capabilities so it is no longer
    duplicated in BSP and AP bringup.

    Add a sysctl 'vm.pmap.invpcid_works' to display whether the 'invpcid'
    instruction is available.

    Approved by:  re (hrs)
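Capabilities like this are toggled through libvmmapi's capability interface. A minimal, hedged sketch (assuming the vm_get_capability()/vm_set_capability() prototypes from vmmapi.h; the helper name is made up for illustration):

    #include <machine/vmm.h>
    #include <vmmapi.h>

    /*
     * Illustrative helper (not the bhyve code): try to expose 'invpcid'
     * to one vcpu.
     */
    static void
    enable_invpcid(struct vmctx *ctx, int vcpu)
    {
            int val;

            /* Probe first; enable only if the capability is supported. */
            if (vm_get_capability(ctx, vcpu, VM_CAP_ENABLE_INVPCID, &val) == 0)
                    (void)vm_set_capability(ctx, vcpu, VM_CAP_ENABLE_INVPCID, 1);
    }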
* Parse the memory size parameter using expand_number() to allow specifying  [neel, 2013-10-09, 1 file, -0/+27]
    the memory size more intuitively (e.g. 512M, 4G etc).

    Submitted by:  rodrigc
    Reviewed by:   grehan
    Approved by:   re (blanket)
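expand_number(3) from libutil is what does the unit parsing here; a minimal sketch of that style of argument handling (illustrative only, not the actual bhyve code; link with -lutil):

    #include <err.h>
    #include <libutil.h>
    #include <stdint.h>

    /* Illustrative: parse "-m 512M" / "-m 4G" style sizes into bytes. */
    static uint64_t
    parse_memsize(const char *arg)
    {
            uint64_t bytes;

            if (expand_number(arg, &bytes) != 0)
                    err(1, "invalid memory size '%s'", arg);
            return (bytes);
    }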
* Merge projects/bhyve_npt_pmap into head.  [neel, 2013-10-05, 1 file, -1/+24]
    Make the amd64/pmap code aware of nested page table mappings used by bhyve
    guests. This allows bhyve to associate each guest with its own vmspace and
    deal with nested page faults in the context of that vmspace. This also
    enables features like accessed/dirty bit tracking, swapping to disk and
    transparent superpage promotions of guest memory.

    Guest vmspace:
    Each bhyve guest has a unique vmspace to represent the physical memory
    allocated to the guest. Each memory segment allocated by the guest is
    mapped into the guest's address space via the 'vmspace->vm_map' and is
    backed by an object of type OBJT_DEFAULT.

    pmap types:
    The amd64/pmap now understands two types of pmaps: PT_X86 and PT_EPT.
    The PT_X86 pmap type is used by the vmspace associated with the host
    kernel as well as user processes executing on the host. The PT_EPT pmap
    is used by the vmspace associated with a bhyve guest.

    Page Table Entries:
    The EPT page table entries are mostly similar in functionality to regular
    page table entries although there are some differences in terms of what
    bits are used to express that functionality. For example, the dirty bit is
    represented by bit 9 in the nested PTE as opposed to bit 6 in the regular
    x86 PTE. Therefore the bitmask representing the dirty bit is now computed
    at runtime based on the type of the pmap. Thus PG_M, which was previously
    a macro, now becomes a local variable that is initialized at runtime using
    'pmap_modified_bit(pmap)'.

    An additional wrinkle associated with EPT mappings is that older Intel
    processors don't have hardware support for tracking accessed/dirty bits in
    the PTE. This means that the amd64/pmap code needs to emulate these bits
    to provide proper accounting to the VM subsystem. This is achieved by
    using the following mapping for EPT entries that need emulation of A/D
    bits:

           Bit Position    Interpreted By
    PG_V   52              software (accessed bit emulation handler)
    PG_RW  53              software (dirty bit emulation handler)
    PG_A   0               hardware (aka EPT_PG_RD)
    PG_M   1               hardware (aka EPT_PG_WR)

    The idea to use the mapping listed above for A/D bit emulation came from
    Alan Cox (alc@).

    The final difference with respect to x86 PTEs is that some EPT
    implementations do not support superpage mappings. This is recorded in
    the 'pm_flags' field of the pmap.

    TLB invalidation:
    The amd64/pmap code has a number of ways to do invalidation of mappings
    that may be cached in the TLB: single page, multiple pages in a range or
    the entire TLB. All of these funnel into a single EPT invalidation routine
    called 'pmap_invalidate_ept()'. This routine bumps up the EPT generation
    number and sends an IPI to the host cpus that are executing the guest's
    vcpus. On a subsequent entry into the guest it will detect that the EPT
    has changed and invalidate the mappings from the TLB.

    Guest memory access:
    Since the guest memory is no longer wired we need to hold the host
    physical page that backs the guest physical page before we can access it.
    The helper functions 'vm_gpa_hold()/vm_gpa_release()' are available for
    this purpose.

    PCI passthru:
    Guests with PCI passthru devices will wire the entire guest physical
    address space. The MMIO BAR associated with the passthru device is backed
    by a vm_object of type OBJT_SG. An IOMMU domain is created only for guests
    that have one or more PCI passthru devices attached to them.

    Limitations:
    There isn't a way to map a guest physical page without execute
    permissions. This is because the amd64/pmap code interprets the guest
    physical mappings as user mappings since they are numerically below
    VM_MAXUSER_ADDRESS. Since PG_U shares the same bit position as
    EPT_PG_EXECUTE all guest mappings become automatically executable.

    Thanks to Alan Cox and Konstantin Belousov for their rigorous code reviews
    as well as their support and encouragement. Thanks to John Baldwin for
    reviewing the use of OBJT_SG as the backing object for pci passthru mmio
    regions. Special thanks to Peter Holm for testing the patch on short
    notice.

    Approved by:     re
    Discussed with:  grehan
    Reviewed by:     alc, kib
    Tested by:       pho
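To make the runtime-selected dirty bit concrete, here is a hedged, user-space model of the idea behind 'pmap_modified_bit()' (illustrative only; the bit positions are the ones quoted in the commit message, and the real kernel routine also covers the emulated A/D case):

    #include <stdint.h>

    /* Illustrative model, not the kernel code. */
    enum pmap_type { PT_X86, PT_EPT };

    /*
     * The dirty ("modified") bit mask is chosen at runtime from the pmap
     * type instead of being a compile-time PG_M macro: bit 6 in a regular
     * x86 PTE, bit 9 in a nested (EPT) PTE.
     */
    static inline uint64_t
    pmap_modified_bit(enum pmap_type type)
    {
            return (type == PT_EPT ? (UINT64_C(1) << 9) : (UINT64_C(1) << 6));
    }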
* Remove deprecated APIs to get the total and free memory available to vmm.ko.  [neel, 2013-04-25, 1 file, -24/+0]
    These APIs were relevant when memory for virtual machine allocation was
    hard partitioned away from the rest of the system but that is no longer
    the case. The sysctls that provided this information were garbage
    collected a while back.

    Obtained from:  NetApp
* Simplify the assignment of memory to virtual machines by requiring a single  [neel, 2013-03-18, 1 file, -9/+81]
    command line option "-m <memsize in MB>" to specify the memory size.

    Prior to this change the user needed to explicitly specify the amount of
    memory allocated below 4G (-m <lowmem>) and the amount above 4G
    (-M <highmem>). The "-M" option is no longer supported by 'bhyveload'
    and 'bhyve'.

    The start of the PCI hole is fixed at 3GB and cannot be directly changed
    using command line options. However it is still possible to change this
    in special circumstances via the 'vm_set_lowmem_limit()' API provided by
    libvmmapi.

    Submitted by:   Dinakar Medavaram (initial version)
    Reviewed by:    grehan
    Obtained from:  NetApp
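A hedged sketch of using that escape hatch (assuming the vm_open() and vm_set_lowmem_limit(struct vmctx *, uint32_t) declarations in vmmapi.h; the 2GB value is only an example):

    #include <err.h>
    #include <vmmapi.h>

    /* Illustrative: move the start of the PCI hole from 3GB down to 2GB. */
    static void
    lower_pci_hole(const char *vmname)
    {
            struct vmctx *ctx;

            ctx = vm_open(vmname);
            if (ctx == NULL)
                    err(1, "vm_open(%s)", vmname);
            vm_set_lowmem_limit(ctx, 2U * 1024 * 1024 * 1024);
    }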
* Implement guest vcpu pinning using 'pthread_setaffinity_np(3)'.  [neel, 2013-02-11, 1 file, -28/+0]
    Prior to this change pinning was implemented via an ioctl (VM_SET_PINNING)
    that called 'sched_bind()' on behalf of the user thread. The ULE
    implementation of 'sched_bind()' bumps up 'td_pinned' which in turn runs
    afoul of the assertion '(td_pinned == 0)' in userret().

    Using the cpuset affinity to implement pinning of the vcpu threads works
    with both 4BSD and ULE schedulers and has the happy side-effect of getting
    rid of a bunch of code in vmm.ko.

    Discussed with:  grehan
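For reference, pinning a thread with the FreeBSD cpuset affinity API looks roughly like the following (a minimal sketch, assuming the calling thread is the vcpu thread; error handling trimmed):

    #include <sys/param.h>
    #include <sys/cpuset.h>
    #include <pthread.h>
    #include <pthread_np.h>

    /* Pin the calling thread to a single host CPU. */
    static int
    pin_self_to_cpu(int hostcpu)
    {
            cpuset_t mask;

            CPU_ZERO(&mask);
            CPU_SET(hostcpu, &mask);
            return (pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask));
    }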
* Remove mptable generation code from libvmmapi and move it to bhyve.  [grehan, 2012-10-26, 1 file, -13/+0]
    Firmware tables require too much knowledge of system configuration, and
    it's difficult to pass that information in general terms to a library.
    The upcoming ACPI work exposed this - it will also live in bhyve.

    Also, remove code specific to NetApp from the mptable name, and remove
    the -n option from bhyve.

    Reviewed by:    neel
    Obtained from:  NetApp
* Add an api to map a vm capability type into a string to be used for display  [neel, 2012-10-12, 1 file, -11/+24]
    purposes.
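Such a mapping is typically a static name table indexed by the capability enum; the sketch below is illustrative only (the function name and table contents are placeholders, not the exact vmmapi.c code):

    #include <stddef.h>

    /* Placeholder table: the real one covers every vm_cap_type value. */
    static const char *capstrmap[] = {
            "hlt_exit",       /* VM_CAP_HALT_EXIT */
            "mtrap_exit",     /* VM_CAP_MTRAP_EXIT */
            "pause_exit",     /* VM_CAP_PAUSE_EXIT */
    };

    static const char *
    captype2name(int type)
    {
            if (type >= 0 &&
                type < (int)(sizeof(capstrmap) / sizeof(capstrmap[0])))
                    return (capstrmap[type]);
            return (NULL);
    }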
* The ioctl VM_GET_MEMORY_SEG is no longer able to return the host physical  [neel, 2012-10-04, 1 file, -2/+1]
    address associated with the guest memory segment. This is because there is
    no longer a 1:1 mapping between GPA and HPA. As a result 'vmmctl' can only
    display the guest physical address and the length of the lowmem and
    highmem segments.
* Change vm_malloc() to map pages in the guest physical address space in 4KB  [neel, 2012-10-04, 1 file, -2/+2]
    chunks. This breaks the assumption that the entire memory segment is
    contiguously allocated in the host physical address space. This also paves
    the way to satisfy the 4KB page allocations by requesting free pages from
    the VM subsystem as opposed to hard-partitioning host memory at boot time.
* Add ioctls to control the X2APIC capability exposed by the virtual machine to  [neel, 2012-09-25, 1 file, -0/+29]
    the guest. At the moment this simply sets the state in the 'vcpu' instance
    but there is no code that acts upon these settings.
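A hedged sketch of driving these ioctls from user space (assuming libvmmapi wrappers named vm_set_x2apic_state()/vm_get_x2apic_state() and the x2apic_state enum from <machine/vmm.h>; treat the names as assumptions rather than verified prototypes):

    #include <machine/vmm.h>
    #include <vmmapi.h>

    /* Illustrative: turn x2APIC off for a vcpu, then read the setting back. */
    static void
    set_xapic_mode(struct vmctx *ctx, int vcpu)
    {
            enum x2apic_state state;

            (void)vm_set_x2apic_state(ctx, vcpu, X2APIC_DISABLED);
            (void)vm_get_x2apic_state(ctx, vcpu, &state);
    }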
* Add sysctls to display the total and free amount of hard-wired mem for VMs  [grehan, 2012-08-26, 1 file, -0/+24]
    # sysctl hw.vmm
    hw.vmm.mem_free: 2145386496
    hw.vmm.mem_total: 2145386496

    Submitted by:  Takeshi HASEGAWA  hasegaw at gmail com
* Allow the 'bhyve' process to control whether or not the virtual machine sees an  [neel, 2012-08-04, 1 file, -2/+3]
    ioapic.

    Obtained from:  NetApp
* API to map an apic id to the vcpu.  [neel, 2012-08-04, 1 file, -0/+10]
    At the moment this is a simple mapping because the numerical values are
    identical.
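Because the mapping is currently the identity, an equivalent sketch is trivial (the function name below is illustrative, not necessarily the libvmmapi name):

    /* Illustrative: with the current scheme, local APIC id N is vcpu N. */
    static int
    apicid_to_vcpu(int apicid)
    {
            return (apicid);
    }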
* There is no need to explicitly specify the CR4_VMXE bit when writing to guest  [neel, 2012-08-04, 1 file, -5/+1]
    CR4. This bit is specific to Intel VT-x and removing it makes the library
    more portable to AMD/SVM. In the Intel VT-x implementation, the hypervisor
    will ensure that this bit is always set. See vmx_fix_cr4() for details.

    Suggested by:  grehan
* MSI-x interrupt support for PCI pass-thru devices.  [grehan, 2012-04-28, 1 file, -0/+19]
    Includes instruction emulation for memory r/w access. This opens the door
    for io-apic, local apic, hpet timer, and legacy device emulation.

    Submitted by:   ryan dot berryhill at sandvine dot com
    Reviewed by:    grehan
    Obtained from:  Sandvine
* First cut to port bhyve, vmmctl, and libvmmapi to HEAD.  [jhb, 2011-05-15, 1 file, -2/+0]
* Import of bhyve hypervisor and utilities, part 1.  [grehan, 2011-05-13, 1 file, -0/+647]
    vmm.ko  - kernel module for VT-x, VT-d and hypervisor control
    bhyve   - user-space sequencer and i/o emulation
    vmmctl  - dump of hypervisor register state
    libvmm  - front-end to vmm.ko chardev interface

    bhyve was designed and implemented by Neel Natu.

    Thanks to the following folk from NetApp who helped to make this
    available:
        Joe CaraDonna
        Peter Snyder
        Jeff Heller
        Sandeep Mann
        Steve Miller
        Brian Pawlowski