path: root/target-i386/helper.c
Commit message | Author | Date | Files | Lines
* target-i386: Use cpu_exec_enter/exit qom hooks | Richard Henderson | 2014-09-25 | 1 | -0/+21
  Note that the code that was within the "exit" ifdef block was identical to the cpu_compute_eflags inline, so make that simplification at the same time.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 1410626734-3804-4-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
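  For illustration, a minimal sketch of what the exit hook can look like, modeled on the cpu_compute_eflags() simplification mentioned above (not the verbatim patch):

      static void x86_cpu_exec_exit(CPUState *cs)
      {
          X86CPU *cpu = X86_CPU(cs);
          CPUX86State *env = &cpu->env;

          /* Fold the lazily evaluated condition codes back into env->eflags
           * when leaving the execution loop. */
          env->eflags = cpu_compute_eflags(env);
      }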
* cpu-exec: Make debug_excp_handler a QOM CPU method | Peter Maydell | 2014-09-12 | 1 | -2/+3
  Make the debug_excp_handler target specific hook into a QOM CPU method.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-i386: Don't forbid NX bit on PAE PDEs and PTEs | William Grant | 2014-08-25 | 1 | -2/+2
  Commit e8f6d00c30ed88910d0d985f4b2bf41654172ceb ("target-i386: raise page fault for reserved physical address bits") added a check that the NX bit is not set on PAE PDPEs, but it also added it to rsvd_mask for the rest of the function. This caused any PDEs or PTEs with NX set to be erroneously rejected, making PAE guests with NX support unusable.
  Signed-off-by: William Grant <wgrant@ubuntu.com>
  Cc: qemu-stable@nongnu.org
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
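  A sketch of the distinction the fix restores (mask names from target-i386/cpu.h; illustrative, not the verbatim diff). NX is treated as reserved only while checking the PDPE, instead of being folded into rsvd_mask for the rest of the walk:

      /* PAE PDPTEs have no NX bit, so reject NX for this entry only ... */
      if (pdpe & (rsvd_mask | PG_NX_MASK)) {
          goto do_fault_rsvd;
      }
      /* ... and leave rsvd_mask alone so later PDE/PTE checks still accept NX. */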
* target-i386: Allow execute from user mode when SMEP is enabled. | Ricky Zhou | 2014-07-15 | 1 | -1/+2
  Previously, execute would be disabled for all pages with SMEP enabled, regardless of what mode the access took place in.
  Signed-off-by: Ricky Zhou <ricky@rzhou.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
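  A self-contained sketch of the corrected decision (names are illustrative; the real code works on QEMU's prot bits and MMU indices):

      #include <stdbool.h>

      /* Decide whether a translation may be executed from.
       * pte_nx/pte_user mirror the NX and U/S bits of the leaf entry,
       * smep is CR4.SMEP, user_access is true for user-mode lookups. */
      static bool page_allows_exec(bool pte_nx, bool pte_user,
                                   bool smep, bool user_access)
      {
          if (pte_nx) {
              return false;          /* NX always forbids execution */
          }
          if (!user_access && smep && pte_user) {
              return false;          /* SMEP: supervisor fetch from a user page */
          }
          return true;               /* user-mode fetches are unaffected by SMEP */
      }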
* target-i386: cleanup x86_cpu_get_phys_page_debug | Paolo Bonzini | 2014-06-05 | 1 | -18/+17
  Make the code a bit more similar to x86_cpu_handle_mmu_fault.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: fix protection bits in the TLB for SMEP | Paolo Bonzini | 2014-06-05 | 1 | -1/+3
  User pages must be marked as non-executable when running under SMEP; otherwise, fetching the page first and then calling it will fail. With this patch, all SMEP testcases in kvm-unit-tests now pass.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: support long addresses for 4MB pages (PSE-36) | Paolo Bonzini | 2014-06-05 | 1 | -3/+9
  4MB pages can use 40-bit addresses by putting the higher 8 bits in bits 20-13 of the PDE. Bit 21 is reserved.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
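  For illustration, a standalone sketch of the PSE-36 address assembly described above (helper name hypothetical):

      #include <stdint.h>

      /* PDE bits 31:22 supply physical bits 31:22, PDE bits 20:13 supply
       * physical bits 39:32, and the low 22 bits of the virtual address
       * are the offset within the 4MB page. Bit 21 must be zero. */
      static uint64_t pse36_pde_to_paddr(uint32_t pde, uint32_t vaddr)
      {
          uint64_t paddr = pde & 0xffc00000u;              /* phys 31:22 */
          paddr |= (uint64_t)(pde & 0x001fe000u) << 19;    /* PDE 20:13 -> phys 39:32 */
          return paddr | (vaddr & 0x003fffffu);            /* 4MB page offset */
      }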
* target-i386: raise page fault for reserved bits in large pages | Paolo Bonzini | 2014-06-05 | 1 | -0/+1
  In large pages, bit 12 is for PAT, but bits starting at 13 are reserved.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
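  A sketch of the kind of check this adds (mask names from target-i386/cpu.h; illustrative): every address bit below the large-page frame except the PAT bit (bit 12) must be zero:

      /* For a 2MB/4MB/1GB mapping, bits [page_shift-1:13] are reserved;
       * bit 12 is the PAT bit and remains legal. */
      rsvd_mask |= (page_size - 1) & PG_ADDRESS_MASK & ~PG_PSE_PAT_MASK;
      if (pte & rsvd_mask) {
          goto do_fault_rsvd;
      }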
* target-i386: unify reserved bits and NX bit check | Paolo Bonzini | 2014-06-05 | 1 | -12/+4
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: simplify pte/vaddr calculation | Paolo Bonzini | 2014-06-05 | 1 | -8/+7
  They can be moved to after the dirty bit processing, and unified between CR0.PG=1 and CR0.PG=0.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: raise page fault for reserved physical address bits | Paolo Bonzini | 2014-06-05 | 1 | -12/+22
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: test reserved PS bit on PML4Es | Paolo Bonzini | 2014-06-05 | 1 | -0/+3
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: set correct error code for reserved bit access | Paolo Bonzini | 2014-06-05 | 1 | -17/+9
  The correct error code is 9 (present, reserved), not 8.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
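  For reference, the architectural page-fault error code bits behind the 8-versus-9 distinction:

      /* x86 page-fault error code:
       *   bit 0 (P)    - 1: protection or reserved-bit violation on a present
       *                  entry, 0: page not present
       *   bit 1 (W/R)  - 1: write access
       *   bit 2 (U/S)  - 1: user-mode access
       *   bit 3 (RSVD) - 1: reserved bit set in a paging-structure entry
       *   bit 4 (I/D)  - 1: instruction fetch
       * A reserved-bit fault must therefore report RSVD | P = 0x8 | 0x1 = 9. */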
* target-i386: introduce support for 1 GB pages | Paolo Bonzini | 2014-06-05 | 1 | -0/+7
  Given the simplifications to the code in the previous patches, this is now very simple to do.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: introduce do_check_protect label | Paolo Bonzini | 2014-06-05 | 1 | -36/+38
  This will help adding 1GB page support in the next patch.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: tweak handling of PG_NX_MASK | Paolo Bonzini | 2014-06-05 | 1 | -4/+4
  Remove the tail of the PAE case, so that we can use "goto" in the next patch to jump to the protection checks.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: commonize checks for PAE and non-PAE | Paolo Bonzini | 2014-06-05 | 1 | -79/+41
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: commonize checks for 4MB and 4KB pages | Paolo Bonzini | 2014-06-05 | 1 | -77/+41
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: commonize checks for 2MB and 4KB pages | Paolo Bonzini | 2014-06-05 | 1 | -83/+44
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: fix coding standards in x86_cpu_handle_mmu_fault | Paolo Bonzini | 2014-06-05 | 1 | -5/+9
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: simplify SMAP handling in MMU_KSMAP_IDX | Paolo Bonzini | 2014-06-05 | 1 | -8/+4
  Do not use this MMU index at all if CR4.SMAP is false, and drop the SMAP check from x86_cpu_handle_mmu_fault.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: fix kernel accesses with SMAP and CPL = 3 | Paolo Bonzini | 2014-06-05 | 1 | -4/+4
  With SMAP, implicit kernel accesses from user mode always behave as if AC=0. To do this, kernel mode is no longer a separate MMU mode. Instead, KERNEL_IDX is renamed to KSMAP_IDX and the kernel mode accessors wrap KSMAP_IDX and KNOSMAP_IDX.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
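  A simplified sketch of the resulting index selection (index and flag names as used in target-i386 after this series; the real helper also takes CPL into account):

      static inline int cpu_mmu_index_kernel(CPUX86State *env)
      {
          /* Explicit kernel accesses may skip SMAP checks when SMAP is off
           * or EFLAGS.AC is set; implicit accesses always use MMU_KSMAP_IDX. */
          if (!(env->cr[4] & CR4_SMAP_MASK) || (env->eflags & AC_MASK)) {
              return MMU_KNOSMAP_IDX;
          }
          return MMU_KSMAP_IDX;
      }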
* target-i386: rename KSMAP to KNOSMAP | Paolo Bonzini | 2014-06-05 | 1 | -4/+4
  This is the mode where SMAP is overridden, so put "NO" in its name.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: preserve FPU and MSR state on INIT | Paolo Bonzini | 2014-05-13 | 1 | -2/+8
  Most MSRs, plus the FPU, MMX, MXCSR, XMM and YMM registers should not be zeroed on INIT (Table 9-1 in the Intel SDM). Copy them out of CPUX86State and back in, instead of special casing env->pat. The relevant fields are already consecutive except PAT and SMBASE. However:
  - KVM and Hyper-V MSRs should be reset because they include memory locations written by the hypervisor. These MSRs are moved together at the end of the preserved area.
  - SVM state can be moved out of the way since it is written by VMRUN.
  Cc: Andreas Faerber <afaerber@suse.de>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
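  A sketch of the save/restore approach the message describes (start_init_save/end_init_save are illustrative names for the boundaries of the preserved range, and the surrounding init handler is omitted):

      /* Snapshot the whole CPU state, perform the normal reset, then copy the
       * to-be-preserved range (FPU, MMX/XMM/YMM, most MSRs) back over it. */
      CPUX86State *save = g_new(CPUX86State, 1);

      *save = *env;
      cpu_reset(cs);
      memcpy(&env->start_init_save, &save->start_init_save,
             offsetof(CPUX86State, end_init_save) -
             offsetof(CPUX86State, start_init_save));
      g_free(save);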
* kvm: forward INIT signals coming from the chipset | Paolo Bonzini | 2014-05-13 | 1 | -0/+4
  Reviewed-by: Gleb Natapov <gnatapov@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-i386: x86_cpu_get_phys_page_debug(): support 1GB page translation | Luiz Capitulino | 2014-03-31 | 1 | -0/+11
  Linux guests, when using more than 4GB of RAM, may end up using 1GB pages to store (kernel) data. When this happens, we're unable to debug a running Linux kernel with GDB:
  (gdb) p node_data[0]->node_id
  Cannot access memory at address 0xffff88013fffd3a0
  (gdb)
  GDB returns this error because x86_cpu_get_phys_page_debug() doesn't support translating 1GB pages in IA-32e paging mode and returns an error to GDB. This commit adds support for 1GB page translation for IA-32e paging.
  Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Andreas Färber <afaerber@suse.de>
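  A sketch of the added case (constants from target-i386; simplified): in IA-32e paging, a PDPE with the PS bit set maps a 1GB page, so the walk ends at that level:

      if (pdpe & PG_PSE_MASK) {
          /* 1GB page: the PDPE is the leaf entry, no PDE/PTE to read */
          page_size = 1024 * 1024 * 1024;
          pte = pdpe;
      } else {
          /* otherwise continue the walk through the PDE (and possibly PTE) */
      }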
* cputlb: Change tlb_set_page() argument to CPUState | Andreas Färber | 2014-03-13 | 1 | -1/+1
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cputlb: Change tlb_flush() argument to CPUState | Andreas Färber | 2014-03-13 | 1 | -5/+12
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu-exec: Change cpu_resume_from_signal() argument to CPUState | Andreas Färber | 2014-03-13 | 1 | -1/+1
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* exec: Change cpu_breakpoint_{insert,remove{,_by_ref,_all}} argument | Andreas Färber | 2014-03-13 | 1 | -2/+2
  Use CPUState. This allows cleaning up CPUArchState in gdbstub.
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* exec: Change cpu_watchpoint_{insert,remove{,_by_ref,_all}} argument | Andreas Färber | 2014-03-13 | 1 | -3/+8
  Use CPUState. This lets us drop a few local env usages.
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* translate-all: Change cpu_restore_state() argument to CPUState | Andreas Färber | 2014-03-13 | 1 | -1/+1
  This lets us drop some local variables in tlb_fill() functions.
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Move breakpoints field from CPU_COMMON to CPUState | Andreas Färber | 2014-03-13 | 1 | -1/+2
  Most targets were using offsetof(CPUFooState, breakpoints) to determine how much of CPUFooState to clear on reset. Use the next field after CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
  Signed-off-by: Andreas Färber <afaerber@suse.de>
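  For illustration, the pattern being replaced and its replacement (CPUFooState and the field names are placeholders taken from the commit message, not real identifiers):

      /* Before: a CPU_COMMON field doubled as the end-of-reset-area marker. */
      memset(env, 0, offsetof(CPUFooState, breakpoints));

      /* After: clear up to the first target-specific field following
       * CPU_COMMON, or the whole structure if there is none. */
      memset(env, 0, offsetof(CPUFooState, first_field_after_cpu_common));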
* cpu: Move watchpoint fields from CPU_COMMON to CPUState | Andreas Färber | 2014-03-13 | 1 | -3/+4
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Move exception_index field from CPU_COMMON to CPUState | Andreas Färber | 2014-03-13 | 1 | -3/+3
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Move mem_io_{pc,vaddr} fields from CPU_COMMON to CPUState | Andreas Färber | 2014-03-13 | 1 | -2/+3
  Reset them.
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Turn cpu_handle_mmu_fault() into a CPUClass hook | Andreas Färber | 2014-03-13 | 1 | -8/+12
  Note that while such functions may exist both for *-user and softmmu, only *-user uses the CPUState hook, while softmmu reuses the prototype for calling it directly.
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* target-i386: Clean up ENV_GET_CPU() usage | Andreas Färber | 2014-03-13 | 1 | -1/+1
  Commits fdfba1a298ae26dd44bcfdb0429314139a0bc55a, f606604f1c10b60ef294f1b9b229426521a365e3 and 2c17449b3022ca9623c4a7e2a504a4150ac4ad30 added usages of ENV_GET_CPU() macro in target-specific code. Use x86_env_get_cpu() or reuse existing X86CPU variable instead.
  Cc: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* exec: Make stl_phys_notdirty input an AddressSpace | Edgar E. Iglesias | 2014-02-11 | 1 | -8/+8
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* exec: Make stq_*_phys input an AddressSpace | Edgar E. Iglesias | 2014-02-11 | 1 | -1/+2
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* exec: Make ldq/ldub_*_phys input an AddressSpace | Edgar E. Iglesias | 2014-02-11 | 1 | -10/+10
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* exec: Make ldl_*_phys input an AddressSpace | Edgar E. Iglesias | 2014-02-11 | 1 | -4/+5
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* Merge remote-tracking branch 'afaerber/tags/qom-cpu-for-anthony' into staging | Anthony Liguori | 2014-01-10 | 1 | -6/+6
  QOM CPUState refactorings / X86CPU
  * TLB invalidation optimizations
  * X86CPU initialization cleanups
  * Preparations for X86CPU hot-unplug
  # gpg: Signature made Tue 24 Dec 2013 04:51:52 AM PST using RSA key ID 3E7E013F
  # gpg: Good signature from "Andreas Färber <afaerber@suse.de>"
  # gpg: aka "Andreas Färber <afaerber@suse.com>"
  # gpg: WARNING: This key is not certified with a trusted signature!
  # gpg: There is no indication that the signature belongs to the owner.
  # Primary key fingerprint: 174F 0347 1BCC 221A 6175 6F96 FA2E D12D 3E7E 013F
  * afaerber/tags/qom-cpu-for-anthony:
    target-i386: Cleanup 'foo=val' feature handling
    target-i386: Cleanup 'foo' feature handling
    target-i386: Convert 'check' and 'enforce' to static properties
    target-i386: Convert 'hv_spinlocks' to static property
    target-i386: Convert 'hv_vapic' to static property
    target-i386: Convert 'hv_relaxed' to static property
    cpu-exec: Optimize X86CPU usage in cpu_exec()
    target-i386: Move apic_state field from CPUX86State to X86CPU
    cputlb: Tidy memset() of arrays
    cputlb: Use memset() when flushing entries
* target-i386: Move apic_state field from CPUX86State to X86CPU | Chen Fan | 2013-12-23 | 1 | -6/+6
  This move prepares for the subsequent refactoring of the vCPU APIC.
  Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* x86: only allow real mode to access 32bit without LMA | Alexander Graf | 2013-12-23 | 1 | -0/+6
  When we're running in non-64bit mode with qemu-system-x86_64 we can still end up with virtual addresses that are above the 32bit boundary if a segment offset is set up. GNU Hurd does exactly that. It sets the segment offset to 0x80000000 and puts its EIP value to 0x8xxxxxxx to access low memory. This doesn't hit us when we enable paging, as there we just mask away the unused bits. But with real mode, we assume that vaddr == paddr, which is wrong in this case. Real hardware wraps the virtual address around at the 32bit boundary. So let's do the same. This fixes booting GNU Hurd in qemu-system-x86_64 for me.
  Reported-by: Michael Tokarev <mjt@tls.msk.ru>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
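  A sketch of the wrap described above (flag names from target-i386; close to, but not necessarily, the exact patch):

      if (!(env->cr[0] & CR0_PG_MASK)) {
          pte = addr;
          if (!(env->hflags & HF_LMA_MASK)) {
              /* Outside long mode, real-mode linear addresses wrap at 4GB. */
              pte = (uint32_t)pte;
          }
      }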
* Merge remote-tracking branch 'mjt/trivial-patches' into staging | Anthony Liguori | 2013-09-23 | 1 | -1/+3
  # By Stefan Weil (8) and others
  # Via Michael Tokarev
  * mjt/trivial-patches:
    tests/.gitignore: ignore test-throttle
    exec: Fix broken build for MinGW (regression)
    kvm: Fix compiler warning (clang)
    tcg-sparc: Fix parenthesis warning
    Makefile: Remove some more files when cleaning
    target-i386: Fix segment cache dump
    iov: avoid "orig_len may be used unitialized" warning
    vscclient: remove unnecessary use of uninitialized variable
    trace-events: Clean up with scripts/cleanup-trace-events.pl again
    tci: Fix qemu-alpha on 32 bit hosts (wrong assertions)
    *-user: Improve documentation for lock_user function
    MAINTAINERS: Add missing entry to filelist for TCI target
    translate-all: Fix formatting of dump output
    *-user: Fix typo in comment (ulocking -> unlocking)
    docs: Fix IO port number for CPU present bitmap.
    q35: Fix typo in constant DEFUALT -> DEFAULT.
    configure: Undefine _FORTIFY_SOURCE prior using it
  Message-id: 1379696296-32105-1-git-send-email-mjt@msgid.tls.msk.ru
* target-i386: Fix segment cache dump | Tobias Markus | 2013-09-20 | 1 | -1/+3
  When in Long Mode, cpu_x86_dump_seg_cache() logs "DS16" because the Default operation size bit (D/B bit) is not set for Long Mode Data Segments, since there are only Data Segments in Long Mode and no explicit 16/32/64-bit Descriptors. This patch fixes this by checking the Long Mode Active bit of the hidden flags variable and logging "DS" if it is set. (I.e. in Long Mode all Data Segments are logged as "DS".)
  Signed-off-by: Tobias Markus <tobias@markus-regensburg.de>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* Merge remote-tracking branch 'qemu-kvm/uq/master' into staging | Anthony Liguori | 2013-09-23 | 1 | -2/+0
  # By Alexey Kardashevskiy (3) and others
  # Via Paolo Bonzini
  * qemu-kvm/uq/master:
    target-i386: add feature kvm_pv_unhalt
    linux-headers: update to 3.12-rc1
    target-i386: forward CPUID cache leaves when -cpu host is used
    linux-headers: update to 3.11
    kvm: fix traces to use %x instead of %d
    kvmvapic: Clear also physical ROM address when entering INACTIVE state
    kvmvapic: Enter inactive state on hardware reset
    kvmvapic: Catch invalid ROM size
    kvm irqfd: support direct msimessage to irq translation
    fix steal time MSR vmsd callback to proper opaque type
    kvm: warn if num cpus is greater than num recommended
    cpu: Move cpu state syncs up into cpu_dump_state()
    exec: always use MADV_DONTFORK
  Message-id: 1379694292-1601-1-git-send-email-pbonzini@redhat.com
* cpu: Move cpu state syncs up into cpu_dump_state() | James Hogan | 2013-09-20 | 1 | -2/+0
  The x86 and ppc targets call cpu_synchronize_state() from their *_cpu_dump_state() callbacks to ensure that up to date state is dumped when KVM is enabled (for example when a KVM internal error occurs). Move this call up into the generic cpu_dump_state() function so that other KVM targets (namely MIPS) can take advantage of it. This requires kvm_cpu_synchronize_state() and cpu_synchronize_state() to be moved out of the #ifdef NEED_CPU_H in <sysemu/kvm.h> so that they're accessible to qom/cpu.c.
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Cc: Andreas Färber <afaerber@suse.de>
  Cc: Alexander Graf <agraf@suse.de>
  Cc: Gleb Natapov <gleb@redhat.com>
  Cc: qemu-ppc@nongnu.org
  Cc: kvm@vger.kernel.org
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
* target-i386: fix disassembly with PAE=1, PG=0 | Paolo Bonzini | 2013-09-12 | 1 | -18/+16
  CR4.PAE=1 will not enable paging if CR0.PG=0, but the "if" chain in x86_cpu_get_phys_page_debug says otherwise. Check CR0.PG before everything else. Fixes "-d in_asm" for a code section at the beginning of OVMF.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
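  A sketch of the corrected ordering (simplified; CR masks from target-i386): whether a PAE or long-mode walk applies only matters once CR0.PG is set:

      if (!(env->cr[0] & CR0_PG_MASK)) {
          pte = addr;                     /* no paging: linear == physical */
          page_size = 4096;
      } else if (env->cr[4] & CR4_PAE_MASK) {
          /* PAE walk, 4 levels when long mode (LMA) is active */
      } else {
          /* legacy 2-level 32-bit walk */
      }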