Commit message | Author | Age | Files | Lines
* KVM: PPC: Find HTAB ourselves | Alexander Graf | 2010-05-17 | 2 | -13/+13

    For KVM we need to find the location of the HTAB. We can either rely on
    internal data structures of the kernel or ask the hardware. Ben issued
    complaints about the internal data structure method, so let's switch it
    to our own inquiry of the HTAB. Now we're fully independent :-).

    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Fix Book3S_64 Host MMU debug output | Alexander Graf | 2010-05-17 | 1 | -8/+12

    We have some debug output in Book3S_64. Some of that was invalid though,
    partially not even compiling because it accessed incorrect variables. So
    let's fix that up, making debugging more fun again.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Set VSID_PR also for Book3S_64 | Alexander Graf | 2010-05-17 | 1 | -0/+3

    Book3S_64 didn't set VSID_PR when we're in PR=1. This led to pretty bad
    behavior when searching for the shadow segment, as part of the code
    relied on VSID_PR being set. This patch fixes booting Book3S_64 guests.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Be more informative on BUG | Alexander Graf | 2010-05-17 | 1 | -2/+8

    We have a condition in the ppc64 host MMU code that should never occur.
    Unfortunately, it just happened to me, and I was rather puzzled as to
    why, because BUG_ON doesn't tell me anything useful. So let's add some
    more debug output in case this goes wrong. Also change BUG to WARN,
    since I don't want to reboot every time I mess something up.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
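    To illustrate the pattern (a minimal sketch only; the condition and the
    variable names are hypothetical, not the actual KVM structures):

        if (pte_count > max_ptes) {             /* "should never occur" */
                /* WARN() logs the details and keeps the host running;
                 * BUG_ON() would have killed it without saying why. */
                WARN(1, "KVM: found %lu PTEs, expected at most %lu\n",
                     pte_count, max_ptes);
                return;
        }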
* KVM: PPC: Make Alignment interrupts work again | Alexander Graf | 2010-05-17 | 1 | -0/+2

    In the process of merging Book3S_32 and 64 I somehow ended up having the
    alignment interrupt handler take last_inst, but the fetching code not
    fetching it. So we ended up with stale last_inst values. Let's just
    enable last_inst fetching for alignment interrupts too.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Improve split mode | Alexander Graf | 2010-05-17 | 4 | -39/+46

    When in split mode, instruction relocation and data relocation are not
    equal. So far we implemented this mode by reserving a special
    pseudo-VSID for the two cases and flushing all PTEs when going into
    split mode, which is slow. Unfortunately 32bit Linux and Mac OS X use
    split mode extensively. So to not slow down things too much, I came up
    with a different idea: Mark the split mode with a bit in the VSID and
    then treat it like any other segment. This means we can just flush the
    shadow segment cache, but keep the PTEs intact. I verified that this
    works with ppc32 Linux and Mac OS X 10.4 guests and does speed them up.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
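    The idea reads roughly like this sketch (VSID_SPLIT and both helper
    names are invented for illustration; the real bit layout lives in the
    Book3S shadow MMU code):

        #define VSID_SPLIT      (1ULL << 56)    /* hypothetical marker bit */

        /* Sketch: tag VSIDs created while IR != DR instead of flushing
         * every shadow PTE on each split-mode transition. */
        static u64 kvmppc_shadow_vsid(struct kvm_vcpu *vcpu, u64 vsid)
        {
                if (kvmppc_vcpu_in_split_mode(vcpu))    /* hypothetical */
                        vsid |= VSID_SPLIT;
                return vsid;
        }

    Only the shadow segment cache then needs flushing on the transition,
    since PTEs tagged with the other mode can no longer match.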
* KVM: PPC: Make Performance Counters work | Alexander Graf | 2010-05-17 | 2 | -0/+5

    When we get a performance counter interrupt, we need to route it on to
    the Linux handler after we get out of the guest context. We also need
    to tell our handling code that this particular interrupt doesn't need
    treatment. So let's add those two bits in, making perf work while a KVM
    guest is running.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Convert u64 -> ulong | Alexander Graf | 2010-05-17 | 7 | -21/+19

    There are some pieces in the code that I overlooked that still use u64s
    instead of longs. This slows down 32 bit hosts unnecessarily, so let's
    just move them to ulong.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Enable Book3S_32 KVM building | Alexander Graf | 2010-05-17 | 2 | -0/+30

    Now that we have all the bits and pieces in place, let's enable building
    of the Book3S_32 target.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add KVM intercept handlers | Alexander Graf | 2010-05-17 | 1 | -0/+14

    When an interrupt occurs, we don't yet know whether we're in guest
    context or in host context. When in guest context, KVM needs to handle
    it. So let's pull the same trick we did on Book3S_64: just add a macro
    that determines whether we're in guest context and, if so, jumps on to
    KVM code.

    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Check max IRQ prio | Alexander Graf | 2010-05-17 | 1 | -1/+1

    We have a define for what the highest bit of IRQ priorities is. So we
    can just as well use it in the bit-checking code and avoid triggering
    invalid IRQ values.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
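    In spirit, the check becomes something like this (a sketch; the scan
    shape and the delivery helper are illustrative, with BOOK3S_IRQPRIO_MAX
    playing the role of the existing define):

        /* Sketch: bound the priority scan by the defined maximum
         * instead of the raw width of the pending-IRQ word. */
        unsigned int prio = find_first_bit(pending, BOOK3S_IRQPRIO_MAX);

        if (prio < BOOK3S_IRQPRIO_MAX)
                kvmppc_deliver_irqprio(vcpu, prio);     /* hypothetical */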
* PPC: Export SWITCH_FRAME_SIZE | Alexander Graf | 2010-05-17 | 1 | -1/+1

    We need the SWITCH_FRAME_SIZE define on Book3S_32 now too. So let's
    export it unconditionally.

    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Export MMU variables | Alexander Graf | 2010-05-17 | 1 | -0/+5

    Our shadow MMU code needs to know where the HTAB is located and how big
    it is. So we need some variables from the kernel exported to module
    space if KVM is built as a module.

    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add Book3S compatibility code | Alexander Graf | 2010-05-17 | 4 | -3/+34

    Some of the code we had so far required defines and contained paths
    that were completely Book3S_64 specific. Since we now opened book3s.c
    to Book3S_32 too, we need to take care of these pieces. So let's add
    some minor code to avoid the Book3S_64 code paths where it makes sense,
    and add compat defines elsewhere.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Emulate segment fault | Alexander Graf | 2010-05-17 | 1 | -0/+23

    Book3S_32 doesn't know about segment faults; it only knows about page
    faults. So in order to know that we didn't map a segment, we need to
    fake segment faults. We do this by setting invalid segment registers to
    an invalid VSID and then checking for that VSID on normal page faults.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
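    Roughly like this sketch (the sentinel value and both helpers are
    invented names for illustration):

        #define VSID_INVALID    0x7fffffffULL   /* hypothetical sentinel */

        /* Sketch: a page fault that resolves to the sentinel VSID means
         * the segment was never mapped, so report a faked segment fault. */
        if (kvmppc_sr_vsid(vcpu, eaddr) == VSID_INVALID)
                return kvmppc_emulate_segment_fault(vcpu, eaddr);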
* KVM: PPC: Add SVCPU to Book3S_32 | Alexander Graf | 2010-05-17 | 2 | -0/+6

    We need to keep the pointer to the shadow vcpu somewhere accessible
    from within really early interrupt code. The best fit I found was the
    thread struct, as that resides in an SPRG. So let's put a pointer to
    the shadow vcpu in the thread struct and add an asm-offset so we can
    find it.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
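    In asm-offsets.c this amounts to one extra entry, along these lines
    (the field and symbol names are assumptions based on the description):

        /* Sketch: expose the offset of the shadow-vcpu pointer inside
         * the thread struct, so early assembly can reach it through the
         * SPRG that holds the thread struct. */
        DEFINE(THREAD_KVM_SVCPU,
               offsetof(struct thread_struct, kvm_shadow_vcpu));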
* KVM: PPC: Remove fetch fail code | Alexander Graf | 2010-05-17 | 1 | -4/+0

    When an instruction fetch fails, the inline function hook automatically
    detects that and starts the internal guest memory load function. So
    whenever we call kvmppc_get_last_inst(), we're sure the result is sane.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Release clean pages as clean | Alexander Graf | 2010-05-17 | 1 | -1/+5

    When we map a page as read-only, we can just release it as clean to
    KVM's page claim mechanisms, because we're pretty sure it hasn't been
    touched.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
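    Using the generic KVM page helpers, the pattern is (a sketch; the
    surrounding variables are illustrative):

        /* Sketch: a page we never made writable cannot be dirty, so
         * release it as clean and spare a needless writeback. */
        if (writable)
                kvm_release_pfn_dirty(pfn);
        else
                kvm_release_pfn_clean(pfn);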
* KVM: PPC: Make SLB switching code the new segment framework | Alexander Graf | 2010-05-17 | 2 | -160/+25

    We just introduced generic segment switching code that only needs to
    call small macros to do the actual switching, but keeps most of the
    entry / exit code generic. So let's move the SLB switching code over to
    use this new mechanism.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Make highmem code generic | Alexander Graf | 2010-05-17 | 1 | -100/+101

    Since we now have several fields in the shadow VCPU, we also change the
    internal calling convention between the different entry/exit code
    layers. Let's reflect that in the IR=1 code and make sure we use "long"
    defines for long field access.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Make real mode handler generic | Alexander Graf | 2010-05-17 | 1 | -31/+88

    The real mode handler code was originally written for 64 bit Book3S
    only. But since we now add 32 bit functionality too, we need to make
    some tweaks to it. This patch basically combines using the "long"
    access defines and using fields from the shadow VCPU we just moved
    there.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Extract MMU init | Alexander Graf | 2010-05-17 | 3 | -7/+20

    The host shadow MMU code needs to get initialized. It needs to fetch a
    segment it can use to put shadow PTEs into. That initialization code
    was in generic code, which is icky. Let's move it over to the
    respective MMU file.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Use now shadowed vcpu fields | Alexander Graf | 2010-05-17 | 3 | -48/+57

    The shadow vcpu now contains some fields we don't use from the vcpu
    anymore. Access to them happens through inline functions that happily
    use the shadow vcpu fields. So let's now ifdef them out to booke only
    and add asm-offsets.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* PPC: Add STLU | Alexander Graf | 2010-05-17 | 1 | -0/+2

    For assembly code there are several "long" load and store defines
    already. The one that's missing is the typical stack store, stdu/stwu.
    So let's add that define as well, making my KVM code happy.

    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Alexander Graf <agraf@suse.de>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
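    The define follows the existing pattern of the native-width load/store
    macros in asm-compat.h, roughly (a sketch of the convention, not a
    verbatim copy of the patch):

        /* Sketch: "store long with update" resolves to the native-width
         * stack-store instruction on each subarch. */
        #ifdef __powerpc64__
        #define PPC_STLU        stringify_in_c(stdu)
        #else
        #define PPC_STLU        stringify_in_c(stwu)
        #endif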
* KVM: PPC: Use CONFIG_PPC_BOOK3S define | Alexander Graf | 2010-05-17 | 4 | -11/+11

    Upstream recently added a new name for PPC64: Book3S_64. So instead of
    using CONFIG_PPC64 we should use CONFIG_PPC_BOOK3S consistently. That
    makes understanding the code easier (I hope).

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Use KVM_BOOK3S_HANDLER | Alexander Graf | 2010-05-17 | 4 | -4/+8

    So far we had a lot of code conditional on
    CONFIG_KVM_BOOK3S_64_HANDLER. As we're moving towards common code
    between 32 and 64 bits, most of these ifdefs can be moved to a more
    generic define, CONFIG_KVM_BOOK3S_HANDLER. This patch adds the new
    generic config option and moves the ifdefs over.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
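    The migration is mechanical; a guard like the following sketch (the
    guarded field is illustrative) simply becomes subarch-neutral:

        /* Sketch: was CONFIG_KVM_BOOK3S_64_HANDLER; the generic option
         * is selected by both the 32 and 64 bit handler configs. */
        #ifdef CONFIG_KVM_BOOK3S_HANDLER
                struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
        #endif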
* KVM: PPC: Improve indirect svcpu accessors | Alexander Graf | 2010-05-17 | 10 | -153/+290

    We already have some inline functions we use to access vcpu or svcpu
    structs, depending on whether we're on booke or book3s. Since we just
    put a few more registers into the svcpu, we also need to make sure the
    respective accessors are available and get used. So this patch moves
    direct use of the fields that are now in the svcpu struct over to
    inline function calls. While at it, it also moves the definitions of
    those inline functions to the respective header files for booke and
    book3s, greatly improving readability.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
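    The accessor pattern looks roughly like this (a sketch; the real book3s
    versions also handle registers that stay in the vcpu struct):

        /* Sketch: hide where a register actually lives behind an inline
         * accessor, so common code never touches svcpu fields directly. */
        static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
        {
                return to_svcpu(vcpu)->gpr[num];  /* book3s: GPR lives here */
        }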
* KVM: PPC: Add fields to shadow vcpu | Alexander Graf | 2010-05-17 | 1 | -0/+21

    After a lot of thought on how to make the entry / exit code easier, I
    figured it'd be clever to put even more register state into the shadow
    vcpu. That way we have more registers available to use, making the code
    easier to read. So this patch adds a few new fields to that shadow
    vcpu. Later on we will remove the originals from the vcpu and paca.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add kvm_book3s_32.h | Alexander Graf | 2010-05-17 | 1 | -0/+42

    In analogy to the 64 bit specific header file, this is the 32 bit
    counterpart. With this in place we can just always call to_svcpu and be
    assured we get the right pointer anywhere.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add kvm_book3s_64.h | Alexander Graf | 2010-05-17 | 1 | -0/+28

    In the process of generalizing as much code as possible, I also moved
    the shadow vcpu code together into a generic book3s file. Unfortunately
    the location of the shadow vcpu is different on 32 and 64 bit, so we
    need a wrapper function to tell us where it is. That sounded like a
    perfect fit for a subarch-specific header file. Here we can put
    anything that needs to be different between those two.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
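    The wrapper is a one-liner per subarch, roughly (the field locations
    are assumptions based on the description):

        /* Sketch, kvm_book3s_64.h: on 64 bit the shadow vcpu lives in
         * the per-CPU paca. */
        static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
        {
                return &get_paca()->shadow_vcpu;
        }

        /* Sketch, kvm_book3s_32.h: on 32 bit it hangs off the vcpu. */
        static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
        {
                return vcpu->arch.shadow_vcpu;
        }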
* PPC: Split context init/destroy functions | Alexander Graf | 2010-05-17 | 2 | -7/+24

    We need to reserve a context from KVM to make sure we have our own
    segment space. While we did that split for Book3S_64 already, 32 bit is
    still outstanding. So let's split it now.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add generic segment switching code | Alexander Graf | 2010-05-17 | 1 | -0/+257

    This is the code that will later be used instead of book3s_64_slb.S. It
    does the last step of guest entry and the first generic steps of guest
    exiting, once we have determined the interrupt is a KVM interrupt. It
    also reads the last used instruction from the guest virtual address
    space if necessary, to speed up that path. The new thing about this
    file is that it makes use of generic long load and store functions and
    calls a macro to fill in the actual segment switching code. That still
    needs to be done differently for book3s_32 and book3s_64.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add SR swapping code | Alexander Graf | 2010-05-17 | 1 | -0/+143

    Later in this series we will move the current segment switch code to
    generic code and make that call hooks for the specific sub-archs (32
    vs. 64 bit). This is the hook for 32 bits. It enables the entry and
    exit code to swap segment registers with values from the shadow vcpu
    structure.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add host MMU Support | Alexander Graf | 2010-05-17 | 1 | -0/+480

    In order to support 32 bit Book3S, we need to add code to enable our
    shadow MMU to actually add shadow PTEs. This is the module enabling
    that support.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Name generic 64-bit code generic | Alexander Graf | 2010-05-17 | 9 | -9/+7

    We have quite a bit of code that can be used by Book3S_32 and Book3S_64
    alike, so let's call it "Book3S" instead of "Book3S_64", so that we can
    later use it from the 32 bit port too.

    Signed-off-by: Alexander Graf <agraf@suse.de>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: cleanup for function unaccount_shadowed() | Wei Yongjun | 2010-05-17 | 1 | -1/+1

    Since gfn is not changed in the for loop, we do not need to call
    gfn_to_memslot_unaliased() inside the loop, and it is safe to move it
    out.

    Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
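    The shape of the change (a sketch of hoisting the loop-invariant call;
    the loop bounds and body are simplified):

        /* Before (sketch): the memslot lookup ran on every iteration. */
        for (i = PT_DIRECTORY_LEVEL; i <= max_level; ++i) {
                slot = gfn_to_memslot_unaliased(kvm, gfn);
                /* ... adjust the large-page count for this level ... */
        }

        /* After (sketch): gfn never changes, so look the slot up once. */
        slot = gfn_to_memslot_unaliased(kvm, gfn);
        for (i = PT_DIRECTORY_LEVEL; i <= max_level; ++i) {
                /* ... adjust the large-page count for this level ... */
        }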
* KVM: Get rid of dead function gva_to_page() | Gui Jianfeng | 2010-05-17 | 2 | -15/+0

    Nobody uses gva_to_page() anymore; get rid of it.

    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: Remove unused variable in rmap_next() | Gui Jianfeng | 2010-05-17 | 1 | -2/+0

    Remove an unused variable in rmap_next().

    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: Make use of is_large_pte() in walker | Gui Jianfeng | 2010-05-17 | 1 | -2/+2

    Make use of is_large_pte() instead of checking the PT_PAGE_SIZE_MASK
    bit directly.

    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
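    The helper already exists in arch/x86/kvm/mmu.c; the change is just the
    substitution (sketch):

        static int is_large_pte(u64 pte)
        {
                return pte & PT_PAGE_SIZE_MASK;
        }

        /* Walker check, before and after (sketch):
         *   if (pte & PT_PAGE_SIZE_MASK)   ... raw mask test
         *   if (is_large_pte(pte))         ... self-describing helper
         */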
* KVM: MMU: Move sync_page() first pte address calculation out of loop | Gui Jianfeng | 2010-05-17 | 1 | -2/+4

    Move the first-pte address calculation out of the loop to save some
    cycles.

    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
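    A sketch of the resulting shape (the offset handling is simplified):

        /* The base guest-pte address depends only on the shadow page,
         * so compute it once instead of on every loop iteration. */
        first_pte_gpa = gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);

        for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
                pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
                /* ... read the guest pte at pte_gpa and sync it ... */
        }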
* KVM: do not call hardware_disable() on CPU_UP_CANCELED | Lai Jiangshan | 2010-05-17 | 1 | -5/+0

    On CPU_UP_CANCELED, hardware_enable() has not been called on the CPU
    that is going up, because raw_notifier_call_chain(CPU_ONLINE) has not
    been called for this cpu. So there is nothing to disable; drop the
    handling for CPU_UP_CANCELED.

    Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
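    A sketch of the hotplug notifier after the change (simplified; the real
    handler in virt/kvm/kvm_main.c deals with more events and with running
    the disable on the right CPU):

        static int kvm_cpu_hotplug(struct notifier_block *notifier,
                                   unsigned long val, void *v)
        {
                switch (val) {
                case CPU_DYING:
                        /* hardware_enable() ran at CPU_ONLINE; undo it */
                        hardware_disable(NULL);
                        break;
                /* CPU_UP_CANCELED case dropped: enable never happened */
                }
                return NOTIFY_OK;
        }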
* KVM: MMU: Drop cr4.pge from shadow page role | Avi Kivity | 2010-05-17 | 3 | -4/+1

    Since commit bf47a760f66ad, we no longer handle ptes with the global
    bit set specially, so there is no reason to distinguish between shadow
    pages created with cr4.pge set and cleared. Such tracking is expensive
    when the guest toggles cr4.pge, so drop it.

    Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: use the correct RCU API for PROVE_RCU=y | Lai Jiangshan | 2010-05-17 | 9 | -15/+33

    The RCU/SRCU APIs have already changed for proving RCU usage. I got the
    following dmesg when PROVE_RCU=y because we used the incorrect API.
    This patch converts rcu_dereference() to srcu_dereference() or family
    APIs.

    ===================================================
    [ INFO: suspicious rcu_dereference_check() usage. ]
    ---------------------------------------------------
    arch/x86/kvm/mmu.c:3020 invoked rcu_dereference_check() without protection!

    other info that might help us debug this:

    rcu_scheduler_active = 1, debug_locks = 0
    2 locks held by qemu-system-x86/8550:
     #0:  (&kvm->slots_lock){+.+.+.}, at: [<ffffffffa011a6ac>] kvm_set_memory_region+0x29/0x50 [kvm]
     #1:  (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<ffffffffa012262d>] kvm_arch_commit_memory_region+0xa6/0xe2 [kvm]

    stack backtrace:
    Pid: 8550, comm: qemu-system-x86 Not tainted 2.6.34-rc4-tip-01028-g939eab1 #27
    Call Trace:
     [<ffffffff8106c59e>] lockdep_rcu_dereference+0xaa/0xb3
     [<ffffffffa012f6c1>] kvm_mmu_calculate_mmu_pages+0x44/0x7d [kvm]
     [<ffffffffa012263e>] kvm_arch_commit_memory_region+0xb7/0xe2 [kvm]
     [<ffffffffa011a5d7>] __kvm_set_memory_region+0x636/0x6e2 [kvm]
     [<ffffffffa011a6ba>] kvm_set_memory_region+0x37/0x50 [kvm]
     [<ffffffffa015e956>] vmx_set_tss_addr+0x46/0x5a [kvm_intel]
     [<ffffffffa0126592>] kvm_arch_vm_ioctl+0x17a/0xcf8 [kvm]
     [<ffffffff810a8692>] ? unlock_page+0x27/0x2c
     [<ffffffff810bf879>] ? __do_fault+0x3a9/0x3e1
     [<ffffffffa011b12f>] kvm_vm_ioctl+0x364/0x38d [kvm]
     [<ffffffff81060cfa>] ? up_read+0x23/0x3d
     [<ffffffff810f3587>] vfs_ioctl+0x32/0xa6
     [<ffffffff810f3b19>] do_vfs_ioctl+0x495/0x4db
     [<ffffffff810e6b2f>] ? fget_light+0xc2/0x241
     [<ffffffff810e416c>] ? do_sys_open+0x104/0x116
     [<ffffffff81382d6d>] ? retint_swapgs+0xe/0x13
     [<ffffffff810f3ba6>] sys_ioctl+0x47/0x6a
     [<ffffffff810021db>] system_call_fastpath+0x16/0x1b

    Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
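    The conversion itself is mechanical; kvm->memslots is one of the
    SRCU-protected pointers involved (sketch):

        /* Before: a plain RCU dereference, which lockdep flags because
         * the pointer is protected by kvm->srcu, not rcu_read_lock(). */
        slots = rcu_dereference(kvm->memslots);

        /* After: name the srcu_struct so PROVE_RCU can verify that the
         * SRCU read side (or a write-side lock) is actually held. */
        slots = srcu_dereference(kvm->memslots, &kvm->srcu);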
* Merge branch 'perf' | Avi Kivity | 2010-05-17 | 157 | -6494/+11060
|\
    Signed-off-by: Avi Kivity <avi@redhat.com>
| * perf: 'perf kvm' tool for monitoring guest performance from host | Zhang, Yanmin | 2010-04-19 | 30 | -316/+1407

    Here is the patch for the userspace perf tool.

    Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
| * KVM: Implement perf callbacks for guest sampling | Zhang, Yanmin | 2010-04-19 | 3 | -1/+53

    The patch below implements perf_guest_info_callbacks for KVM.

    Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
| * perf: Enhance perf to allow for guest statistic collection from host | Zhang, Yanmin | 2010-04-19 | 4 | -13/+77

    The patch below introduces perf_guest_info_callbacks and the related
    register/unregister functions, and adds PERF_RECORD_MISC_XXX bits
    meaning guest kernel and guest user space.

    Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
    Signed-off-by: Avi Kivity <avi@redhat.com>
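    The callback contract this describes looks like the following
    (reconstructed from the description, so treat the exact member set as
    approximate):

        /* A hypervisor module tells perf whether a sample came from a
         * guest, in which mode, and at which guest instruction pointer. */
        struct perf_guest_info_callbacks {
                int (*is_in_guest)(void);
                int (*is_user_mode)(void);
                unsigned long (*get_guest_ip)(void);
        };

        int perf_register_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);
        int perf_unregister_guest_info_callbacks(struct perf_guest_info_callbacks *cbs);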
| * Merge branch 'perf' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux-2.6 into perf/core | Ingo Molnar | 2010-04-15 | 12 | -777/+1634
| |\
| | * perf probe: Show function entry line as probe-able | Masami Hiramatsu | 2010-04-14 | 1 | -11/+63

    The function entry line should be shown as a probe-able line, because
    each function has a declared-line attribute.

    LKML-Reference: <20100414224007.14630.96915.stgit@localhost6.localdomain6>
    Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
| | * perf probe: Support DW_OP_plus_uconst in DW_AT_data_member_location | Masami Hiramatsu | 2010-04-14 | 1 | -5/+32

    DW_OP_plus_uconst can be used for DW_AT_data_member_location. This
    patch adds DW_OP_plus_uconst support when getting a structure member's
    offset.

    Committer note: fixed up the size_t format specifier in one case:

        cc1: warnings being treated as errors
        util/probe-finder.c: In function ‘die_get_data_member_location’:
        util/probe-finder.c:270: error: format ‘%d’ expects type ‘int’, but argument 4 has type ‘size_t’
        make: *** [/home/acme/git/build/perf/util/probe-finder.o] Error 1

    LKML-Reference: <20100414223958.14630.5230.stgit@localhost6.localdomain6>
    Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>