author	Ingo Molnar <mingo@elte.hu>	2008-03-10 18:04:34 +0100
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2008-03-10 18:09:05 -0700
commit	f5dbb55b995b77d396fe2204495a0af3e24d28c2 (patch)
tree	2a7521863251978ce1dabdab53f6ab676a2aee65 /include
parent	effe008d276f52674d5352deefb68ec409a5ef9b (diff)
fix BIOS PCI config cycle buglet causing ACPI boot regression
I figured out another ACPI related regression today: randconfig testing triggered an early boot-time hang on a laptop of mine (32-bit x86, config attached) - the screen was scrolling ACPI AML exceptions [with no serial port and no early debugging available].

v2.6.24 works fine on that laptop with the same .config, so after a few hours of bisection (had to restart it 3 times - other regressions interacted), it homed in on this commit:

| 10270d4838bdc493781f5a1cf2e90e9c34c9142f is first bad commit
|
| Author: Linus Torvalds <torvalds@woody.linux-foundation.org>
| Date:   Wed Feb 13 09:56:14 2008 -0800
|
|     acpi: fix acpi_os_read_pci_configuration() misuse of raw_pci_read()

Reverting this commit on top of -rc5 gave a correctly booting kernel. But this commit fixes a real bug, so the real question is: why did it break the bootup?

After quite some head-scratching, the following change stood out:

	-	pci_id->bus = tu8;
	+	pci_id->bus = val;

pci_id->bus is defined as u16:

	struct acpi_pci_id {
		u16 segment;
		u16 bus;
		...

and 'tu8' changed from u8 to u32. So previously we'd unconditionally mask the return value of acpi_os_read_pci_configuration() (raw_pci_read()) to 8 bits, but now we just trust whatever comes back from the PCI access routines and only crop it to 16 bits.

But if the high 8 bits of that result contain any noise, then we'll write that into ACPI's PCI ID descriptor and confuse the heck out of the rest of ACPI.

So let's check the PCI-BIOS code on that theory. We have this codepath for 8-bit accesses (arch/x86/pci/pcbios.c:pci_bios_read()):

	switch (len) {
	case 1:
		__asm__("lcall *(%%esi); cld\n\t"
			"jc 1f\n\t"
			"xor %%ah, %%ah\n"
			"1:"
			: "=c" (*value),
			  "=a" (result)
			: "1" (PCIBIOS_READ_CONFIG_BYTE),
			  "b" (bx),
			  "D" ((long)reg),
			  "S" (&pci_indirect));

Aha! The "=c" output constraint puts the full 32 bits of ECX into *value. But if the BIOS's routines set any of the high bits to nonzero, we'll return a value with more set in it than intended.

The other, more common PCI access methods (v1 and v2 PCI reads) clear out the high bits already; for example pci_conf1_read() does:

	switch (len) {
	case 1:
		*value = inb(0xCFC + (reg & 3));

which explicitly zero-extends the returned byte to 32 bits.

So zero-extending the result in the PCI-BIOS read routine fixes the regression on my laptop. (It might fix some other long-standing issues we had with PCI-BIOS during the past decade ...)

Both 8-bit and 16-bit accesses were buggy.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
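For illustration, here is a minimal user-space C sketch of the width issue described above; noisy_bios_read() and its return value are made up, but it shows how the old u8 temporary ('tu8') masks high-bit noise, while a u32 temporary ('val') lets bits 8-15 survive the store into the u16 bus field:

	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical stand-in for an 8-bit PCI-BIOS config read whose
	 * upper 24 bits contain whatever the BIOS left in the register. */
	static uint32_t noisy_bios_read(void)
	{
		return 0xDEAD5A12;	/* only the low byte (0x12) is valid */
	}

	int main(void)
	{
		uint16_t bus;

		uint8_t tu8 = noisy_bios_read();	/* old path: masked to 8 bits */
		bus = tu8;
		printf("via u8 temp:  0x%04x\n", (unsigned)bus);	/* 0x0012 */

		uint32_t val = noisy_bios_read();	/* new path: noise survives, */
		bus = val;				/* store only crops to 16 bits */
		printf("via u32 temp: 0x%04x\n", (unsigned)bus);	/* 0x5a12 - bogus bus number */

		return 0;
	}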
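And a sketch of what "zero-extending the result" in the PCI-BIOS read routine amounts to for the 8-bit and 16-bit cases - this is not the literal patch to arch/x86/pci/pcbios.c; bios_config_read() is a hypothetical placeholder for the lcall into the BIOS, and the point is only the explicit mask to the access width:

	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical placeholder for the lcall into the PCI BIOS; the
	 * upper bits of the returned register may contain garbage. */
	static uint32_t bios_config_read(int reg, int len)
	{
		(void)reg;
		(void)len;
		return 0xDEAD5A12;
	}

	/* Zero-extend the result to the access width instead of trusting
	 * the BIOS to have cleared the upper bits. */
	static void pci_bios_read_sketch(int reg, int len, uint32_t *value)
	{
		uint32_t raw = bios_config_read(reg, len);

		switch (len) {
		case 1:
			*value = raw & 0xff;	/* 8-bit read: keep only the low byte */
			break;
		case 2:
			*value = raw & 0xffff;	/* 16-bit read: keep only the low word */
			break;
		default:
			*value = raw;		/* 32-bit read: already full width */
			break;
		}
	}

	int main(void)
	{
		uint32_t v;

		pci_bios_read_sketch(0, 1, &v);
		printf("byte read: 0x%08x\n", v);	/* 0x00000012 */
		pci_bios_read_sketch(0, 2, &v);
		printf("word read: 0x%08x\n", v);	/* 0x00005a12 */
		return 0;
	}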
Diffstat (limited to 'include')
0 files changed, 0 insertions, 0 deletions