author | Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> | 2008-11-10 13:54:43 +0900
committer | Jesse Barnes <jbarnes@virtuousgeek.org> | 2008-11-11 13:33:05 -0800
commit | 2485b8674bf5762822e14e1554938e36511c0ae4 (patch)
tree | 9594d7366d234f9b23c33da9b087c120562b0070 /arch/parisc
parent | f21f237cf55494c3a4209de323281a3b0528da10 (diff)
PCI: ignore bit0 of _OSC return code
Currently, acpi_run_osc() checks all the bits in the _OSC result code (the
first DWORD in the capabilities buffer) to detect error conditions. But
bit 0, which does not indicate any error, must be ignored.
Bit 0 is used as the query flag at _OSC invocation time. Some platforms
clear it during _OSC evaluation, but others do not. On the latter
platforms, the current acpi_run_osc() mis-detects an error when _OSC is
evaluated with the query flag set, because it does not ignore bit 0.
As a result, __acpi_query_osc() always fails on such platforms. This is
the cause of pci_osc_control_set() not working since commit
4e39432f4df544d3dfe4fc90a22d87de64d15815, which changed
pci_osc_control_set() to use __acpi_query_osc().
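For illustration, here is a minimal sketch of the idea behind the fix, not
the committed patch itself: mask off bit 0 (the query flag) before testing
the remaining error bits of the returned DWORD. The OSC_* macro names
mirror the kernel's _OSC definitions but are reproduced here only as an
assumption, and the helper function is hypothetical.

    #include <stdint.h>

    #define OSC_QUERY_ENABLE             (1U << 0)  /* query flag, not an error bit */
    #define OSC_REQUEST_ERROR            (1U << 1)  /* _OSC request failed */
    #define OSC_INVALID_UUID_ERROR       (1U << 2)  /* unrecognized UUID */
    #define OSC_INVALID_REVISION_ERROR   (1U << 3)  /* unrecognized revision */
    #define OSC_CAPABILITIES_MASK_ERROR  (1U << 4)  /* capabilities masked by firmware */

    /*
     * Check the first DWORD of the _OSC return buffer for errors,
     * ignoring bit 0, which some firmware leaves set after a query.
     */
    static int osc_return_has_error(uint32_t ret_dword0)
    {
            return (ret_dword0 & ~OSC_QUERY_ENABLE) != 0;
    }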
Tested-by:"Tomasz Czernecki <czernecki@gmail.com>
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Diffstat (limited to 'arch/parisc')
0 files changed, 0 insertions, 0 deletions