author     Paul Mackerras <paulus@samba.org>	2007-10-03 15:33:17 +1000
committer  Paul Mackerras <paulus@samba.org>	2007-10-03 15:33:17 +1000
commit     70f227d8846a8a9b1f36f71c42e11cc7c6e9408d
tree       fb4dd5c8240bdaada819fb569c01a392b52847b9
parent     a0c7ce9c877ceef8428798ac91fb794f83609aed
parent     f778089cb2445dfc6dfd30a7a567925fd8589f1e
Merge branch 'linux-2.6' into for-2.6.24
170 files changed, 2323 insertions, 1599 deletions
diff --git a/Documentation/crypto/async-tx-api.txt b/Documentation/crypto/async-tx-api.txt
new file mode 100644
index 0000000..c1e9545
--- /dev/null
+++ b/Documentation/crypto/async-tx-api.txt
@@ -0,0 +1,219 @@
+		 Asynchronous Transfers/Transforms API
+
+1 INTRODUCTION
+
+2 GENEALOGY
+
+3 USAGE
+3.1 General format of the API
+3.2 Supported operations
+3.3 Descriptor management
+3.4 When does the operation execute?
+3.5 When does the operation complete?
+3.6 Constraints
+3.7 Example
+
+4 DRIVER DEVELOPER NOTES
+4.1 Conformance points
+4.2 "My application needs finer control of hardware channels"
+
+5 SOURCE
+
+---
+
+1 INTRODUCTION
+
+The async_tx API provides methods for describing a chain of asynchronous
+bulk memory transfers/transforms with support for inter-transactional
+dependencies. It is implemented as a dmaengine client that smooths over
+the details of different hardware offload engine implementations. Code
+that is written to the API can optimize for asynchronous operation and
+the API will fit the chain of operations to the available offload
+resources.
+
+2 GENEALOGY
+
+The API was initially designed to offload the memory copy and
+xor-parity-calculations of the md-raid5 driver using the offload engines
+present in the Intel(R) Xscale series of I/O processors. It also built
+on the 'dmaengine' layer developed for offloading memory copies in the
+network stack using Intel(R) I/OAT engines. The following design
+features surfaced as a result:
+1/ implicit synchronous path: users of the API do not need to know if
+   the platform they are running on has offload capabilities. The
+   operation will be offloaded when an engine is available and carried
+   out in software otherwise.
+2/ cross channel dependency chains: the API allows a chain of dependent
+   operations to be submitted, like xor->copy->xor in the raid5 case.
+   The API automatically handles cases where the transition from one
+   operation to another implies a hardware channel switch.
+3/ dmaengine extensions to support multiple clients and operation types
+   beyond 'memcpy'
+
+3 USAGE
+
+3.1 General format of the API:
+struct dma_async_tx_descriptor *
+async_<operation>(<op specific parameters>,
+		  enum async_tx_flags flags,
+		  struct dma_async_tx_descriptor *dependency,
+		  dma_async_tx_callback callback_routine,
+		  void *callback_parameter);
+
+3.2 Supported operations:
+memcpy       - memory copy between a source and a destination buffer
+memset       - fill a destination buffer with a byte value
+xor          - xor a series of source buffers and write the result to a
+	       destination buffer
+xor_zero_sum - xor a series of source buffers and set a flag if the
+	       result is zero. The implementation attempts to prevent
+	       writes to memory
+
+3.3 Descriptor management:
+The return value is non-NULL and points to a 'descriptor' when the operation
+has been queued to execute asynchronously. Descriptors are recycled
+resources, under control of the offload engine driver, to be reused as
+operations complete. When an application needs to submit a chain of
+operations it must guarantee that the descriptor is not automatically recycled
+before the dependency is submitted. This requires that all descriptors be
+acknowledged by the application before the offload engine driver is allowed to
+recycle (or free) the descriptor. A descriptor can be acked by one of the
+following methods:
+1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
+2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
+   descriptor of a new operation.
+3/ calling async_tx_ack() on the descriptor.
+
+3.4 When does the operation execute?
+Operations do not immediately issue after return from the
+async_<operation> call. Offload engine drivers batch operations to
+improve performance by reducing the number of mmio cycles needed to
+manage the channel. Once a driver-specific threshold is met the driver
+automatically issues pending operations. An application can force this
+event by calling async_tx_issue_pending_all(). This operates on all
+channels since the application has no knowledge of channel to operation
+mapping.
+
+3.5 When does the operation complete?
+There are two methods for an application to learn about the completion
+of an operation.
+1/ Call dma_wait_for_async_tx(). This call causes the CPU to spin while
+   it polls for the completion of the operation. It handles dependency
+   chains and issuing pending operations.
+2/ Specify a completion callback. The callback routine runs in tasklet
+   context if the offload engine driver supports interrupts, or it is
+   called in application context if the operation is carried out
+   synchronously in software. The callback can be set in the call to
+   async_<operation>, or when the application needs to submit a chain of
+   unknown length it can use the async_trigger_callback() routine to set a
+   completion interrupt/callback at the end of the chain.
+
+3.6 Constraints:
+1/ Calls to async_<operation> are not permitted in IRQ context. Other
+   contexts are permitted provided constraint #2 is not violated.
+2/ Completion callback routines cannot submit new operations. This
+   results in recursion in the synchronous case and spin_locks being
+   acquired twice in the asynchronous case.
+
+3.7 Example:
+Perform a xor->copy->xor operation where each operation depends on the
+result from the previous operation:
+
+void complete_xor_copy_xor(void *param)
+{
+	printk("complete\n");
+}
+
+void run_xor_copy_xor(struct page **xor_srcs,
+		      int xor_src_cnt,
+		      struct page *xor_dest,
+		      size_t xor_len,
+		      struct page *copy_src,
+		      struct page *copy_dest,
+		      size_t copy_len)
+{
+	struct dma_async_tx_descriptor *tx;
+
+	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
+		       ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
+	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
+			  ASYNC_TX_DEP_ACK, tx, NULL, NULL);
+	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
+		       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
+		       tx, complete_xor_copy_xor, NULL);
+
+	async_tx_issue_pending_all();
+}
+
+See include/linux/async_tx.h for more information on the flags. See the
+ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
+implementation examples.
+
+4 DRIVER DEVELOPER NOTES
+
+4.1 Conformance points:
+There are a few conformance points required in dmaengine drivers to
+accommodate assumptions made by applications using the async_tx API:
+1/ Completion callbacks are expected to happen in tasklet context
+2/ dma_async_tx_descriptor fields are never manipulated in IRQ context
+3/ Use async_tx_run_dependencies() in the descriptor clean up path to
+   handle submission of dependent operations
+
+4.2 "My application needs finer control of hardware channels"
+This requirement seems to arise from cases where a DMA engine driver is
+trying to support device-to-memory DMA. The dmaengine and async_tx
+implementations were designed for offloading memory-to-memory
+operations; however, there are some capabilities of the dmaengine layer
+that can be used for platform-specific channel management.
+Platform-specific constraints can be handled by registering the
+application as a 'dma_client' and implementing a 'dma_event_callback' to
+apply a filter to the available channels in the system. Before showing
+how to implement a custom dma_event callback some background of
+dmaengine's client support is required.
+
+The following routines in dmaengine support multiple clients requesting
+use of a channel:
+- dma_async_client_register(struct dma_client *client)
+- dma_async_client_chan_request(struct dma_client *client)
+
+dma_async_client_register takes a pointer to an initialized dma_client
+structure. It expects that the 'event_callback' and 'cap_mask' fields
+are already initialized.
+
+dma_async_client_chan_request triggers dmaengine to notify the client of
+all channels that satisfy the capability mask. It is up to the client's
+event_callback routine to track how many channels the client needs and
+how many it is currently using. The dma_event_callback routine returns a
+dma_state_client code to let dmaengine know the status of the
+allocation.
+
+Below is an example of how to extend this functionality for
+platform-specific filtering of the available channels beyond the
+standard capability mask:
+
+static enum dma_state_client
+my_dma_client_callback(struct dma_client *client,
+		       struct dma_chan *chan, enum dma_state state)
+{
+	struct dma_device *dma_dev;
+	struct my_platform_specific_dma *plat_dma_dev;
+
+	dma_dev = chan->device;
+	plat_dma_dev = container_of(dma_dev,
+				    struct my_platform_specific_dma,
+				    dma_dev);
+
+	if (!plat_dma_dev->platform_specific_capability)
+		return DMA_DUP;
+
+	. . .
+}
+
+5 SOURCE
+include/linux/dmaengine.h: core header file for DMA drivers and clients
+drivers/dma/dmaengine.c: offload engine channel management routines
+drivers/dma/: location for offload engine drivers
+include/linux/async_tx.h: core header file for the async_tx api
+crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
+crypto/async_tx/async_memcpy.c: copy offload
+crypto/async_tx/async_memset.c: memory fill offload
+crypto/async_tx/async_xor.c: xor and xor zero sum offload
diff --git a/Documentation/devices.txt b/Documentation/devices.txt
index 8de132a..6c46730 100644
--- a/Documentation/devices.txt
+++ b/Documentation/devices.txt
@@ -94,6 +94,8 @@ Your cooperation is appreciated.
 		  9 = /dev/urandom	Faster, less secure random number gen.
 		 10 = /dev/aio		Asynchronous I/O notification interface
 		 11 = /dev/kmsg		Writes to this come out as printk's
+		 12 = /dev/oldmem	Used by crashdump kernels to access
+					the memory of the kernel that crashed.
 
   1 block	RAM disk
 		  0 = /dev/ram0		First RAM disk
diff --git a/Documentation/input/iforce-protocol.txt b/Documentation/input/iforce-protocol.txt
index 95df4ca..8777d2d 100644
--- a/Documentation/input/iforce-protocol.txt
+++ b/Documentation/input/iforce-protocol.txt
@@ -1,254 +1,254 @@
-** Introduction
-This document describes what I managed to discover about the protocol used to
-specify force effects to I-Force 2.0 devices. None of this information comes
-from Immersion, so do not take anything written here on trust. This document
-is intended to help in understanding the protocol; it is not a reference.
-Comments and corrections are welcome. To contact me, send an email to:
-deneux@ifrance.com
-
-** WARNING **
-I cannot be held responsible for any damage or harm caused if you try to
-send data to your I-Force device based on what you read in this document.
-
-** Preliminary Notes:
-All values are given in hexadecimal with the msb on the left (big-endian).
-Beware: multi-byte values inside packets are encoded little-endian. Bytes
-whose roles are unknown are marked ???. Information that needs deeper
-inspection is marked (?).
-
-** General form of a packet **
-This is how packets look when the device communicates over RS-232:
-2B OP LEN DATA CS
-CS is the checksum. It is equal to the exclusive or of all bytes.
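As an illustration, here is a minimal C sketch of that checksum. It assumes
CS is the exclusive-or of the 2B, OP, LEN and DATA bytes; whether the start
byte is really included is my assumption, so verify against a real capture:

#include <stddef.h>
#include <stdint.h>

/* XOR together the bytes of an RS-232 packet body: 2B OP LEN DATA. */
static uint8_t iforce_checksum(const uint8_t *buf, size_t len)
{
	uint8_t cs = 0;
	size_t i;

	for (i = 0; i < len; i++)
		cs ^= buf[i];
	return cs;
}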
-
-When using USB:
-OP DATA
-The 2B, LEN and CS fields are gone, presumably because USB already provides
-framing and error detection.
-
-First, I describe the packets sent by the device to the computer.
-
-** Device input state
-This packet is used to indicate the state of each button and the value of
-each axis.
-OP= 01 for a joystick, 03 for a wheel
-LEN= Varies from device to device
-00 X-Axis lsb
-01 X-Axis msb
-02 Y-Axis lsb, or gas pedal for a wheel
-03 Y-Axis msb, or brake pedal for a wheel
-04 Throttle
-05 Buttons
-06 Lower 4 bits: Buttons
- Upper 4 bits: Hat
-07 Rudder
-
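To make the byte layout above concrete, here is a hedged C sketch of one way
to decode the joystick report (OP=01). The struct and field names are mine,
the axis words are assembled lsb-first per the preliminary notes, and the
signedness of the axes is an assumption:

#include <stdint.h>

struct iforce_joy_state {
	int16_t x, y;		/* bytes 00-03, lsb first */
	uint8_t throttle;	/* byte 04 */
	uint16_t buttons;	/* byte 05 plus the low nibble of byte 06 */
	uint8_t hat;		/* high nibble of byte 06 */
	uint8_t rudder;		/* byte 07 */
};

static void iforce_decode_joy(const uint8_t d[8], struct iforce_joy_state *s)
{
	s->x = (int16_t)(d[0] | (d[1] << 8));
	s->y = (int16_t)(d[2] | (d[3] << 8));
	s->throttle = d[4];
	s->buttons = (uint16_t)(d[5] | ((d[6] & 0x0f) << 8));
	s->hat = d[6] >> 4;
	s->rudder = d[7];
}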
-** Device effects states
-OP= 02
-LEN= Varies
-00 ? Bit 1 (Value 2) is the value of the deadman switch
-01 Bit 8 is set if the effect is playing. Bits 0 to 7 are the effect id.
-02 ??
-03 Address of parameter block changed (lsb)
-04 Address of parameter block changed (msb)
-05 Address of second parameter block changed (lsb)
-... depending on the number of parameter blocks updated
-
-** Force effect **
-OP= 01
-LEN= 0e
-00 Channel (when playing several effects at the same time, each must be assigned a channel)
-01 Wave form
- Val 00 Constant
- Val 20 Square
- Val 21 Triangle
- Val 22 Sine
- Val 23 Sawtooth up
- Val 24 Sawtooth down
- Val 40 Spring (Force = f(pos))
- Val 41 Friction (Force = f(velocity)) and Inertia (Force = f(acceleration))
-
-
-02 Axes affected and trigger
- Bits 4-7: Val 2 = effect along one axis. Byte 05 indicates direction
- Val 4 = X axis only. Byte 05 must contain 5a
- Val 8 = Y axis only. Byte 05 must contain b4
-        Val c = X and Y axes. Byte 05 must contain 60
- Bits 0-3: Val 0 = No trigger
- Val x+1 = Button x triggers the effect
- When the whole byte is 0, cancel the previously set trigger
-
-03-04 Duration of effect (little endian encoding, in ms)
-
-05 Direction of effect, if applicable. Else, see 02 for value to assign.
-
-06-07 Minimum time between triggering.
-
-08-09 Address of periodicity or magnitude parameters
-0a-0b Address of attack and fade parameters, or ffff if none.
-*or*
-08-09 Address of interactive parameters for X-axis, or ffff if not applicable
-0a-0b Address of interactive parameters for Y-axis, or ffff if not applicable
-
-0c-0d Delay before execution of effect (little endian encoding, in ms)
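A hedged sketch of filling the 0x0e-byte payload described above, for a
single-axis effect with no trigger; the function and parameter names are
mine, and the little-endian field order follows the notes:

#include <stdint.h>

static void iforce_build_effect(uint8_t pkt[0x0e], uint8_t channel,
				uint8_t waveform, uint16_t duration_ms,
				uint8_t direction, uint16_t interval_ms,
				uint16_t param1_addr, uint16_t param2_addr,
				uint16_t delay_ms)
{
	pkt[0x00] = channel;
	pkt[0x01] = waveform;		/* e.g. 20 = square, 22 = sine */
	pkt[0x02] = 0x20;		/* bits 4-7 = 2: one axis, byte 05 gives direction */
	pkt[0x03] = duration_ms & 0xff;	/* durations are little-endian */
	pkt[0x04] = duration_ms >> 8;
	pkt[0x05] = direction;
	pkt[0x06] = interval_ms & 0xff;
	pkt[0x07] = interval_ms >> 8;
	pkt[0x08] = param1_addr & 0xff;	/* periodicity or magnitude block */
	pkt[0x09] = param1_addr >> 8;
	pkt[0x0a] = param2_addr & 0xff;	/* attack/fade block, or ffff if none */
	pkt[0x0b] = param2_addr >> 8;
	pkt[0x0c] = delay_ms & 0xff;
	pkt[0x0d] = delay_ms >> 8;
}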
-
-
-** Time based parameters **
-
-*** Attack and fade ***
-OP= 02
-LEN= 08
-00-01 Address where to store the parameters
-02-03 Duration of attack (little endian encoding, in ms)
-04 Level at end of attack. Signed byte.
-05-06 Duration of fade.
-07 Level at end of fade.
-
-*** Magnitude ***
-OP= 03
-LEN= 03
-00-01 Address
-02 Level. Signed byte.
-
-*** Periodicity ***
-OP= 04
-LEN= 07
-00-01 Address
-02 Magnitude. Signed byte.
-03 Offset. Signed byte.
-04 Phase. Val 00 = 0 deg, Val 40 = 90 degs.
-05-06 Period (little endian encoding, in ms)
-
-** Interactive parameters **
-OP= 05
-LEN= 0a
-00-01 Address
-02 Positive Coeff
-03 Negative Coeff
-04+05 Offset (center)
-06+07 Dead band (Val 01F4 = 5000 (decimal))
-08 Positive saturation (Val 0a = 1000 (decimal) Val 64 = 10000 (decimal))
-09 Negative saturation
-
-The encoding is a bit odd here: the coeffs are signed values; the maximum
-value is 64 (100 decimal), the minimum is 9c (-100 decimal).
-For the offset, the minimum value is FE0C, the maximum value is 01F4.
-For the deadband, the minimum value is 0, the maximum is 03E8.
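A small sketch of clamping and encoding a coefficient into that signed-byte
range (the clamping policy is my choice, not something the device mandates):

#include <stdint.h>

/* Clamp to -100..100 and encode as the signed byte 9c..64. */
static uint8_t iforce_encode_coeff(int value)
{
	if (value > 100)
		value = 100;
	if (value < -100)
		value = -100;
	return (uint8_t)(int8_t)value;
}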
-
-** Controls **
-OP= 41
-LEN= 03
-00 Channel
-01 Start/Stop
- Val 00: Stop
- Val 01: Start and play once.
- Val 41: Start and play n times (See byte 02 below)
-02 Number of iterations n.
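For instance, a sketch of the 3-byte payload that starts an effect n times
(framing and checksum as in the general packet form above):

#include <stdint.h>

static void iforce_build_control(uint8_t pkt[3], uint8_t channel, uint8_t n)
{
	pkt[0] = channel;
	pkt[1] = n ? 0x41 : 0x00;	/* 41 = start and play n times, 00 = stop */
	pkt[2] = n;			/* number of iterations */
}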
-
-** Init **
-
-*** Querying features ***
-OP= ff
-Query command. Length varies according to the query type.
-The general format of this packet is:
-ff 01 QUERY [INDEX] CHECKSUM
-Responses are of the same form:
-FF LEN QUERY VALUE_QUERIED CHECKSUM2
-where LEN = 1 + length(VALUE_QUERIED)
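A sketch of framing such a query over RS-232, reusing the iforce_checksum()
sketch from earlier (whether CS covers the start byte is still my assumption):

#include <stddef.h>
#include <stdint.h>

static size_t iforce_build_query(uint8_t *out, uint8_t query)
{
	size_t n = 0;

	out[n++] = 0x2b;	/* start byte */
	out[n++] = 0xff;	/* OP: query command */
	out[n++] = 0x01;	/* LEN */
	out[n++] = query;	/* e.g. 42 = 'B'uffer size, 4e = 'N'umber of effects */
	out[n] = iforce_checksum(out, n);
	return n + 1;
}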
-
-**** Query ram size ****
-QUERY = 42 ('B'uffer size)
-The device should reply with the same packet plus two additional bytes
-containing the size of the memory:
-ff 03 42 03 e8 CS would mean that the device has 1000 bytes of RAM available.
-
-**** Query number of effects ****
-QUERY = 4e ('N'umber of effects)
-The device should respond by sending the number of effects that can be played
-at the same time (one byte)
-ff 02 4e 14 CS would stand for 20 effects.
-
-**** Vendor's id ****
-QUERY = 4d ('M'anufacturer)
-Query the vendor's id (2 bytes)
-
-**** Product id *****
-QUERY = 50 ('P'roduct)
-Query the product id (2 bytes)
-
-**** Open device ****
-QUERY = 4f ('O'pen)
-No data returned.
-
-**** Close device *****
-QUERY = 43 ('C'lose)
-No data returned.
-
-**** Query effect ****
-QUERY = 45 ('E')
-Send effect type.
-Returns nonzero if supported (2 bytes)
-
-**** Firmware Version ****
-QUERY = 56 ('V'ersion)
-Sends back 3 bytes - major, minor, subminor
-
-*** Initialisation of the device ***
-
-**** Set Control ****
-!!! Device dependent, can be different on different models !!!
-OP= 40 <idx> <val> [<val>]
-LEN= 2 or 3
-00 Idx
- Idx 00 Set dead zone (0..2048)
- Idx 01 Ignore Deadman sensor (0..1)
- Idx 02 Enable comm watchdog (0..1)
- Idx 03 Set the strength of the spring (0..100)
- Idx 04 Enable or disable the spring (0/1)
- Idx 05 Set axis saturation threshold (0..2048)
-
-**** Set Effect State ****
-OP= 42 <val>
-LEN= 1
-00 State
- Bit 3 Pause force feedback
- Bit 2 Enable force feedback
- Bit 0 Stop all effects
-
-**** Set overall gain ****
-OP= 43 <val>
-LEN= 1
-00 Gain
- Val 00 = 0%
- Val 40 = 50%
- Val 80 = 100%
-
-** Parameter memory **
-
-Each device has a certain amount of memory to store parameters of effects.
-The amount of RAM may vary; I encountered values from 200 to 1000 bytes.
-Below is the amount of memory apparently needed for each set of parameters:
- - period : 0c
- - magnitude : 02
- - attack and fade : 0e
- - interactive : 08
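A sketch of a trivial bump allocator over that parameter RAM, using the sizes
listed above; the total RAM size would come from the 'B' query described in
the Init section:

/* Bytes apparently needed per parameter block (see the list above). */
enum {
	IFORCE_PARAM_PERIOD	 = 0x0c,
	IFORCE_PARAM_MAGNITUDE	 = 0x02,
	IFORCE_PARAM_ENVELOPE	 = 0x0e,	/* attack and fade */
	IFORCE_PARAM_INTERACTIVE = 0x08,
};

/* Return the address for a new block, or -1 if device RAM is full. */
static int iforce_alloc_param(unsigned int *next_free, unsigned int ram_size,
			      unsigned int block_size)
{
	unsigned int addr = *next_free;

	if (addr + block_size > ram_size)
		return -1;
	*next_free = addr + block_size;
	return (int)addr;
}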
-
-** Appendix: How to study the protocol ? **
-
-1. Generate effects using the force editor provided with the DirectX SDK, or use Immersion Studio (freely available on their web site in the developer section: www.immersion.com)
-2. Start a program that spies on the RS-232 or USB traffic (depending on where you connected your joystick/wheel). I used ComPortSpy from fCoder (alpha version!)
-3. Play the effect, and watch what happens on the spy screen.
-
-A few words about ComPortSpy:
-At first glance, this software seems, hum, well... buggy. In fact, data appears with a few seconds of latency. Personally, I restart it every time I play an effect.
-Remember it's free (as in free beer) and alpha!
-
-** URLS **
-Check www.immersion.com for Immersion Studio, and www.fcoder.com for ComPortSpy.
-
-** Author of this document **
-Johann Deneux <deneux@ifrance.com>
-Home page at http://www.esil.univ-mrs.fr/~jdeneux/projects/ff/
-
-Additions by Vojtech Pavlik.
-
-I-Force is a trademark of Immersion Corp.
diff --git a/Documentation/lguest/lguest.c b/Documentation/lguest/lguest.c
index f791840..73c5f1f 100644
--- a/Documentation/lguest/lguest.c
+++ b/Documentation/lguest/lguest.c
@@ -882,7 +882,7 @@ static u32 handle_block_output(int fd, const struct iovec *iov,
 	 * of the block file (possibly extending it). */
 	if (off + len > device_len) {
 		/* Trim it back to the correct length */
-		ftruncate(dev->fd, device_len);
+		ftruncate64(dev->fd, device_len);
 		/* Die, bad Guest, die. */
 		errx(1, "Write past end %llu+%u", off, len);
 	}
diff --git a/MAINTAINERS b/MAINTAINERS
index 06259b4..2ef0862 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2624,8 +2624,8 @@ P:	Harald Welte
 P:	Jozsef Kadlecsik
 P:	Patrick McHardy
 M:	kaber@trash.net
-L:	netfilter-devel@lists.netfilter.org
-L:	netfilter@lists.netfilter.org	(subscribers-only)
+L:	netfilter-devel@vger.kernel.org
+L:	netfilter@vger.kernel.org
 L:	coreteam@netfilter.org
 W:	http://www.netfilter.org/
 W:	http://www.iptables.org/
@@ -2678,7 +2678,7 @@ M:	jmorris@namei.org
 P:	Hideaki YOSHIFUJI
 M:	yoshfuji@linux-ipv6.org
 P:	Patrick McHardy
-M:	kaber@coreworks.de
+M:	kaber@trash.net
 L:	netdev@vger.kernel.org
 T:	git kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6.git
 S:	Maintained
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,8 +1,8 @@
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 23
-EXTRAVERSION =-rc6
-NAME = Pink Farting Weasel
+EXTRAVERSION =-rc9
+NAME = Arr Matey! A Hairy Bilge Rat!
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
diff --git a/arch/arm/kernel/bios32.c b/arch/arm/kernel/bios32.c
index 240c448..a2dd930 100644
--- a/arch/arm/kernel/bios32.c
+++ b/arch/arm/kernel/bios32.c
@@ -338,7 +338,7 @@ pbus_assign_bus_resources(struct pci_bus *bus, struct pci_sys_data *root)
  * pcibios_fixup_bus - Called after each bus is probed,
  * but before its children are examined.
  */
-void __devinit pcibios_fixup_bus(struct pci_bus *bus)
+void pcibios_fixup_bus(struct pci_bus *bus)
 {
 	struct pci_sys_data *root = bus->sysdata;
 	struct pci_dev *dev;
@@ -419,7 +419,7 @@ void __devinit pcibios_fixup_bus(struct pci_bus *bus)
 /*
  * Convert from Linux-centric to bus-centric addresses for bridge devices.
  */
-void __devinit
+void
 pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region,
 			 struct resource *res)
 {
diff --git a/arch/arm/mach-ep93xx/core.c b/arch/arm/mach-ep93xx/core.c
index 851cc71..70b2c78 100644
--- a/arch/arm/mach-ep93xx/core.c
+++ b/arch/arm/mach-ep93xx/core.c
@@ -336,7 +336,7 @@ static int ep93xx_gpio_irq_type(unsigned int irq, unsigned int type)
 	if (line >= 0 && line < 16) {
 		gpio_line_config(line, GPIO_IN);
 	} else {
-		gpio_line_config(EP93XX_GPIO_LINE_F(line), GPIO_IN);
+		gpio_line_config(EP93XX_GPIO_LINE_F(line-16), GPIO_IN);
 	}
 
 	port = line >> 3;
diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index b4e9b73..76b800a 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -57,7 +57,17 @@ static void l2x0_inv_range(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
-	start &= ~(CACHE_LINE_SIZE - 1);
+	if (start & (CACHE_LINE_SIZE - 1)) {
+		start &= ~(CACHE_LINE_SIZE - 1);
+		sync_writel(start, L2X0_CLEAN_INV_LINE_PA, 1);
+		start += CACHE_LINE_SIZE;
+	}
+
+	if (end & (CACHE_LINE_SIZE - 1)) {
+		end &= ~(CACHE_LINE_SIZE - 1);
+		sync_writel(end, L2X0_CLEAN_INV_LINE_PA, 1);
+	}
+
 	for (addr = start; addr < end; addr += CACHE_LINE_SIZE)
 		sync_writel(addr, L2X0_INV_LINE_PA, 1);
 	cache_sync();
diff --git a/arch/i386/boot/header.S b/arch/i386/boot/header.S
index 7f4a2c5..f3140e5 100644
--- a/arch/i386/boot/header.S
+++ b/arch/i386/boot/header.S
@@ -275,7 +275,7 @@ die:
 	hlt
 	jmp	die
 
-	.size die, .-due
+	.size die, .-die
 
 	.section ".initdata", "a"
 setup_corrupt:
diff --git a/arch/i386/boot/memory.c b/arch/i386/boot/memory.c
index 1a2e62d..3783539 100644
--- a/arch/i386/boot/memory.c
+++ b/arch/i386/boot/memory.c
@@ -20,6 +20,7 @@
 
 static int detect_memory_e820(void)
 {
+	int count = 0;
 	u32 next = 0;
 	u32 size, id;
 	u8 err;
@@ -27,20 +28,33 @@ static int detect_memory_e820(void)
 
 	do {
 		size = sizeof(struct e820entry);
-		id = SMAP;
+
+		/* Important: %edx is clobbered by some BIOSes,
+		   so it must be either used for the error output
+		   or explicitly marked clobbered. */
 		asm("int $0x15; setc %0"
-		    : "=am" (err), "+b" (next), "+d" (id), "+c" (size),
+		    : "=d" (err), "+b" (next), "=a" (id), "+c" (size),
 		      "=m" (*desc)
-		    : "D" (desc), "a" (0xe820));
+		    : "D" (desc), "d" (SMAP), "a" (0xe820));
+
+		/* Some BIOSes stop returning SMAP in the middle of
+		   the search loop.  We don't know exactly how the BIOS
+		   screwed up the map at that point, we might have a
+		   partial map, the full map, or complete garbage, so
+		   just return failure. */
+		if (id != SMAP) {
+			count = 0;
+			break;
+		}
 
-		if (err || id != SMAP)
+		if (err)
 			break;
 
-		boot_params.e820_entries++;
+		count++;
 		desc++;
-	} while (next && boot_params.e820_entries < E820MAX);
+	} while (next && count < E820MAX);
 
-	return boot_params.e820_entries;
+	return boot_params.e820_entries = count;
 }
 
 static int detect_memory_e801(void)
@@ -89,11 +103,16 @@ static int detect_memory_88(void)
 
 int detect_memory(void)
 {
+	int err = -1;
+
 	if (detect_memory_e820() > 0)
-		return 0;
+		err = 0;
 
 	if (!detect_memory_e801())
-		return 0;
+		err = 0;
+
+	if (!detect_memory_88())
+		err = 0;
 
-	return detect_memory_88();
+	return err;
 }
diff --git a/arch/i386/boot/video.c b/arch/i386/boot/video.c
index 693f20d..e4ba897 100644
--- a/arch/i386/boot/video.c
+++ b/arch/i386/boot/video.c
@@ -147,7 +147,7 @@ int mode_defined(u16 mode)
 }
 
 /* Set mode (without recalc) */
-static int raw_set_mode(u16 mode)
+static int raw_set_mode(u16 mode, u16 *real_mode)
 {
 	int nmode, i;
 	struct card_info *card;
@@ -165,8 +165,10 @@ static int raw_set_mode(u16 mode)
 
 			if ((mode == nmode && visible) ||
 			    mode == mi->mode ||
-			    mode == (mi->y << 8)+mi->x)
+			    mode == (mi->y << 8)+mi->x) {
+				*real_mode = mi->mode;
 				return card->set_mode(mi);
+			}
 
 			if (visible)
 				nmode++;
@@ -178,7 +180,7 @@ static int raw_set_mode(u16 mode)
 	if (mode >= card->xmode_first &&
 	    mode < card->xmode_first+card->xmode_n) {
 		struct mode_info mix;
-		mix.mode = mode;
+		*real_mode = mix.mode = mode;
 		mix.x = mix.y = 0;
 		return card->set_mode(&mix);
 	}
@@ -223,6 +225,7 @@ static void vga_recalc_vertical(void)
 static int set_mode(u16 mode)
 {
 	int rv;
+	u16 real_mode;
 
 	/* Very special mode numbers... */
 	if (mode == VIDEO_CURRENT_MODE)
@@ -232,13 +235,16 @@ static int set_mode(u16 mode)
 	else if (mode == EXTENDED_VGA)
 		mode = VIDEO_8POINT;
 
-	rv = raw_set_mode(mode);
+	rv = raw_set_mode(mode, &real_mode);
 	if (rv)
 		return rv;
 
 	if (mode & VIDEO_RECALC)
 		vga_recalc_vertical();
 
+	/* Save the canonical mode number for the kernel, not
+	   an alias, size specification or menu position */
+	boot_params.hdr.vid_mode = real_mode;
 	return 0;
 }
diff --git a/arch/i386/kernel/acpi/wakeup.S b/arch/i386/kernel/acpi/wakeup.S
index ed0a0f2..f22ba85 100644
--- a/arch/i386/kernel/acpi/wakeup.S
+++ b/arch/i386/kernel/acpi/wakeup.S
@@ -151,51 +151,30 @@ bogus_real_magic:
 #define VIDEO_FIRST_V7 0x0900
 
 # Setting of user mode (AX=mode ID) => CF=success
+
+# For now, we only handle VESA modes (0x0200..0x03ff).  To handle other
+# modes, we should probably compile in the video code from the boot
+# directory.
 mode_set:
 	movw	%ax, %bx
-#if 0
-	cmpb	$0xff, %ah
-	jz	setalias
-
-	testb	$VIDEO_RECALC>>8, %ah
-	jnz	_setrec
-
-	cmpb	$VIDEO_FIRST_RESOLUTION>>8, %ah
-	jnc	setres
-
-	cmpb	$VIDEO_FIRST_SPECIAL>>8, %ah
-	jz	setspc
-
-	cmpb	$VIDEO_FIRST_V7>>8, %ah
-	jz	setv7
-#endif
-
-	cmpb	$VIDEO_FIRST_VESA>>8, %ah
-	jnc	check_vesa
-#if 0
-	orb	%ah, %ah
-	jz	setmenu
-#endif
-
-	decb	%ah
-#	jz	setbios				  Add bios modes later
+	subb	$VIDEO_FIRST_VESA>>8, %bh
+	cmpb	$2, %bh
+	jb	check_vesa
 
-setbad:	clc
+setbad:
+	clc
 	ret
 
 check_vesa:
-	subb	$VIDEO_FIRST_VESA>>8, %bh
 	orw	$0x4000, %bx			# Use linear frame buffer
 	movw	$0x4f02, %ax			# VESA BIOS mode set call
 	int	$0x10
 	cmpw	$0x004f, %ax			# AL=4f if implemented
-	jnz	_setbad				# AH=0 if OK
+	jnz	setbad				# AH=0 if OK
 
 	stc
 	ret
 
-_setbad: jmp setbad
-
 	.code32
 	ALIGN
diff --git a/arch/i386/xen/mmu.c b/arch/i386/xen/mmu.c
index 4ae038a..874db0c 100644
--- a/arch/i386/xen/mmu.c
+++ b/arch/i386/xen/mmu.c
@@ -559,6 +559,9 @@ void xen_exit_mmap(struct mm_struct *mm)
 	put_cpu();
 
 	spin_lock(&mm->page_table_lock);
-	xen_pgd_unpin(mm->pgd);
+
+	/* pgd may not be pinned in the error exit path of execve */
+	if (PagePinned(virt_to_page(mm->pgd)))
+		xen_pgd_unpin(mm->pgd);
 	spin_unlock(&mm->page_table_lock);
 }
diff --git a/arch/mips/kernel/i8259.c b/arch/mips/kernel/i8259.c
index b6c3080..3a2d255 100644
--- a/arch/mips/kernel/i8259.c
+++ b/arch/mips/kernel/i8259.c
@@ -177,10 +177,7 @@ handle_real_irq:
 		outb(cached_master_mask, PIC_MASTER_IMR);
 		outb(0x60+irq,PIC_MASTER_CMD);	/* 'Specific EOI to master */
 	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 	spin_unlock_irqrestore(&i8259A_lock, flags);
 	return;
diff --git a/arch/mips/kernel/irq-msc01.c b/arch/mips/kernel/irq-msc01.c
index 410868b..1ecdd50 100644
--- a/arch/mips/kernel/irq-msc01.c
+++ b/arch/mips/kernel/irq-msc01.c
@@ -52,11 +52,8 @@ static void level_mask_and_ack_msc_irq(unsigned int irq)
 	mask_msc_irq(irq);
 	if (!cpu_has_veic)
 		MSCIC_WRITE(MSC01_IC_EOI, 0);
-#ifdef CONFIG_MIPS_MT_SMTC
 	/* This actually needs to be a call into platform code */
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }
 
 /*
@@ -73,10 +70,7 @@ static void edge_mask_and_ack_msc_irq(unsigned int irq)
 		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r | ~MSC01_IC_SUP_EDGE_BIT);
 		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r);
 	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }
 
 /*
diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
index aeded6c..a990aad 100644
--- a/arch/mips/kernel/irq.c
+++ b/arch/mips/kernel/irq.c
@@ -74,20 +74,12 @@ EXPORT_SYMBOL_GPL(free_irqno);
 */
 void ack_bad_irq(unsigned int irq)
 {
+	smtc_im_ack_irq(irq);
 	printk("unexpected IRQ # %d\n", irq);
 }
 
 atomic_t irq_err_count;
 
-#ifdef CONFIG_MIPS_MT_SMTC
-/*
- * SMTC Kernel needs to manipulate low-level CPU interrupt mask
- * in do_IRQ. These are passed in setup_irq_smtc() and stored
- * in this table.
- */
-unsigned long irq_hwmask[NR_IRQS];
-#endif /* CONFIG_MIPS_MT_SMTC */
-
 /*
  * Generic, controller-independent functions:
  */
diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
index b3ed731..dd68afc 100644
--- a/arch/mips/kernel/scall64-o32.S
+++ b/arch/mips/kernel/scall64-o32.S
@@ -525,5 +525,5 @@ sys_call_table:
 	PTR	compat_sys_signalfd
 	PTR	compat_sys_timerfd
 	PTR	sys_eventfd
-	PTR	sys_fallocate			/* 4320 */
+	PTR	sys32_fallocate			/* 4320 */
 	.size	sys_call_table,.-sys_call_table
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
index 43826c1..f094043 100644
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -25,8 +25,11 @@
 #include <asm/smtc_proc.h>
 
 /*
- * This file should be built into the kernel only if CONFIG_MIPS_MT_SMTC is set.
+ * SMTC Kernel needs to manipulate low-level CPU interrupt mask
+ * in do_IRQ. These are passed in setup_irq_smtc() and stored
+ * in this table.
 */
+unsigned long irq_hwmask[NR_IRQS];
 
 #define LOCK_MT_PRA() \
 	local_irq_save(flags); \
diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
index 60bbaec..087ab99 100644
--- a/arch/mips/kernel/vmlinux.lds.S
+++ b/arch/mips/kernel/vmlinux.lds.S
@@ -45,6 +45,8 @@ SECTIONS
 	__dbe_table : { *(__dbe_table) }
 	__stop___dbe_table = .;
 
+	NOTES
+
 	RODATA
 
 	/* writeable */
diff --git a/arch/mips/sgi-ip32/ip32-platform.c b/arch/mips/sgi-ip32/ip32-platform.c
index ba3697e..7309e48 100644
--- a/arch/mips/sgi-ip32/ip32-platform.c
+++ b/arch/mips/sgi-ip32/ip32-platform.c
@@ -41,8 +41,8 @@ static struct platform_device uart8250_device = {
 
 static int __init uart8250_init(void)
 {
-	uart8250_data[0].iobase = (unsigned long) &mace->isa.serial1;
-	uart8250_data[1].iobase = (unsigned long) &mace->isa.serial1;
+	uart8250_data[0].membase = (void __iomem *) &mace->isa.serial1;
+	uart8250_data[1].membase = (void __iomem *) &mace->isa.serial1;
 
 	return platform_device_register(&uart8250_device);
 }
diff --git a/arch/mips/sibyte/bcm1480/setup.c b/arch/mips/sibyte/bcm1480/setup.c
index bb28f28..7e1aa34 100644
--- a/arch/mips/sibyte/bcm1480/setup.c
+++ b/arch/mips/sibyte/bcm1480/setup.c
@@ -15,6 +15,7 @@
  * along with this program; if not, write to the Free Software
  * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
  */
+#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/reboot.h>
@@ -35,6 +36,7 @@ unsigned int soc_type;
 EXPORT_SYMBOL(soc_type);
 unsigned int periph_rev;
 unsigned int zbbus_mhz;
+EXPORT_SYMBOL(zbbus_mhz);
 
 static unsigned int part_type;
diff --git a/arch/powerpc/boot/dts/mpc8349emitx.dts b/arch/powerpc/boot/dts/mpc8349emitx.dts
index f5c3086..3bc3202 100644
--- a/arch/powerpc/boot/dts/mpc8349emitx.dts
+++ b/arch/powerpc/boot/dts/mpc8349emitx.dts
@@ -97,6 +97,7 @@
 			#size-cells = <0>;
 			interrupt-parent = < &ipic >;
 			interrupts = <26 8>;
+			dr_mode = "peripheral";
 			phy_type = "ulpi";
 		};
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 588c0cb..15998b5 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -613,6 +613,13 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
 	regs->ccr = 0;
 	regs->gpr[1] = sp;
 
+	/*
+	 * We have just cleared all the nonvolatile GPRs, so make
+	 * FULL_REGS(regs) return true.  This is necessary to allow
+	 * ptrace to examine the thread immediately after exec.
+	 */
+	regs->trap &= ~1UL;
+
 #ifdef CONFIG_PPC32
 	regs->mq = 0;
 	regs->nip = start;
diff --git a/arch/powerpc/platforms/83xx/usb.c b/arch/powerpc/platforms/83xx/usb.c
index e7fdf01..eafe760 100644
--- a/arch/powerpc/platforms/83xx/usb.c
+++ b/arch/powerpc/platforms/83xx/usb.c
@@ -76,14 +76,14 @@ int mpc834x_usb_cfg(void)
 			if (port0_is_dr)
 				printk(KERN_WARNING "834x USB port0 can't be used by both DR and MPH!\n");
-			sicrl |= MPC834X_SICRL_USB0;
+			sicrl &= ~MPC834X_SICRL_USB0;
 		}
 		prop = of_get_property(np, "port1", NULL);
 		if (prop) {
 			if (port1_is_dr)
 				printk(KERN_WARNING "834x USB port1 can't be used by both DR and MPH!\n");
-			sicrl |= MPC834X_SICRL_USB1;
+			sicrl &= ~MPC834X_SICRL_USB1;
 		}
 		of_node_put(np);
 	}
diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
index b93a027..d72b16d6 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -2110,8 +2110,8 @@ struct tree_descr spufs_dir_contents[] = {
 	{ "mbox_stat", &spufs_mbox_stat_fops, 0444, },
 	{ "ibox_stat", &spufs_ibox_stat_fops, 0444, },
 	{ "wbox_stat", &spufs_wbox_stat_fops, 0444, },
-	{ "signal1", &spufs_signal1_nosched_fops, 0222, },
-	{ "signal2", &spufs_signal2_nosched_fops, 0222, },
+	{ "signal1", &spufs_signal1_fops, 0666, },
+	{ "signal2", &spufs_signal2_fops, 0666, },
 	{ "signal1_type", &spufs_signal1_type, 0666, },
 	{ "signal2_type", &spufs_signal2_type, 0666, },
 	{ "cntl", &spufs_cntl_fops, 0666, },
diff --git a/arch/powerpc/platforms/pseries/xics.c b/arch/powerpc/platforms/pseries/xics.c
index 5ddb025..66e7d68 100644
--- a/arch/powerpc/platforms/pseries/xics.c
+++ b/arch/powerpc/platforms/pseries/xics.c
@@ -419,7 +419,7 @@ static void xics_set_affinity(unsigned int virq, cpumask_t cpumask)
 	 * For the moment only implement delivery to all cpus or one cpu.
 	 * Get current irq_server for the given irq
 	 */
-	irq_server = get_irq_server(irq, 1);
+	irq_server = get_irq_server(virq, 1);
 	if (irq_server == -1) {
 		char cpulist[128];
 		cpumask_scnprintf(cpulist, sizeof(cpulist), cpumask);
diff --git a/arch/powerpc/sysdev/commproc.c b/arch/powerpc/sysdev/commproc.c
index b562afc..160a8b4 100644
--- a/arch/powerpc/sysdev/commproc.c
+++ b/arch/powerpc/sysdev/commproc.c
@@ -387,4 +387,4 @@ uint cpm_dpram_phys(u8* addr)
 {
 	return (dpram_pbase + (uint)(addr - dpram_vbase));
 }
-EXPORT_SYMBOL(cpm_dpram_addr);
+EXPORT_SYMBOL(cpm_dpram_phys);
diff --git a/arch/ppc/8xx_io/commproc.c b/arch/ppc/8xx_io/commproc.c
index 7088428..9da880b 100644
--- a/arch/ppc/8xx_io/commproc.c
+++ b/arch/ppc/8xx_io/commproc.c
@@ -459,7 +459,7 @@ EXPORT_SYMBOL(cpm_dpdump);
 
 void *cpm_dpram_addr(unsigned long offset)
 {
-	return ((immap_t *)IMAP_ADDR)->im_cpm.cp_dpmem + offset;
+	return (void *)(dpram_vbase + offset);
 }
 EXPORT_SYMBOL(cpm_dpram_addr);
diff --git a/arch/sparc/kernel/ebus.c b/arch/sparc/kernel/ebus.c
index e2d02fd..d850785 100644
--- a/arch/sparc/kernel/ebus.c
+++ b/arch/sparc/kernel/ebus.c
@@ -156,6 +156,8 @@ void __init fill_ebus_device(struct device_node *dp, struct linux_ebus_device *dev)
 	dev->prom_node = dp;
 
 	regs = of_get_property(dp, "reg", &len);
+	if (!regs)
+		len = 0;
 	if (len % sizeof(struct linux_prom_registers)) {
 		prom_printf("UGH: proplen for %s was %d, need multiple of %d\n",
 			    dev->prom_node->name, len,
diff --git a/arch/sparc64/kernel/binfmt_aout32.c b/arch/sparc64/kernel/binfmt_aout32.c
index f205fc7..d208cc7 100644
--- a/arch/sparc64/kernel/binfmt_aout32.c
+++ b/arch/sparc64/kernel/binfmt_aout32.c
@@ -177,7 +177,7 @@ static u32 __user *create_aout32_tables(char __user *p, struct linux_binprm *bprm)
 			get_user(c,p++);
 		} while (c);
 	}
-	put_user(NULL,argv);
+	put_user(0,argv);
 	current->mm->arg_end = current->mm->env_start = (unsigned long) p;
 	while (envc-->0) {
 		char c;
@@ -186,7 +186,7 @@ static u32 __user *create_aout32_tables(char __user *p, struct linux_binprm *bprm)
 			get_user(c,p++);
 		} while (c);
 	}
-	put_user(NULL,envp);
+	put_user(0,envp);
 	current->mm->env_end = (unsigned long) p;
 	return sp;
 }
diff --git a/arch/sparc64/kernel/ebus.c b/arch/sparc64/kernel/ebus.c
index bc9ae36..04ab81c 100644
--- a/arch/sparc64/kernel/ebus.c
+++ b/arch/sparc64/kernel/ebus.c
@@ -375,7 +375,10 @@ static void __init fill_ebus_device(struct device_node *dp, struct linux_ebus_device *dev)
 		dev->num_addrs = 0;
 		dev->num_irqs = 0;
 	} else {
-		(void) of_get_property(dp, "reg", &len);
+		const int *regs = of_get_property(dp, "reg", &len);
+
+		if (!regs)
+			len = 0;
 
 		dev->num_addrs = len / sizeof(struct linux_prom_registers);
 
 		for (i = 0; i < dev->num_addrs; i++)
diff --git a/arch/sparc64/lib/NGcopy_from_user.S b/arch/sparc64/lib/NGcopy_from_user.S
index 2d93456..e7f433f 100644
--- a/arch/sparc64/lib/NGcopy_from_user.S
+++ b/arch/sparc64/lib/NGcopy_from_user.S
@@ -1,6 +1,6 @@
 /* NGcopy_from_user.S: Niagara optimized copy from userspace.
  *
- * Copyright (C) 2006 David S. Miller (davem@davemloft.net)
+ * Copyright (C) 2006, 2007 David S.
Miller (davem@davemloft.net) */ #define EX_LD(x) \ @@ -8,8 +8,8 @@ .section .fixup; \ .align 4; \ 99: wr %g0, ASI_AIUS, %asi;\ - retl; \ - mov 1, %o0; \ + ret; \ + restore %g0, 1, %o0; \ .section __ex_table,"a";\ .align 4; \ .word 98b, 99b; \ @@ -24,7 +24,7 @@ #define LOAD(type,addr,dest) type##a [addr] ASI_AIUS, dest #define LOAD_TWIN(addr_reg,dest0,dest1) \ ldda [addr_reg] ASI_BLK_INIT_QUAD_LDD_AIUS, dest0 -#define EX_RETVAL(x) 0 +#define EX_RETVAL(x) %g0 #ifdef __KERNEL__ #define PREAMBLE \ diff --git a/arch/sparc64/lib/NGcopy_to_user.S b/arch/sparc64/lib/NGcopy_to_user.S index 34112d5..6ea01c5 100644 --- a/arch/sparc64/lib/NGcopy_to_user.S +++ b/arch/sparc64/lib/NGcopy_to_user.S @@ -1,6 +1,6 @@ /* NGcopy_to_user.S: Niagara optimized copy to userspace. * - * Copyright (C) 2006 David S. Miller (davem@davemloft.net) + * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net) */ #define EX_ST(x) \ @@ -8,8 +8,8 @@ .section .fixup; \ .align 4; \ 99: wr %g0, ASI_AIUS, %asi;\ - retl; \ - mov 1, %o0; \ + ret; \ + restore %g0, 1, %o0; \ .section __ex_table,"a";\ .align 4; \ .word 98b, 99b; \ @@ -23,7 +23,7 @@ #define FUNC_NAME NGcopy_to_user #define STORE(type,src,addr) type##a src, [addr] ASI_AIUS #define STORE_ASI ASI_BLK_INIT_QUAD_LDD_AIUS -#define EX_RETVAL(x) 0 +#define EX_RETVAL(x) %g0 #ifdef __KERNEL__ /* Writing to %asi is _expensive_ so we hardcode it. diff --git a/arch/sparc64/lib/NGmemcpy.S b/arch/sparc64/lib/NGmemcpy.S index 66063a9..605cb3f 100644 --- a/arch/sparc64/lib/NGmemcpy.S +++ b/arch/sparc64/lib/NGmemcpy.S @@ -1,6 +1,6 @@ /* NGmemcpy.S: Niagara optimized memcpy. * - * Copyright (C) 2006 David S. Miller (davem@davemloft.net) + * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net) */ #ifdef __KERNEL__ @@ -16,6 +16,12 @@ wr %g0, ASI_PNF, %asi #endif +#ifdef __sparc_v9__ +#define SAVE_AMOUNT 128 +#else +#define SAVE_AMOUNT 64 +#endif + #ifndef STORE_ASI #define STORE_ASI ASI_BLK_INIT_QUAD_LDD_P #endif @@ -50,7 +56,11 @@ #endif #ifndef STORE_INIT +#ifndef SIMULATE_NIAGARA_ON_NON_NIAGARA #define STORE_INIT(src,addr) stxa src, [addr] %asi +#else +#define STORE_INIT(src,addr) stx src, [addr + 0x00] +#endif #endif #ifndef FUNC_NAME @@ -73,18 +83,19 @@ .globl FUNC_NAME .type FUNC_NAME,#function -FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */ - srlx %o2, 31, %g2 +FUNC_NAME: /* %i0=dst, %i1=src, %i2=len */ + PREAMBLE + save %sp, -SAVE_AMOUNT, %sp + srlx %i2, 31, %g2 cmp %g2, 0 tne %xcc, 5 - PREAMBLE - mov %o0, GLOBAL_SPARE - cmp %o2, 0 + mov %i0, %o0 + cmp %i2, 0 be,pn %XCC, 85f - or %o0, %o1, %o3 - cmp %o2, 16 + or %o0, %i1, %i3 + cmp %i2, 16 blu,a,pn %XCC, 80f - or %o3, %o2, %o3 + or %i3, %i2, %i3 /* 2 blocks (128 bytes) is the minimum we can do the block * copy with. We need to ensure that we'll iterate at least @@ -93,31 +104,31 @@ FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */ * to (64 - 1) bytes from the length before we perform the * block copy loop. */ - cmp %o2, (2 * 64) + cmp %i2, (2 * 64) blu,pt %XCC, 70f - andcc %o3, 0x7, %g0 + andcc %i3, 0x7, %g0 /* %o0: dst - * %o1: src - * %o2: len (known to be >= 128) + * %i1: src + * %i2: len (known to be >= 128) * - * The block copy loops will use %o4/%o5,%g2/%g3 as + * The block copy loops will use %i4/%i5,%g2/%g3 as * temporaries while copying the data. */ - LOAD(prefetch, %o1, #one_read) + LOAD(prefetch, %i1, #one_read) wr %g0, STORE_ASI, %asi /* Align destination on 64-byte boundary. */ - andcc %o0, (64 - 1), %o4 + andcc %o0, (64 - 1), %i4 be,pt %XCC, 2f - sub %o4, 64, %o4 - sub %g0, %o4, %o4 ! 
bytes to align dst - sub %o2, %o4, %o2 -1: subcc %o4, 1, %o4 - EX_LD(LOAD(ldub, %o1, %g1)) + sub %i4, 64, %i4 + sub %g0, %i4, %i4 ! bytes to align dst + sub %i2, %i4, %i2 +1: subcc %i4, 1, %i4 + EX_LD(LOAD(ldub, %i1, %g1)) EX_ST(STORE(stb, %g1, %o0)) - add %o1, 1, %o1 + add %i1, 1, %i1 bne,pt %XCC, 1b add %o0, 1, %o0 @@ -136,111 +147,155 @@ FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */ * aligned store data at a time, this is easy to ensure. */ 2: - andcc %o1, (16 - 1), %o4 - andn %o2, (64 - 1), %g1 ! block copy loop iterator - sub %o2, %g1, %o2 ! final sub-block copy bytes + andcc %i1, (16 - 1), %i4 + andn %i2, (64 - 1), %g1 ! block copy loop iterator be,pt %XCC, 50f - cmp %o4, 8 - be,a,pt %XCC, 10f - sub %o1, 0x8, %o1 + sub %i2, %g1, %i2 ! final sub-block copy bytes + + cmp %i4, 8 + be,pt %XCC, 10f + sub %i1, %i4, %i1 /* Neither 8-byte nor 16-byte aligned, shift and mask. */ - mov %g1, %o4 - and %o1, 0x7, %g1 - sll %g1, 3, %g1 - mov 64, %o3 - andn %o1, 0x7, %o1 - EX_LD(LOAD(ldx, %o1, %g2)) - sub %o3, %g1, %o3 - sllx %g2, %g1, %g2 + and %i4, 0x7, GLOBAL_SPARE + sll GLOBAL_SPARE, 3, GLOBAL_SPARE + mov 64, %i5 + EX_LD(LOAD_TWIN(%i1, %g2, %g3)) + sub %i5, GLOBAL_SPARE, %i5 + mov 16, %o4 + mov 32, %o5 + mov 48, %o7 + mov 64, %i3 + + bg,pn %XCC, 9f + nop -#define SWIVEL_ONE_DWORD(SRC, TMP1, TMP2, PRE_VAL, PRE_SHIFT, POST_SHIFT, DST)\ - EX_LD(LOAD(ldx, SRC, TMP1)); \ - srlx TMP1, PRE_SHIFT, TMP2; \ - or TMP2, PRE_VAL, TMP2; \ - EX_ST(STORE_INIT(TMP2, DST)); \ - sllx TMP1, POST_SHIFT, PRE_VAL; - -1: add %o1, 0x8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x00) - add %o1, 0x8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x08) - add %o1, 0x8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x10) - add %o1, 0x8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x18) - add %o1, 32, %o1 - LOAD(prefetch, %o1, #one_read) - sub %o1, 32 - 8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x20) - add %o1, 8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x28) - add %o1, 8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x30) - add %o1, 8, %o1 - SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x38) - subcc %o4, 64, %o4 - bne,pt %XCC, 1b +#define MIX_THREE_WORDS(WORD1, WORD2, WORD3, PRE_SHIFT, POST_SHIFT, TMP) \ + sllx WORD1, POST_SHIFT, WORD1; \ + srlx WORD2, PRE_SHIFT, TMP; \ + sllx WORD2, POST_SHIFT, WORD2; \ + or WORD1, TMP, WORD1; \ + srlx WORD3, PRE_SHIFT, TMP; \ + or WORD2, TMP, WORD2; + +8: EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3)) + MIX_THREE_WORDS(%g2, %g3, %o2, %i5, GLOBAL_SPARE, %o1) + LOAD(prefetch, %i1 + %i3, #one_read) + + EX_ST(STORE_INIT(%g2, %o0 + 0x00)) + EX_ST(STORE_INIT(%g3, %o0 + 0x08)) + + EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3)) + MIX_THREE_WORDS(%o2, %o3, %g2, %i5, GLOBAL_SPARE, %o1) + + EX_ST(STORE_INIT(%o2, %o0 + 0x10)) + EX_ST(STORE_INIT(%o3, %o0 + 0x18)) + + EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3)) + MIX_THREE_WORDS(%g2, %g3, %o2, %i5, GLOBAL_SPARE, %o1) + + EX_ST(STORE_INIT(%g2, %o0 + 0x20)) + EX_ST(STORE_INIT(%g3, %o0 + 0x28)) + + EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3)) + add %i1, 64, %i1 + MIX_THREE_WORDS(%o2, %o3, %g2, %i5, GLOBAL_SPARE, %o1) + + EX_ST(STORE_INIT(%o2, %o0 + 0x30)) + EX_ST(STORE_INIT(%o3, %o0 + 0x38)) + + subcc %g1, 64, %g1 + bne,pt %XCC, 8b add %o0, 64, %o0 -#undef SWIVEL_ONE_DWORD + ba,pt %XCC, 60f + add %i1, %i4, %i1 + +9: EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3)) + MIX_THREE_WORDS(%g3, %o2, %o3, %i5, GLOBAL_SPARE, %o1) + LOAD(prefetch, %i1 + %i3, #one_read) + + 
EX_ST(STORE_INIT(%g3, %o0 + 0x00)) + EX_ST(STORE_INIT(%o2, %o0 + 0x08)) + + EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3)) + MIX_THREE_WORDS(%o3, %g2, %g3, %i5, GLOBAL_SPARE, %o1) + + EX_ST(STORE_INIT(%o3, %o0 + 0x10)) + EX_ST(STORE_INIT(%g2, %o0 + 0x18)) + + EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3)) + MIX_THREE_WORDS(%g3, %o2, %o3, %i5, GLOBAL_SPARE, %o1) + + EX_ST(STORE_INIT(%g3, %o0 + 0x20)) + EX_ST(STORE_INIT(%o2, %o0 + 0x28)) + + EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3)) + add %i1, 64, %i1 + MIX_THREE_WORDS(%o3, %g2, %g3, %i5, GLOBAL_SPARE, %o1) + + EX_ST(STORE_INIT(%o3, %o0 + 0x30)) + EX_ST(STORE_INIT(%g2, %o0 + 0x38)) + + subcc %g1, 64, %g1 + bne,pt %XCC, 9b + add %o0, 64, %o0 - srl %g1, 3, %g1 ba,pt %XCC, 60f - add %o1, %g1, %o1 + add %i1, %i4, %i1 10: /* Destination is 64-byte aligned, source was only 8-byte * aligned but it has been subtracted by 8 and we perform * one twin load ahead, then add 8 back into source when * we finish the loop. */ - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) -1: add %o1, 16, %o1 - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) - add %o1, 16 + 32, %o1 - LOAD(prefetch, %o1, #one_read) - sub %o1, 32, %o1 + EX_LD(LOAD_TWIN(%i1, %o4, %o5)) + mov 16, %o7 + mov 32, %g2 + mov 48, %g3 + mov 64, %o1 +1: EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3)) + LOAD(prefetch, %i1 + %o1, #one_read) EX_ST(STORE_INIT(%o5, %o0 + 0x00)) ! initializes cache line - EX_ST(STORE_INIT(%g2, %o0 + 0x08)) - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) - add %o1, 16, %o1 - EX_ST(STORE_INIT(%g3, %o0 + 0x10)) + EX_ST(STORE_INIT(%o2, %o0 + 0x08)) + EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5)) + EX_ST(STORE_INIT(%o3, %o0 + 0x10)) EX_ST(STORE_INIT(%o4, %o0 + 0x18)) - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) - add %o1, 16, %o1 + EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3)) EX_ST(STORE_INIT(%o5, %o0 + 0x20)) - EX_ST(STORE_INIT(%g2, %o0 + 0x28)) - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) - EX_ST(STORE_INIT(%g3, %o0 + 0x30)) + EX_ST(STORE_INIT(%o2, %o0 + 0x28)) + EX_LD(LOAD_TWIN(%i1 + %o1, %o4, %o5)) + add %i1, 64, %i1 + EX_ST(STORE_INIT(%o3, %o0 + 0x30)) EX_ST(STORE_INIT(%o4, %o0 + 0x38)) subcc %g1, 64, %g1 bne,pt %XCC, 1b add %o0, 64, %o0 ba,pt %XCC, 60f - add %o1, 0x8, %o1 + add %i1, 0x8, %i1 50: /* Destination is 64-byte aligned, and source is 16-byte * aligned. */ -1: EX_LD(LOAD_TWIN(%o1, %o4, %o5)) - add %o1, 16, %o1 - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) - add %o1, 16 + 32, %o1 - LOAD(prefetch, %o1, #one_read) - sub %o1, 32, %o1 + mov 16, %o7 + mov 32, %g2 + mov 48, %g3 + mov 64, %o1 +1: EX_LD(LOAD_TWIN(%i1 + %g0, %o4, %o5)) + EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3)) + LOAD(prefetch, %i1 + %o1, #one_read) EX_ST(STORE_INIT(%o4, %o0 + 0x00)) ! initializes cache line EX_ST(STORE_INIT(%o5, %o0 + 0x08)) - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) - add %o1, 16, %o1 - EX_ST(STORE_INIT(%g2, %o0 + 0x10)) - EX_ST(STORE_INIT(%g3, %o0 + 0x18)) - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) - add %o1, 16, %o1 + EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5)) + EX_ST(STORE_INIT(%o2, %o0 + 0x10)) + EX_ST(STORE_INIT(%o3, %o0 + 0x18)) + EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3)) + add %i1, 64, %i1 EX_ST(STORE_INIT(%o4, %o0 + 0x20)) EX_ST(STORE_INIT(%o5, %o0 + 0x28)) - EX_ST(STORE_INIT(%g2, %o0 + 0x30)) - EX_ST(STORE_INIT(%g3, %o0 + 0x38)) + EX_ST(STORE_INIT(%o2, %o0 + 0x30)) + EX_ST(STORE_INIT(%o3, %o0 + 0x38)) subcc %g1, 64, %g1 bne,pt %XCC, 1b add %o0, 64, %o0 @@ -249,47 +304,47 @@ FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */ 60: membar #Sync - /* %o2 contains any final bytes still needed to be copied + /* %i2 contains any final bytes still needed to be copied * over. If anything is left, we copy it one byte at a time. 
*/ - RESTORE_ASI(%o3) - brz,pt %o2, 85f - sub %o0, %o1, %o3 + RESTORE_ASI(%i3) + brz,pt %i2, 85f + sub %o0, %i1, %i3 ba,a,pt %XCC, 90f .align 64 70: /* 16 < len <= 64 */ bne,pn %XCC, 75f - sub %o0, %o1, %o3 + sub %o0, %i1, %i3 72: - andn %o2, 0xf, %o4 - and %o2, 0xf, %o2 -1: subcc %o4, 0x10, %o4 - EX_LD(LOAD(ldx, %o1, %o5)) - add %o1, 0x08, %o1 - EX_LD(LOAD(ldx, %o1, %g1)) - sub %o1, 0x08, %o1 - EX_ST(STORE(stx, %o5, %o1 + %o3)) - add %o1, 0x8, %o1 - EX_ST(STORE(stx, %g1, %o1 + %o3)) + andn %i2, 0xf, %i4 + and %i2, 0xf, %i2 +1: subcc %i4, 0x10, %i4 + EX_LD(LOAD(ldx, %i1, %i5)) + add %i1, 0x08, %i1 + EX_LD(LOAD(ldx, %i1, %g1)) + sub %i1, 0x08, %i1 + EX_ST(STORE(stx, %i5, %i1 + %i3)) + add %i1, 0x8, %i1 + EX_ST(STORE(stx, %g1, %i1 + %i3)) bgu,pt %XCC, 1b - add %o1, 0x8, %o1 -73: andcc %o2, 0x8, %g0 + add %i1, 0x8, %i1 +73: andcc %i2, 0x8, %g0 be,pt %XCC, 1f nop - sub %o2, 0x8, %o2 - EX_LD(LOAD(ldx, %o1, %o5)) - EX_ST(STORE(stx, %o5, %o1 + %o3)) - add %o1, 0x8, %o1 -1: andcc %o2, 0x4, %g0 + sub %i2, 0x8, %i2 + EX_LD(LOAD(ldx, %i1, %i5)) + EX_ST(STORE(stx, %i5, %i1 + %i3)) + add %i1, 0x8, %i1 +1: andcc %i2, 0x4, %g0 be,pt %XCC, 1f nop - sub %o2, 0x4, %o2 - EX_LD(LOAD(lduw, %o1, %o5)) - EX_ST(STORE(stw, %o5, %o1 + %o3)) - add %o1, 0x4, %o1 -1: cmp %o2, 0 + sub %i2, 0x4, %i2 + EX_LD(LOAD(lduw, %i1, %i5)) + EX_ST(STORE(stw, %i5, %i1 + %i3)) + add %i1, 0x4, %i1 +1: cmp %i2, 0 be,pt %XCC, 85f nop ba,pt %xcc, 90f @@ -300,71 +355,71 @@ FUNC_NAME: /* %o0=dst, %o1=src, %o2=len */ sub %g1, 0x8, %g1 be,pn %icc, 2f sub %g0, %g1, %g1 - sub %o2, %g1, %o2 + sub %i2, %g1, %i2 1: subcc %g1, 1, %g1 - EX_LD(LOAD(ldub, %o1, %o5)) - EX_ST(STORE(stb, %o5, %o1 + %o3)) + EX_LD(LOAD(ldub, %i1, %i5)) + EX_ST(STORE(stb, %i5, %i1 + %i3)) bgu,pt %icc, 1b - add %o1, 1, %o1 + add %i1, 1, %i1 -2: add %o1, %o3, %o0 - andcc %o1, 0x7, %g1 +2: add %i1, %i3, %o0 + andcc %i1, 0x7, %g1 bne,pt %icc, 8f sll %g1, 3, %g1 - cmp %o2, 16 + cmp %i2, 16 bgeu,pt %icc, 72b nop ba,a,pt %xcc, 73b -8: mov 64, %o3 - andn %o1, 0x7, %o1 - EX_LD(LOAD(ldx, %o1, %g2)) - sub %o3, %g1, %o3 - andn %o2, 0x7, %o4 +8: mov 64, %i3 + andn %i1, 0x7, %i1 + EX_LD(LOAD(ldx, %i1, %g2)) + sub %i3, %g1, %i3 + andn %i2, 0x7, %i4 sllx %g2, %g1, %g2 -1: add %o1, 0x8, %o1 - EX_LD(LOAD(ldx, %o1, %g3)) - subcc %o4, 0x8, %o4 - srlx %g3, %o3, %o5 - or %o5, %g2, %o5 - EX_ST(STORE(stx, %o5, %o0)) +1: add %i1, 0x8, %i1 + EX_LD(LOAD(ldx, %i1, %g3)) + subcc %i4, 0x8, %i4 + srlx %g3, %i3, %i5 + or %i5, %g2, %i5 + EX_ST(STORE(stx, %i5, %o0)) add %o0, 0x8, %o0 bgu,pt %icc, 1b sllx %g3, %g1, %g2 srl %g1, 3, %g1 - andcc %o2, 0x7, %o2 + andcc %i2, 0x7, %i2 be,pn %icc, 85f - add %o1, %g1, %o1 + add %i1, %g1, %i1 ba,pt %xcc, 90f - sub %o0, %o1, %o3 + sub %o0, %i1, %i3 .align 64 80: /* 0 < len <= 16 */ - andcc %o3, 0x3, %g0 + andcc %i3, 0x3, %g0 bne,pn %XCC, 90f - sub %o0, %o1, %o3 + sub %o0, %i1, %i3 1: - subcc %o2, 4, %o2 - EX_LD(LOAD(lduw, %o1, %g1)) - EX_ST(STORE(stw, %g1, %o1 + %o3)) + subcc %i2, 4, %i2 + EX_LD(LOAD(lduw, %i1, %g1)) + EX_ST(STORE(stw, %g1, %i1 + %i3)) bgu,pt %XCC, 1b - add %o1, 4, %o1 + add %i1, 4, %i1 -85: retl - mov EX_RETVAL(GLOBAL_SPARE), %o0 +85: ret + restore EX_RETVAL(%i0), %g0, %o0 .align 32 90: - subcc %o2, 1, %o2 - EX_LD(LOAD(ldub, %o1, %g1)) - EX_ST(STORE(stb, %g1, %o1 + %o3)) + subcc %i2, 1, %i2 + EX_LD(LOAD(ldub, %i1, %g1)) + EX_ST(STORE(stb, %g1, %i1 + %i3)) bgu,pt %XCC, 90b - add %o1, 1, %o1 - retl - mov EX_RETVAL(GLOBAL_SPARE), %o0 + add %i1, 1, %i1 + ret + restore EX_RETVAL(%i0), %g0, %o0 .size FUNC_NAME, .-FUNC_NAME diff --git a/arch/x86_64/Kconfig 
b/arch/x86_64/Kconfig index ffa0364..b4d9089 100644 --- a/arch/x86_64/Kconfig +++ b/arch/x86_64/Kconfig @@ -60,14 +60,6 @@ config ZONE_DMA bool default y -config QUICKLIST - bool - default y - -config NR_QUICK - int - default 2 - config ISA bool diff --git a/arch/x86_64/ia32/ia32entry.S b/arch/x86_64/ia32/ia32entry.S index 9382786..18b2318 100644 --- a/arch/x86_64/ia32/ia32entry.S +++ b/arch/x86_64/ia32/ia32entry.S @@ -38,6 +38,18 @@ movq %rax,R8(%rsp) .endm + .macro LOAD_ARGS32 offset + movl \offset(%rsp),%r11d + movl \offset+8(%rsp),%r10d + movl \offset+16(%rsp),%r9d + movl \offset+24(%rsp),%r8d + movl \offset+40(%rsp),%ecx + movl \offset+48(%rsp),%edx + movl \offset+56(%rsp),%esi + movl \offset+64(%rsp),%edi + movl \offset+72(%rsp),%eax + .endm + .macro CFI_STARTPROC32 simple CFI_STARTPROC \simple CFI_UNDEFINED r8 @@ -152,7 +164,7 @@ sysenter_tracesys: movq $-ENOSYS,RAX(%rsp) /* really needed? */ movq %rsp,%rdi /* &pt_regs -> arg1 */ call syscall_trace_enter - LOAD_ARGS ARGOFFSET /* reload args from stack in case ptrace changed it */ + LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */ RESTORE_REST movl %ebp, %ebp /* no need to do an access_ok check here because rbp has been @@ -255,7 +267,7 @@ cstar_tracesys: movq $-ENOSYS,RAX(%rsp) /* really needed? */ movq %rsp,%rdi /* &pt_regs -> arg1 */ call syscall_trace_enter - LOAD_ARGS ARGOFFSET /* reload args from stack in case ptrace changed it */ + LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */ RESTORE_REST movl RSP-ARGOFFSET(%rsp), %r8d /* no need to do an access_ok check here because r8 has been @@ -334,7 +346,7 @@ ia32_tracesys: movq $-ENOSYS,RAX(%rsp) /* really needed? */ movq %rsp,%rdi /* &pt_regs -> arg1 */ call syscall_trace_enter - LOAD_ARGS ARGOFFSET /* reload args from stack in case ptrace changed it */ + LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */ RESTORE_REST jmp ia32_do_syscall END(ia32_syscall) diff --git a/arch/x86_64/kernel/acpi/wakeup.S b/arch/x86_64/kernel/acpi/wakeup.S index 13f1480..a06f2bc 100644 --- a/arch/x86_64/kernel/acpi/wakeup.S +++ b/arch/x86_64/kernel/acpi/wakeup.S @@ -81,7 +81,7 @@ wakeup_code: testl $2, realmode_flags - wakeup_code jz 1f mov video_mode - wakeup_code, %ax - call mode_seta + call mode_set 1: movw $0xb800, %ax @@ -291,52 +291,31 @@ no_longmode: #define VIDEO_FIRST_V7 0x0900 # Setting of user mode (AX=mode ID) => CF=success + +# For now, we only handle VESA modes (0x0200..0x03ff). To handle other +# modes, we should probably compile in the video code from the boot +# directory. 
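For reference, the range check that the rewritten mode_set below performs, rendered in C (a sketch only; is_vesa_mode is an invented name, and VIDEO_FIRST_VESA is 0x0200 as in the boot video code):

	#define VIDEO_FIRST_VESA 0x0200	/* first VESA mode ID */

	/* Accept only mode IDs 0x0200..0x03ff: subtract the VESA base from
	 * the high byte and require the result to be 0 or 1, mirroring the
	 * subb/cmpb/jb sequence in the assembly below. */
	static int is_vesa_mode(unsigned short mode)
	{
		unsigned char high = (unsigned char)((mode >> 8) - (VIDEO_FIRST_VESA >> 8));
		return high < 2;
	}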
.code16 -mode_seta: +mode_set: movw %ax, %bx -#if 0 - cmpb $0xff, %ah - jz setalias - - testb $VIDEO_RECALC>>8, %ah - jnz _setrec - - cmpb $VIDEO_FIRST_RESOLUTION>>8, %ah - jnc setres - - cmpb $VIDEO_FIRST_SPECIAL>>8, %ah - jz setspc - - cmpb $VIDEO_FIRST_V7>>8, %ah - jz setv7 -#endif - - cmpb $VIDEO_FIRST_VESA>>8, %ah - jnc check_vesaa -#if 0 - orb %ah, %ah - jz setmenu -#endif - - decb %ah -# jz setbios Add bios modes later + subb $VIDEO_FIRST_VESA>>8, %bh + cmpb $2, %bh + jb check_vesa -setbada: clc +setbad: + clc ret -check_vesaa: - subb $VIDEO_FIRST_VESA>>8, %bh +check_vesa: orw $0x4000, %bx # Use linear frame buffer movw $0x4f02, %ax # VESA BIOS mode set call int $0x10 cmpw $0x004f, %ax # AL=4f if implemented - jnz _setbada # AH=0 if OK + jnz setbad # AH=0 if OK stc ret -_setbada: jmp setbada - wakeup_stack_begin: # Stack grows down .org 0xff0 diff --git a/arch/x86_64/kernel/process.c b/arch/x86_64/kernel/process.c index 2842f50..9895655 100644 --- a/arch/x86_64/kernel/process.c +++ b/arch/x86_64/kernel/process.c @@ -208,7 +208,6 @@ void cpu_idle (void) if (__get_cpu_var(cpu_idle_state)) __get_cpu_var(cpu_idle_state) = 0; - check_pgt_cache(); rmb(); idle = pm_idle; if (!idle) diff --git a/arch/x86_64/kernel/ptrace.c b/arch/x86_64/kernel/ptrace.c index e83cc67..eea3702 100644 --- a/arch/x86_64/kernel/ptrace.c +++ b/arch/x86_64/kernel/ptrace.c @@ -232,10 +232,6 @@ static int putreg(struct task_struct *child, { unsigned long tmp; - /* Some code in the 64bit emulation may not be 64bit clean. - Don't take any chances. */ - if (test_tsk_thread_flag(child, TIF_IA32)) - value &= 0xffffffff; switch (regno) { case offsetof(struct user_regs_struct,fs): if (value && (value & 3) != 3) diff --git a/arch/x86_64/kernel/smp.c b/arch/x86_64/kernel/smp.c index 673a300..df4a828 100644 --- a/arch/x86_64/kernel/smp.c +++ b/arch/x86_64/kernel/smp.c @@ -241,7 +241,7 @@ void flush_tlb_mm (struct mm_struct * mm) } if (!cpus_empty(cpu_mask)) flush_tlb_others(cpu_mask, mm, FLUSH_ALL); - check_pgt_cache(); + preempt_enable(); } EXPORT_SYMBOL(flush_tlb_mm); diff --git a/arch/x86_64/vdso/voffset.h b/arch/x86_64/vdso/voffset.h index 5304204..4af67c7 100644 --- a/arch/x86_64/vdso/voffset.h +++ b/arch/x86_64/vdso/voffset.h @@ -1 +1 @@ -#define VDSO_TEXT_OFFSET 0x500 +#define VDSO_TEXT_OFFSET 0x600 diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c index 0350071..bc18cbb 100644 --- a/crypto/async_tx/async_tx.c +++ b/crypto/async_tx/async_tx.c @@ -80,6 +80,7 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) { enum dma_status status; struct dma_async_tx_descriptor *iter; + struct dma_async_tx_descriptor *parent; if (!tx) return DMA_SUCCESS; @@ -87,8 +88,15 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx) /* poll through the dependency chain, return when tx is complete */ do { iter = tx; - while (iter->cookie == -EBUSY) - iter = iter->parent; + + /* find the root of the unsubmitted dependency chain */ + while (iter->cookie == -EBUSY) { + parent = iter->parent; + if (parent && parent->cookie == -EBUSY) + iter = iter->parent; + else + break; + } status = dma_sync_wait(iter->chan, iter->cookie); } while (status == DMA_IN_PROGRESS || (iter != tx)); diff --git a/drivers/acpi/processor_core.c b/drivers/acpi/processor_core.c index 2afb3d2..9f11dc2 100644 --- a/drivers/acpi/processor_core.c +++ b/drivers/acpi/processor_core.c @@ -102,6 +102,8 @@ static struct acpi_driver acpi_processor_driver = { .add = acpi_processor_add, .remove = acpi_processor_remove, .start = acpi_processor_start, + 
.suspend = acpi_processor_suspend, + .resume = acpi_processor_resume, }, }; diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c index d9b8af7..f182613 100644 --- a/drivers/acpi/processor_idle.c +++ b/drivers/acpi/processor_idle.c @@ -325,6 +325,23 @@ static void acpi_state_timer_broadcast(struct acpi_processor *pr, #endif +/* + * Suspend / resume control + */ +static int acpi_idle_suspend; + +int acpi_processor_suspend(struct acpi_device * device, pm_message_t state) +{ + acpi_idle_suspend = 1; + return 0; +} + +int acpi_processor_resume(struct acpi_device * device) +{ + acpi_idle_suspend = 0; + return 0; +} + static void acpi_processor_idle(void) { struct acpi_processor *pr = NULL; @@ -355,7 +372,7 @@ static void acpi_processor_idle(void) } cx = pr->power.state; - if (!cx) { + if (!cx || acpi_idle_suspend) { if (pm_idle_save) pm_idle_save(); else diff --git a/drivers/acpi/sleep/Makefile b/drivers/acpi/sleep/Makefile index 195a4f6..f1fb888 100644 --- a/drivers/acpi/sleep/Makefile +++ b/drivers/acpi/sleep/Makefile @@ -1,5 +1,5 @@ -obj-y := poweroff.o wakeup.o -obj-$(CONFIG_ACPI_SLEEP) += main.o +obj-y := wakeup.o +obj-y += main.o obj-$(CONFIG_ACPI_SLEEP) += proc.o EXTRA_CFLAGS += $(ACPI_CFLAGS) diff --git a/drivers/acpi/sleep/main.c b/drivers/acpi/sleep/main.c index c52ade8..2cbb9aa 100644 --- a/drivers/acpi/sleep/main.c +++ b/drivers/acpi/sleep/main.c @@ -15,13 +15,39 @@ #include <linux/dmi.h> #include <linux/device.h> #include <linux/suspend.h> + +#include <asm/io.h> + #include <acpi/acpi_bus.h> #include <acpi/acpi_drivers.h> #include "sleep.h" u8 sleep_states[ACPI_S_STATE_COUNT]; +#ifdef CONFIG_PM_SLEEP static u32 acpi_target_sleep_state = ACPI_STATE_S0; +#endif + +int acpi_sleep_prepare(u32 acpi_state) +{ +#ifdef CONFIG_ACPI_SLEEP + /* do we have a wakeup address for S2 and S3? 
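 * (For context: acpi_wakeup_address is the low-memory, real-mode
 * trampoline the processor resumes into after S3, and
 * acpi_set_firmware_waking_vector() below hands its physical address
 * to the firmware; without it, S3 entry is refused with -EFAULT.)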
*/ + if (acpi_state == ACPI_STATE_S3) { + if (!acpi_wakeup_address) { + return -EFAULT; + } + acpi_set_firmware_waking_vector((acpi_physical_address) + virt_to_phys((void *) + acpi_wakeup_address)); + + } + ACPI_FLUSH_CPU_CACHE(); + acpi_enable_wakeup_device_prep(acpi_state); +#endif + acpi_gpe_sleep_prepare(acpi_state); + acpi_enter_sleep_state_prep(acpi_state); + return 0; +} #ifdef CONFIG_SUSPEND static struct pm_ops acpi_pm_ops; @@ -275,6 +301,7 @@ int acpi_suspend(u32 acpi_state) return -EINVAL; } +#ifdef CONFIG_PM_SLEEP /** * acpi_pm_device_sleep_state - return preferred power state of ACPI device * in the system sleep state given by %acpi_target_sleep_state @@ -349,6 +376,21 @@ int acpi_pm_device_sleep_state(struct device *dev, int wake, int *d_min_p) *d_min_p = d_min; return d_max; } +#endif + +static void acpi_power_off_prepare(void) +{ + /* Prepare to power off the system */ + acpi_sleep_prepare(ACPI_STATE_S5); +} + +static void acpi_power_off(void) +{ + /* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */ + printk("%s called\n", __FUNCTION__); + local_irq_disable(); + acpi_enter_sleep_state(ACPI_STATE_S5); +} int __init acpi_sleep_init(void) { @@ -363,16 +405,17 @@ int __init acpi_sleep_init(void) if (acpi_disabled) return 0; + sleep_states[ACPI_STATE_S0] = 1; + printk(KERN_INFO PREFIX "(supports S0"); + #ifdef CONFIG_SUSPEND - printk(KERN_INFO PREFIX "(supports"); - for (i = ACPI_STATE_S0; i < ACPI_STATE_S4; i++) { + for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) { status = acpi_get_sleep_type_data(i, &type_a, &type_b); if (ACPI_SUCCESS(status)) { sleep_states[i] = 1; printk(" S%d", i); } } - printk(")\n"); pm_set_ops(&acpi_pm_ops); #endif @@ -382,10 +425,16 @@ int __init acpi_sleep_init(void) if (ACPI_SUCCESS(status)) { hibernation_set_ops(&acpi_hibernation_ops); sleep_states[ACPI_STATE_S4] = 1; + printk(" S4"); } -#else - sleep_states[ACPI_STATE_S4] = 0; #endif - + status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); + if (ACPI_SUCCESS(status)) { + sleep_states[ACPI_STATE_S5] = 1; + printk(" S5"); + pm_power_off_prepare = acpi_power_off_prepare; + pm_power_off = acpi_power_off; + } + printk(")\n"); return 0; } diff --git a/drivers/acpi/sleep/poweroff.c b/drivers/acpi/sleep/poweroff.c deleted file mode 100644 index 39e40d5..0000000 --- a/drivers/acpi/sleep/poweroff.c +++ /dev/null @@ -1,75 +0,0 @@ -/* - * poweroff.c - ACPI handler for powering off the system. - * - * AKA S5, but it is independent of whether or not the kernel supports - * any other sleep support in the system. - * - * Copyright (c) 2005 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> - * - * This file is released under the GPLv2. - */ - -#include <linux/pm.h> -#include <linux/init.h> -#include <acpi/acpi_bus.h> -#include <linux/sysdev.h> -#include <asm/io.h> -#include "sleep.h" - -int acpi_sleep_prepare(u32 acpi_state) -{ -#ifdef CONFIG_ACPI_SLEEP - /* do we have a wakeup address for S2 and S3? 
*/ - if (acpi_state == ACPI_STATE_S3) { - if (!acpi_wakeup_address) { - return -EFAULT; - } - acpi_set_firmware_waking_vector((acpi_physical_address) - virt_to_phys((void *) - acpi_wakeup_address)); - - } - ACPI_FLUSH_CPU_CACHE(); - acpi_enable_wakeup_device_prep(acpi_state); -#endif - acpi_gpe_sleep_prepare(acpi_state); - acpi_enter_sleep_state_prep(acpi_state); - return 0; -} - -#ifdef CONFIG_PM - -static void acpi_power_off_prepare(void) -{ - /* Prepare to power off the system */ - acpi_sleep_prepare(ACPI_STATE_S5); -} - -static void acpi_power_off(void) -{ - /* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */ - printk("%s called\n", __FUNCTION__); - local_irq_disable(); - /* Some SMP machines only can poweroff in boot CPU */ - acpi_enter_sleep_state(ACPI_STATE_S5); -} - -static int acpi_poweroff_init(void) -{ - if (!acpi_disabled) { - u8 type_a, type_b; - acpi_status status; - - status = - acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); - if (ACPI_SUCCESS(status)) { - pm_power_off_prepare = acpi_power_off_prepare; - pm_power_off = acpi_power_off; - } - } - return 0; -} - -late_initcall(acpi_poweroff_init); - -#endif /* CONFIG_PM */ diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c index 3c9bb85..d05891f 100644 --- a/drivers/acpi/video.c +++ b/drivers/acpi/video.c @@ -417,7 +417,6 @@ acpi_video_device_lcd_set_level(struct acpi_video_device *device, int level) arg0.integer.value = level; status = acpi_evaluate_object(device->dev->handle, "_BCM", &args, NULL); - printk(KERN_DEBUG "set_level status: %x\n", status); return status; } @@ -1754,7 +1753,7 @@ static int acpi_video_bus_put_devices(struct acpi_video_bus *video) static int acpi_video_bus_start_devices(struct acpi_video_bus *video) { - return acpi_video_bus_DOS(video, 1, 0); + return acpi_video_bus_DOS(video, 0, 0); } static int acpi_video_bus_stop_devices(struct acpi_video_bus *video) diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c index 06f212f..c168203 100644 --- a/drivers/ata/ahci.c +++ b/drivers/ata/ahci.c @@ -418,10 +418,12 @@ static const struct pci_device_id ahci_pci_tbl[] = { /* ATI */ { PCI_VDEVICE(ATI, 0x4380), board_ahci_sb600 }, /* ATI SB600 */ - { PCI_VDEVICE(ATI, 0x4390), board_ahci_sb600 }, /* ATI SB700 IDE */ - { PCI_VDEVICE(ATI, 0x4391), board_ahci_sb600 }, /* ATI SB700 AHCI */ - { PCI_VDEVICE(ATI, 0x4392), board_ahci_sb600 }, /* ATI SB700 nraid5 */ - { PCI_VDEVICE(ATI, 0x4393), board_ahci_sb600 }, /* ATI SB700 raid5 */ + { PCI_VDEVICE(ATI, 0x4390), board_ahci_sb600 }, /* ATI SB700/800 */ + { PCI_VDEVICE(ATI, 0x4391), board_ahci_sb600 }, /* ATI SB700/800 */ + { PCI_VDEVICE(ATI, 0x4392), board_ahci_sb600 }, /* ATI SB700/800 */ + { PCI_VDEVICE(ATI, 0x4393), board_ahci_sb600 }, /* ATI SB700/800 */ + { PCI_VDEVICE(ATI, 0x4394), board_ahci_sb600 }, /* ATI SB700/800 */ + { PCI_VDEVICE(ATI, 0x4395), board_ahci_sb600 }, /* ATI SB700/800 */ /* VIA */ { PCI_VDEVICE(VIA, 0x3349), board_ahci_vt8251 }, /* VIA VT8251 */ diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c index 3b8bf18..6996eb5 100644 --- a/drivers/ata/ata_piix.c +++ b/drivers/ata/ata_piix.c @@ -921,6 +921,13 @@ static int piix_broken_suspend(void) { static struct dmi_system_id sysids[] = { { + .ident = "TECRA M3", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), + DMI_MATCH(DMI_PRODUCT_NAME, "TECRA M3"), + }, + }, + { .ident = "TECRA M5", .matches = { DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index c43de9a..772be09 100644 --- 
a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -3778,6 +3778,9 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { { "Maxtor 6L250S0", "BANC1G10", ATA_HORKAGE_NONCQ }, { "Maxtor 6B200M0", "BANC1BM0", ATA_HORKAGE_NONCQ }, { "Maxtor 6B200M0", "BANC1B10", ATA_HORKAGE_NONCQ }, + { "Maxtor 7B250S0", "BANC1B70", ATA_HORKAGE_NONCQ, }, + { "Maxtor 7B300S0", "BANC1B70", ATA_HORKAGE_NONCQ }, + { "Maxtor 7V300F0", "VA111630", ATA_HORKAGE_NONCQ }, { "HITACHI HDS7250SASUN500G 0621KTAWSD", "K2AOAJ0AHITACHI", ATA_HORKAGE_NONCQ }, /* NCQ hard hangs device under heavier load, needs hard power cycle */ @@ -3794,6 +3797,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { { "WDC WD740ADFD-00NLR1", NULL, ATA_HORKAGE_NONCQ, }, { "FUJITSU MHV2080BH", "00840028", ATA_HORKAGE_NONCQ, }, { "ST9160821AS", "3.CLF", ATA_HORKAGE_NONCQ, }, + { "ST3160812AS", "3.AD", ATA_HORKAGE_NONCQ, }, { "SAMSUNG HD401LJ", "ZZ100-15", ATA_HORKAGE_NONCQ, }, /* devices which puke on READ_NATIVE_MAX */ diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c index 1cce219..8023167 100644 --- a/drivers/ata/libata-sff.c +++ b/drivers/ata/libata-sff.c @@ -297,7 +297,7 @@ void ata_bmdma_start (struct ata_queued_cmd *qc) dmactl = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_CMD); iowrite8(dmactl | ATA_DMA_START, ap->ioaddr.bmdma_addr + ATA_DMA_CMD); - /* Strictly, one may wish to issue a readb() here, to + /* Strictly, one may wish to issue an ioread8() here, to * flush the mmio write. However, control also passes * to the hardware at this point, and it will interrupt * us when we are to resume control. So, in effect, @@ -307,6 +307,9 @@ void ata_bmdma_start (struct ata_queued_cmd *qc) * is expected, so I think it is best to not add a readb() * without first testing all the MMIO ATA cards/mobos. * Or maybe I'm just being paranoid. + * + * FIXME: The posting of this write means I/O starts are + * unnecessarily delayed for MMIO */ } diff --git a/drivers/ata/pata_sis.c b/drivers/ata/pata_sis.c index 2bd7645..cce2834 100644 --- a/drivers/ata/pata_sis.c +++ b/drivers/ata/pata_sis.c @@ -375,8 +375,9 @@ static void sis_66_set_dmamode (struct ata_port *ap, struct ata_device *adev) int drive_pci = sis_old_port_base(adev); u16 timing; + /* MWDMA 0-2 and UDMA 0-5 */ const u16 mwdma_bits[] = { 0x008, 0x302, 0x301 }; - const u16 udma_bits[] = { 0xF000, 0xD000, 0xB000, 0xA000, 0x9000}; + const u16 udma_bits[] = { 0xF000, 0xD000, 0xB000, 0xA000, 0x9000, 0x8000 }; pci_read_config_word(pdev, drive_pci, &timing); diff --git a/drivers/ata/sata_sil24.c b/drivers/ata/sata_sil24.c index ef83e6b..233e886 100644 --- a/drivers/ata/sata_sil24.c +++ b/drivers/ata/sata_sil24.c @@ -888,6 +888,16 @@ static inline void sil24_host_intr(struct ata_port *ap) u32 slot_stat, qc_active; int rc; + /* If PCIX_IRQ_WOC, there's an inherent race window between + * clearing IRQ pending status and reading PORT_SLOT_STAT + * which may cause spurious interrupts afterwards. This is + * unavoidable and much better than losing interrupts which + * happens if IRQ pending is cleared after reading + * PORT_SLOT_STAT.
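 * In code terms, the hunk below acks first and samples second:
 * writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT) followed by
 * readl(port + PORT_SLOT_STAT). A completion landing between the
 * two re-raises an already-acked interrupt and is retried as a
 * spurious one, whereas the old read-then-ack order could clear a
 * completion that was never observed in slot_stat, i.e. lose it.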
+ */ + if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) + writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT); + slot_stat = readl(port + PORT_SLOT_STAT); if (unlikely(slot_stat & HOST_SSTAT_ATTN)) { @@ -895,9 +905,6 @@ static inline void sil24_host_intr(struct ata_port *ap) return; } - if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) - writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT); - qc_active = slot_stat & ~HOST_SSTAT_ATTN; rc = ata_qc_complete_multiple(ap, qc_active, sil24_finish_qc); if (rc > 0) @@ -910,7 +917,8 @@ static inline void sil24_host_intr(struct ata_port *ap) return; } - if (ata_ratelimit()) + /* spurious interrupts are expected if PCIX_IRQ_WOC */ + if (!(ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) && ata_ratelimit()) ata_port_printk(ap, KERN_INFO, "spurious interrupt " "(slot_stat 0x%x active_tag %d sactive 0x%x)\n", slot_stat, ap->active_tag, ap->sactive); diff --git a/drivers/base/core.c b/drivers/base/core.c index 6de33d7a..67c9258 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -284,6 +284,7 @@ static ssize_t show_uevent(struct device *dev, struct device_attribute *attr, /* let the kset specific function add its keys */ pos = data; + memset(envp, 0, sizeof(envp)); retval = kset->uevent_ops->uevent(kset, &dev->kobj, envp, ARRAY_SIZE(envp), pos, PAGE_SIZE); diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c index 67ee3d4..79245714 100644 --- a/drivers/cdrom/cdrom.c +++ b/drivers/cdrom/cdrom.c @@ -1032,6 +1032,10 @@ int cdrom_open(struct cdrom_device_info *cdi, struct inode *ip, struct file *fp) check_disk_change(ip->i_bdev); return 0; err_release: + if (CDROM_CAN(CDC_LOCK) && cdi->options & CDO_LOCK) { + cdi->ops->lock_door(cdi, 0); + cdinfo(CD_OPEN, "door unlocked.\n"); + } cdi->ops->release(cdi); err: cdi->use_count--; diff --git a/drivers/char/drm/i915_drv.h b/drivers/char/drm/i915_drv.h index 737088b..28b9873 100644 --- a/drivers/char/drm/i915_drv.h +++ b/drivers/char/drm/i915_drv.h @@ -210,6 +210,12 @@ extern int i915_wait_ring(struct drm_device * dev, int n, const char *caller); #define I915REG_INT_MASK_R 0x020a8 #define I915REG_INT_ENABLE_R 0x020a0 +#define I915REG_PIPEASTAT 0x70024 +#define I915REG_PIPEBSTAT 0x71024 + +#define I915_VBLANK_INTERRUPT_ENABLE (1UL<<17) +#define I915_VBLANK_CLEAR (1UL<<1) + #define SRX_INDEX 0x3c4 #define SRX_DATA 0x3c5 #define SR01 1 diff --git a/drivers/char/drm/i915_irq.c b/drivers/char/drm/i915_irq.c index 4b4b2ce..bb8e9e9 100644 --- a/drivers/char/drm/i915_irq.c +++ b/drivers/char/drm/i915_irq.c @@ -214,6 +214,10 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS) struct drm_device *dev = (struct drm_device *) arg; drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; u16 temp; + u32 pipea_stats, pipeb_stats; + + pipea_stats = I915_READ(I915REG_PIPEASTAT); + pipeb_stats = I915_READ(I915REG_PIPEBSTAT); temp = I915_READ16(I915REG_INT_IDENTITY_R); @@ -225,6 +229,8 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS) return IRQ_NONE; I915_WRITE16(I915REG_INT_IDENTITY_R, temp); + (void) I915_READ16(I915REG_INT_IDENTITY_R); + DRM_READMEMORYBARRIER(); dev_priv->sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); @@ -252,6 +258,12 @@ irqreturn_t i915_driver_irq_handler(DRM_IRQ_ARGS) if (dev_priv->swaps_pending > 0) drm_locked_tasklet(dev, i915_vblank_tasklet); + I915_WRITE(I915REG_PIPEASTAT, + pipea_stats|I915_VBLANK_INTERRUPT_ENABLE| + I915_VBLANK_CLEAR); + I915_WRITE(I915REG_PIPEBSTAT, + pipeb_stats|I915_VBLANK_INTERRUPT_ENABLE| + I915_VBLANK_CLEAR); } return IRQ_HANDLED; diff --git a/drivers/char/hpet.c 
b/drivers/char/hpet.c index 7ecffc9..4c16778 100644 --- a/drivers/char/hpet.c +++ b/drivers/char/hpet.c @@ -62,6 +62,8 @@ static u32 hpet_nhpet, hpet_max_freq = HPET_USER_FREQ; +/* This clocksource driver currently only works on ia64 */ +#ifdef CONFIG_IA64 static void __iomem *hpet_mctr; static cycle_t read_hpet(void) @@ -79,6 +81,7 @@ static struct clocksource clocksource_hpet = { .flags = CLOCK_SOURCE_IS_CONTINUOUS, }; static struct clocksource *hpet_clocksource; +#endif /* A lock for concurrent access by app and isr hpet activity. */ static DEFINE_SPINLOCK(hpet_lock); @@ -943,14 +946,14 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data) printk(KERN_DEBUG "%s: 0x%lx is busy\n", __FUNCTION__, hdp->hd_phys_address); iounmap(hdp->hd_address); - return -EBUSY; + return AE_ALREADY_EXISTS; } } else if (res->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32) { struct acpi_resource_fixed_memory32 *fixmem32; fixmem32 = &res->data.fixed_memory32; if (!fixmem32) - return -EINVAL; + return AE_NO_MEMORY; hdp->hd_phys_address = fixmem32->address; hdp->hd_address = ioremap(fixmem32->address, @@ -960,7 +963,7 @@ static acpi_status hpet_resources(struct acpi_resource *res, void *data) printk(KERN_DEBUG "%s: 0x%lx is busy\n", __FUNCTION__, hdp->hd_phys_address); iounmap(hdp->hd_address); - return -EBUSY; + return AE_ALREADY_EXISTS; } } else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) { struct acpi_resource_extended_irq *irqp; diff --git a/drivers/char/mspec.c b/drivers/char/mspec.c index 049a46cc9..04ac155 100644 --- a/drivers/char/mspec.c +++ b/drivers/char/mspec.c @@ -155,23 +155,22 @@ mspec_open(struct vm_area_struct *vma) * mspec_close * * Called when unmapping a device mapping. Frees all mspec pages - * belonging to the vma. + * belonging to all the vma's sharing this vma_data structure. */ static void mspec_close(struct vm_area_struct *vma) { struct vma_data *vdata; - int index, last_index, result; + int index, last_index; unsigned long my_page; vdata = vma->vm_private_data; - BUG_ON(vma->vm_start < vdata->vm_start || vma->vm_end > vdata->vm_end); + if (!atomic_dec_and_test(&vdata->refcnt)) + return; - spin_lock(&vdata->lock); - index = (vma->vm_start - vdata->vm_start) >> PAGE_SHIFT; - last_index = (vma->vm_end - vdata->vm_start) >> PAGE_SHIFT; - for (; index < last_index; index++) { + last_index = (vdata->vm_end - vdata->vm_start) >> PAGE_SHIFT; + for (index = 0; index < last_index; index++) { if (vdata->maddr[index] == 0) continue; /* @@ -180,20 +179,12 @@ mspec_close(struct vm_area_struct *vma) */ my_page = vdata->maddr[index]; vdata->maddr[index] = 0; - spin_unlock(&vdata->lock); - result = mspec_zero_block(my_page, PAGE_SIZE); - if (!result) + if (!mspec_zero_block(my_page, PAGE_SIZE)) uncached_free_page(my_page); else printk(KERN_WARNING "mspec_close(): " - "failed to zero page %i\n", - result); - spin_lock(&vdata->lock); + "failed to zero page %ld\n", my_page); } - spin_unlock(&vdata->lock); - - if (!atomic_dec_and_test(&vdata->refcnt)) - return; if (vdata->flags & VMD_VMALLOCED) vfree(vdata); @@ -201,7 +192,6 @@ mspec_close(struct vm_area_struct *vma) kfree(vdata); } - /* * mspec_nopfn * diff --git a/drivers/char/random.c b/drivers/char/random.c index 397c714..af274e5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1550,11 +1550,13 @@ __u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr, * As close as possible to RFC 793, which * suggests using a 250 kHz clock. * Further reading shows this assumes 2 Mb/s networks. 
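 * (Checking the arithmetic of the replacement text below: with a
 * 64 ns tick, the 32-bit sequence space wraps every
 * 2^32 * 64 ns = 274877906944 ns, about 274.9 s, comfortably longer
 * than the 120 s MSL, hence the ">> 6" applied to the nanosecond
 * clock in the code change.)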
- * For 10 Gb/s Ethernet, a 1 GHz clock is appropriate. - * That's funny, Linux has one built in! Use it! - * (Networks are faster now - should this be increased?) + * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. + * For 10 Gb/s Ethernet, a 1 GHz clock should be ok, but + * we also need to limit the resolution so that the u32 seq + * overlaps less than one time per MSL (2 minutes). + * Choosing a clock of 64 ns period is OK. (period of 274 s) */ - seq += ktime_get_real().tv64; + seq += ktime_get_real().tv64 >> 6; #if 0 printk("init_seq(%lx, %lx, %d, %d) = %d\n", saddr, daddr, sport, dport, seq); diff --git a/drivers/char/vt_ioctl.c b/drivers/char/vt_ioctl.c index c6f6f42..c799b7f 100644 --- a/drivers/char/vt_ioctl.c +++ b/drivers/char/vt_ioctl.c @@ -770,6 +770,7 @@ int vt_ioctl(struct tty_struct *tty, struct file * file, /* * Switching-from response */ + acquire_console_sem(); if (vc->vt_newvt >= 0) { if (arg == 0) /* @@ -784,7 +785,6 @@ int vt_ioctl(struct tty_struct *tty, struct file * file, * complete the switch. */ int newvt; - acquire_console_sem(); newvt = vc->vt_newvt; vc->vt_newvt = -1; i = vc_allocate(newvt); @@ -798,7 +798,6 @@ int vt_ioctl(struct tty_struct *tty, struct file * file, * other console switches.. */ complete_change_console(vc_cons[newvt].d); - release_console_sem(); } } @@ -810,9 +809,12 @@ int vt_ioctl(struct tty_struct *tty, struct file * file, /* * If it's just an ACK, ignore it */ - if (arg != VT_ACKACQ) + if (arg != VT_ACKACQ) { + release_console_sem(); return -EINVAL; + } } + release_console_sem(); return 0; @@ -1208,15 +1210,18 @@ void change_console(struct vc_data *new_vc) /* * Send the signal as privileged - kill_pid() will * tell us if the process has gone or something else - * is awry + * is awry. + * + * We need to set vt_newvt *before* sending the signal or we + * have a race. */ + vc->vt_newvt = new_vc->vc_num; if (kill_pid(vc->vt_pid, vc->vt_mode.relsig, 1) == 0) { /* * It worked. Mark the vt to switch to and * return. The process needs to send us a * VT_RELDISP ioctl to complete the switch. */ - vc->vt_newvt = new_vc->vc_num; return; } diff --git a/drivers/ieee1394/ieee1394_core.c b/drivers/ieee1394/ieee1394_core.c index ee452595..98fd985 100644 --- a/drivers/ieee1394/ieee1394_core.c +++ b/drivers/ieee1394/ieee1394_core.c @@ -1273,7 +1273,7 @@ static void __exit ieee1394_cleanup(void) unregister_chrdev_region(IEEE1394_CORE_DEV, 256); } -fs_initcall(ieee1394_init); /* same as ohci1394 */ +module_init(ieee1394_init); module_exit(ieee1394_cleanup); /* Exported symbols */ diff --git a/drivers/ieee1394/ohci1394.c b/drivers/ieee1394/ohci1394.c index 5667c81..372c5c1 100644 --- a/drivers/ieee1394/ohci1394.c +++ b/drivers/ieee1394/ohci1394.c @@ -3537,7 +3537,5 @@ static int __init ohci1394_init(void) return pci_register_driver(&ohci1394_pci_driver); } -/* Register before most other device drivers. - * Useful for remote debugging via physical DMA, e.g. using firescope. 
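 * (Ordering note, assuming the 2.6.23-era initcall levels in
 * include/linux/init.h: fs_initcall() runs at level 5, while
 * module_init() on built-in code is device_initcall() at level 6,
 * so this hunk and the ieee1394_core.c one above simply move the
 * 1394 stack later in the boot sequence.)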
*/ -fs_initcall(ohci1394_init); +module_init(ohci1394_init); module_exit(ohci1394_cleanup); diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c index ba0428d..85c51bd 100644 --- a/drivers/infiniband/hw/mlx4/qp.c +++ b/drivers/infiniband/hw/mlx4/qp.c @@ -1211,12 +1211,42 @@ static void set_datagram_seg(struct mlx4_wqe_datagram_seg *dseg, dseg->qkey = cpu_to_be32(wr->wr.ud.remote_qkey); } -static void set_data_seg(struct mlx4_wqe_data_seg *dseg, - struct ib_sge *sg) +static void set_mlx_icrc_seg(void *dseg) +{ + u32 *t = dseg; + struct mlx4_wqe_inline_seg *iseg = dseg; + + t[1] = 0; + + /* + * Need a barrier here before writing the byte_count field to + * make sure that all the data is visible before the + * byte_count field is set. Otherwise, if the segment begins + * a new cacheline, the HCA prefetcher could grab the 64-byte + * chunk and get a valid (!= * 0xffffffff) byte count but + * stale data, and end up sending the wrong data. + */ + wmb(); + + iseg->byte_count = cpu_to_be32((1 << 31) | 4); +} + +static void set_data_seg(struct mlx4_wqe_data_seg *dseg, struct ib_sge *sg) { - dseg->byte_count = cpu_to_be32(sg->length); dseg->lkey = cpu_to_be32(sg->lkey); dseg->addr = cpu_to_be64(sg->addr); + + /* + * Need a barrier here before writing the byte_count field to + * make sure that all the data is visible before the + * byte_count field is set. Otherwise, if the segment begins + * a new cacheline, the HCA prefetcher could grab the 64-byte + * chunk and get a valid (!= * 0xffffffff) byte count but + * stale data, and end up sending the wrong data. + */ + wmb(); + + dseg->byte_count = cpu_to_be32(sg->length); } int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, @@ -1225,6 +1255,7 @@ int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, struct mlx4_ib_qp *qp = to_mqp(ibqp); void *wqe; struct mlx4_wqe_ctrl_seg *ctrl; + struct mlx4_wqe_data_seg *dseg; unsigned long flags; int nreq; int err = 0; @@ -1324,22 +1355,27 @@ int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, break; } - for (i = 0; i < wr->num_sge; ++i) { - set_data_seg(wqe, wr->sg_list + i); + /* + * Write data segments in reverse order, so as to + * overwrite cacheline stamp last within each + * cacheline. This avoids issues with WQE + * prefetching. + */ - wqe += sizeof (struct mlx4_wqe_data_seg); - size += sizeof (struct mlx4_wqe_data_seg) / 16; - } + dseg = wqe; + dseg += wr->num_sge - 1; + size += wr->num_sge * (sizeof (struct mlx4_wqe_data_seg) / 16); /* Add one more inline data segment for ICRC for MLX sends */ - if (qp->ibqp.qp_type == IB_QPT_SMI || qp->ibqp.qp_type == IB_QPT_GSI) { - ((struct mlx4_wqe_inline_seg *) wqe)->byte_count = - cpu_to_be32((1 << 31) | 4); - ((u32 *) wqe)[1] = 0; - wqe += sizeof (struct mlx4_wqe_data_seg); + if (unlikely(qp->ibqp.qp_type == IB_QPT_SMI || + qp->ibqp.qp_type == IB_QPT_GSI)) { + set_mlx_icrc_seg(dseg + 1); size += sizeof (struct mlx4_wqe_data_seg) / 16; } + for (i = wr->num_sge - 1; i >= 0; --i, --dseg) + set_data_seg(dseg, wr->sg_list + i); + ctrl->fence_size = (wr->send_flags & IB_SEND_FENCE ? 
MLX4_WQE_CTRL_FENCE : 0) | size; diff --git a/drivers/input/joystick/Kconfig b/drivers/input/joystick/Kconfig index e2abe18..7c662ee 100644 --- a/drivers/input/joystick/Kconfig +++ b/drivers/input/joystick/Kconfig @@ -277,7 +277,7 @@ config JOYSTICK_XPAD_FF config JOYSTICK_XPAD_LEDS bool "LED Support for Xbox360 controller 'BigX' LED" - depends on LEDS_CLASS && JOYSTICK_XPAD + depends on JOYSTICK_XPAD && (LEDS_CLASS=y || LEDS_CLASS=JOYSTICK_XPAD) ---help--- This option enables support for the LED which surrounds the Big X on XBox 360 controller. diff --git a/drivers/input/mouse/appletouch.c b/drivers/input/mouse/appletouch.c index 2bea1b2..a1804bf 100644 --- a/drivers/input/mouse/appletouch.c +++ b/drivers/input/mouse/appletouch.c @@ -328,6 +328,7 @@ static void atp_complete(struct urb* urb) { int x, y, x_z, y_z, x_f, y_f; int retval, i, j; + int key; struct atp *dev = urb->context; switch (urb->status) { @@ -468,6 +469,7 @@ static void atp_complete(struct urb* urb) ATP_XFACT, &x_z, &x_f); y = atp_calculate_abs(dev->xy_acc + ATP_XSENSORS, ATP_YSENSORS, ATP_YFACT, &y_z, &y_f); + key = dev->data[dev->datalen - 1] & 1; if (x && y) { if (dev->x_old != -1) { @@ -505,7 +507,7 @@ static void atp_complete(struct urb* urb) the first touch unless reinitialised. Do so if it's been idle for a while in order to avoid waking the kernel up several hundred times a second */ - if (atp_is_geyser_3(dev)) { + if (!key && atp_is_geyser_3(dev)) { dev->idlecount++; if (dev->idlecount == 10) { dev->valid = 0; @@ -514,7 +516,7 @@ static void atp_complete(struct urb* urb) } } - input_report_key(dev->input, BTN_LEFT, dev->data[dev->datalen - 1] & 1); + input_report_key(dev->input, BTN_LEFT, key); input_sync(dev->input); exit: diff --git a/drivers/kvm/Kconfig b/drivers/kvm/Kconfig index 7b64fd4..0a419a0 100644 --- a/drivers/kvm/Kconfig +++ b/drivers/kvm/Kconfig @@ -6,7 +6,8 @@ menuconfig VIRTUALIZATION depends on X86 default y ---help--- - Say Y here to get to see options for virtualization guest drivers. + Say Y here to get to see options for using your Linux host to run other + operating systems inside virtual machines (guests). This option alone does not add any kernel code. If you say N, all options in this submenu will be skipped and disabled. diff --git a/drivers/lguest/lguest_asm.S b/drivers/lguest/lguest_asm.S index f182c6a..1ddcd5c 100644 --- a/drivers/lguest/lguest_asm.S +++ b/drivers/lguest/lguest_asm.S @@ -22,8 +22,9 @@ jmp lguest_init /*G:055 We create a macro which puts the assembler code between lgstart_ and - * lgend_ markers. These templates end up in the .init.text section, so they - * are discarded after boot. */ + * lgend_ markers. These templates are put in the .text section: they can't be + * discarded after boot as we may need to patch modules, too. */ +.text #define LGUEST_PATCH(name, insns...) \ lgstart_##name: insns; lgend_##name:; \ .globl lgstart_##name; .globl lgend_##name @@ -34,7 +35,6 @@ LGUEST_PATCH(popf, movl %eax, lguest_data+LGUEST_DATA_irq_enabled) LGUEST_PATCH(pushf, movl lguest_data+LGUEST_DATA_irq_enabled, %eax) /*:*/ -.text /* These demark the EIP range where host should never deliver interrupts. 
*/ .global lguest_noirq_start .global lguest_noirq_end diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 4d63773..f96dea9 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -514,7 +514,7 @@ static void ops_complete_biofill(void *stripe_head_ref) struct stripe_head *sh = stripe_head_ref; struct bio *return_bi = NULL; raid5_conf_t *conf = sh->raid_conf; - int i, more_to_read = 0; + int i; pr_debug("%s: stripe %llu\n", __FUNCTION__, (unsigned long long)sh->sector); @@ -522,16 +522,14 @@ static void ops_complete_biofill(void *stripe_head_ref) /* clear completed biofills */ for (i = sh->disks; i--; ) { struct r5dev *dev = &sh->dev[i]; - /* check if this stripe has new incoming reads */ - if (dev->toread) - more_to_read++; /* acknowledge completion of a biofill operation */ - /* and check if we need to reply to a read request - */ - if (test_bit(R5_Wantfill, &dev->flags) && !dev->toread) { + /* and check if we need to reply to a read request, + * new R5_Wantfill requests are held off until + * !test_bit(STRIPE_OP_BIOFILL, &sh->ops.pending) + */ + if (test_and_clear_bit(R5_Wantfill, &dev->flags)) { struct bio *rbi, *rbi2; - clear_bit(R5_Wantfill, &dev->flags); /* The access to dev->read is outside of the * spin_lock_irq(&conf->device_lock), but is protected @@ -558,8 +556,7 @@ static void ops_complete_biofill(void *stripe_head_ref) return_io(return_bi); - if (more_to_read) - set_bit(STRIPE_HANDLE, &sh->state); + set_bit(STRIPE_HANDLE, &sh->state); release_stripe(sh); } diff --git a/drivers/media/video/ivtv/ivtv-fileops.c b/drivers/media/video/ivtv/ivtv-fileops.c index 0285c4a..66ea3cb 100644 --- a/drivers/media/video/ivtv/ivtv-fileops.c +++ b/drivers/media/video/ivtv/ivtv-fileops.c @@ -754,9 +754,11 @@ static void ivtv_stop_decoding(struct ivtv_open_id *id, int flags, u64 pts) ivtv_yuv_close(itv); } if (s->type == IVTV_DEC_STREAM_TYPE_YUV && itv->output_mode == OUT_YUV) - itv->output_mode = OUT_NONE; + itv->output_mode = OUT_NONE; + else if (s->type == IVTV_DEC_STREAM_TYPE_YUV && itv->output_mode == OUT_UDMA_YUV) + itv->output_mode = OUT_NONE; else if (s->type == IVTV_DEC_STREAM_TYPE_MPG && itv->output_mode == OUT_MPG) - itv->output_mode = OUT_NONE; + itv->output_mode = OUT_NONE; itv->speed = 0; clear_bit(IVTV_F_I_DEC_PAUSED, &itv->i_flags); diff --git a/drivers/media/video/usbvision/usbvision-video.c b/drivers/media/video/usbvision/usbvision-video.c index e3371f9..0cb006f 100644 --- a/drivers/media/video/usbvision/usbvision-video.c +++ b/drivers/media/video/usbvision/usbvision-video.c @@ -1387,7 +1387,6 @@ static const struct file_operations usbvision_fops = { .ioctl = video_ioctl2, .llseek = no_llseek, /* .poll = video_poll, */ - .mmap = usbvision_v4l2_mmap, .compat_ioctl = v4l_compat_ioctl32, }; static struct video_device usbvision_video_template = { @@ -1413,7 +1412,7 @@ static struct video_device usbvision_video_template = { .vidioc_s_input = vidioc_s_input, .vidioc_queryctrl = vidioc_queryctrl, .vidioc_g_audio = vidioc_g_audio, - .vidioc_g_audio = vidioc_s_audio, + .vidioc_s_audio = vidioc_s_audio, .vidioc_g_ctrl = vidioc_g_ctrl, .vidioc_s_ctrl = vidioc_s_ctrl, .vidioc_streamon = vidioc_streamon, @@ -1459,7 +1458,7 @@ static struct video_device usbvision_radio_template= .vidioc_s_input = vidioc_s_input, .vidioc_queryctrl = vidioc_queryctrl, .vidioc_g_audio = vidioc_g_audio, - .vidioc_g_audio = vidioc_s_audio, + .vidioc_s_audio = vidioc_s_audio, .vidioc_g_ctrl = vidioc_g_ctrl, .vidioc_s_ctrl = vidioc_s_ctrl, .vidioc_g_tuner = vidioc_g_tuner, diff --git a/drivers/net/bnx2.c 
b/drivers/net/bnx2.c index 854d80c..66eed22 100644 --- a/drivers/net/bnx2.c +++ b/drivers/net/bnx2.c @@ -54,8 +54,8 @@ #define DRV_MODULE_NAME "bnx2" #define PFX DRV_MODULE_NAME ": " -#define DRV_MODULE_VERSION "1.6.4" -#define DRV_MODULE_RELDATE "August 3, 2007" +#define DRV_MODULE_VERSION "1.6.5" +#define DRV_MODULE_RELDATE "September 20, 2007" #define RUN_AT(x) (jiffies + (x)) @@ -6727,7 +6727,8 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev) } else if (CHIP_NUM(bp) == CHIP_NUM_5706 || CHIP_NUM(bp) == CHIP_NUM_5708) bp->phy_flags |= PHY_CRC_FIX_FLAG; - else if (CHIP_ID(bp) == CHIP_ID_5709_A0) + else if (CHIP_ID(bp) == CHIP_ID_5709_A0 || + CHIP_ID(bp) == CHIP_ID_5709_A1) bp->phy_flags |= PHY_DIS_EARLY_DAC_FLAG; if ((CHIP_ID(bp) == CHIP_ID_5708_A0) || diff --git a/drivers/net/e1000/e1000_ethtool.c b/drivers/net/e1000/e1000_ethtool.c index 4c3785c..9ecc3ad 100644 --- a/drivers/net/e1000/e1000_ethtool.c +++ b/drivers/net/e1000/e1000_ethtool.c @@ -1726,6 +1726,7 @@ static int e1000_wol_exclusion(struct e1000_adapter *adapter, struct ethtool_wol case E1000_DEV_ID_82571EB_QUAD_COPPER: case E1000_DEV_ID_82571EB_QUAD_FIBER: case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE: + case E1000_DEV_ID_82571PT_QUAD_COPPER: case E1000_DEV_ID_82546GB_QUAD_COPPER_KSP3: /* quad port adapters only support WoL on port A */ if (!adapter->quad_port_a) { diff --git a/drivers/net/e1000/e1000_hw.c b/drivers/net/e1000/e1000_hw.c index ba120f7..8604adb 100644 --- a/drivers/net/e1000/e1000_hw.c +++ b/drivers/net/e1000/e1000_hw.c @@ -387,6 +387,7 @@ e1000_set_mac_type(struct e1000_hw *hw) case E1000_DEV_ID_82571EB_SERDES_DUAL: case E1000_DEV_ID_82571EB_SERDES_QUAD: case E1000_DEV_ID_82571EB_QUAD_COPPER: + case E1000_DEV_ID_82571PT_QUAD_COPPER: case E1000_DEV_ID_82571EB_QUAD_FIBER: case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE: hw->mac_type = e1000_82571; diff --git a/drivers/net/e1000/e1000_hw.h b/drivers/net/e1000/e1000_hw.h index fe87146..07f0ea7 100644 --- a/drivers/net/e1000/e1000_hw.h +++ b/drivers/net/e1000/e1000_hw.h @@ -475,6 +475,7 @@ int32_t e1000_check_phy_reset_block(struct e1000_hw *hw); #define E1000_DEV_ID_82571EB_FIBER 0x105F #define E1000_DEV_ID_82571EB_SERDES 0x1060 #define E1000_DEV_ID_82571EB_QUAD_COPPER 0x10A4 +#define E1000_DEV_ID_82571PT_QUAD_COPPER 0x10D5 #define E1000_DEV_ID_82571EB_QUAD_FIBER 0x10A5 #define E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE 0x10BC #define E1000_DEV_ID_82571EB_SERDES_DUAL 0x10D9 diff --git a/drivers/net/e1000/e1000_main.c b/drivers/net/e1000/e1000_main.c index 4a22595..e7c8951 100644 --- a/drivers/net/e1000/e1000_main.c +++ b/drivers/net/e1000/e1000_main.c @@ -108,6 +108,7 @@ static struct pci_device_id e1000_pci_tbl[] = { INTEL_E1000_ETHERNET_DEVICE(0x10BC), INTEL_E1000_ETHERNET_DEVICE(0x10C4), INTEL_E1000_ETHERNET_DEVICE(0x10C5), + INTEL_E1000_ETHERNET_DEVICE(0x10D5), INTEL_E1000_ETHERNET_DEVICE(0x10D9), INTEL_E1000_ETHERNET_DEVICE(0x10DA), /* required last entry */ @@ -1101,6 +1102,7 @@ e1000_probe(struct pci_dev *pdev, case E1000_DEV_ID_82571EB_QUAD_COPPER: case E1000_DEV_ID_82571EB_QUAD_FIBER: case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE: + case E1000_DEV_ID_82571PT_QUAD_COPPER: /* if quad port adapter, disable WoL on all but port A */ if (global_quad_port_a != 0) adapter->eeprom_wol = 0; diff --git a/drivers/net/mv643xx_eth.c b/drivers/net/mv643xx_eth.c index 6a117e9..3153356 100644 --- a/drivers/net/mv643xx_eth.c +++ b/drivers/net/mv643xx_eth.c @@ -534,7 +534,7 @@ static irqreturn_t mv643xx_eth_int_handler(int irq, void *dev_id) } /* PHY 
status changed */ - if (eth_int_cause_ext & ETH_INT_CAUSE_PHY) { + if (eth_int_cause_ext & (ETH_INT_CAUSE_PHY | ETH_INT_CAUSE_STATE)) { struct ethtool_cmd cmd; if (mii_link_ok(&mp->mii)) { @@ -1357,7 +1357,6 @@ static int mv643xx_eth_probe(struct platform_device *pdev) #endif dev->watchdog_timeo = 2 * HZ; - dev->tx_queue_len = mp->tx_ring_size; dev->base_addr = 0; dev->change_mtu = mv643xx_eth_change_mtu; dev->do_ioctl = mv643xx_eth_do_ioctl; @@ -2768,8 +2767,6 @@ static const struct ethtool_ops mv643xx_ethtool_ops = { .get_stats_count = mv643xx_get_stats_count, .get_ethtool_stats = mv643xx_get_ethtool_stats, .get_strings = mv643xx_get_strings, - .get_stats_count = mv643xx_get_stats_count, - .get_ethtool_stats = mv643xx_get_ethtool_stats, .nway_reset = mv643xx_eth_nway_restart, }; diff --git a/drivers/net/mv643xx_eth.h b/drivers/net/mv643xx_eth.h index 82f8c0c..565b966 100644 --- a/drivers/net/mv643xx_eth.h +++ b/drivers/net/mv643xx_eth.h @@ -64,7 +64,9 @@ #define ETH_INT_CAUSE_TX_ERROR (ETH_TX_QUEUES_ENABLED << 8) #define ETH_INT_CAUSE_TX (ETH_INT_CAUSE_TX_DONE | ETH_INT_CAUSE_TX_ERROR) #define ETH_INT_CAUSE_PHY 0x00010000 -#define ETH_INT_UNMASK_ALL_EXT (ETH_INT_CAUSE_TX | ETH_INT_CAUSE_PHY) +#define ETH_INT_CAUSE_STATE 0x00100000 +#define ETH_INT_UNMASK_ALL_EXT (ETH_INT_CAUSE_TX | ETH_INT_CAUSE_PHY | \ + ETH_INT_CAUSE_STATE) #define ETH_INT_MASK_ALL 0x00000000 #define ETH_INT_MASK_ALL_EXT 0x00000000 diff --git a/drivers/net/myri10ge/myri10ge.c b/drivers/net/myri10ge/myri10ge.c index 1c42266..556962f 100644 --- a/drivers/net/myri10ge/myri10ge.c +++ b/drivers/net/myri10ge/myri10ge.c @@ -3094,9 +3094,12 @@ static void myri10ge_remove(struct pci_dev *pdev) } #define PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E 0x0008 +#define PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E_9 0x0009 static struct pci_device_id myri10ge_pci_tbl[] = { {PCI_DEVICE(PCI_VENDOR_ID_MYRICOM, PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E)}, + {PCI_DEVICE + (PCI_VENDOR_ID_MYRICOM, PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E_9)}, {0}, }; diff --git a/drivers/net/pcmcia/3c589_cs.c b/drivers/net/pcmcia/3c589_cs.c index c06cae3..503f268 100644 --- a/drivers/net/pcmcia/3c589_cs.c +++ b/drivers/net/pcmcia/3c589_cs.c @@ -116,7 +116,7 @@ struct el3_private { spinlock_t lock; }; -static const char *if_names[] = { "auto", "10base2", "10baseT", "AUI" }; +static const char *if_names[] = { "auto", "10baseT", "10base2", "AUI" }; /*====================================================================*/ diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c index 0cc4369..cb230f4 100644 --- a/drivers/net/phy/phy.c +++ b/drivers/net/phy/phy.c @@ -409,6 +409,7 @@ int phy_mii_ioctl(struct phy_device *phydev, return 0; } +EXPORT_SYMBOL(phy_mii_ioctl); /** * phy_start_aneg - start auto-negotiation for this PHY device diff --git a/drivers/net/ppp_mppe.c b/drivers/net/ppp_mppe.c index f79cf87..c0b6d19 100644 --- a/drivers/net/ppp_mppe.c +++ b/drivers/net/ppp_mppe.c @@ -136,7 +136,7 @@ struct ppp_mppe_state { * Key Derivation, from RFC 3078, RFC 3079. * Equivalent to Get_Key() for MS-CHAP as described in RFC 3079. 
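 * (Get_Key() per RFC 3079 is a SHA-1 digest over the master key, a
 * constant pad, the current session key, and a second constant pad;
 * the change below keeps that digest in state->sha1_digest and uses
 * it directly instead of copying it to an on-stack InterimKey.)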
*/ -static void get_new_key_from_sha(struct ppp_mppe_state * state, unsigned char *InterimKey) +static void get_new_key_from_sha(struct ppp_mppe_state * state) { struct hash_desc desc; struct scatterlist sg[4]; @@ -153,8 +153,6 @@ static void get_new_key_from_sha(struct ppp_mppe_state * state, unsigned char *I desc.flags = 0; crypto_hash_digest(&desc, sg, nbytes, state->sha1_digest); - - memcpy(InterimKey, state->sha1_digest, state->keylen); } /* @@ -163,21 +161,21 @@ static void get_new_key_from_sha(struct ppp_mppe_state * state, unsigned char *I */ static void mppe_rekey(struct ppp_mppe_state * state, int initial_key) { - unsigned char InterimKey[MPPE_MAX_KEY_LEN]; struct scatterlist sg_in[1], sg_out[1]; struct blkcipher_desc desc = { .tfm = state->arc4 }; - get_new_key_from_sha(state, InterimKey); + get_new_key_from_sha(state); if (!initial_key) { - crypto_blkcipher_setkey(state->arc4, InterimKey, state->keylen); - setup_sg(sg_in, InterimKey, state->keylen); + crypto_blkcipher_setkey(state->arc4, state->sha1_digest, + state->keylen); + setup_sg(sg_in, state->sha1_digest, state->keylen); setup_sg(sg_out, state->session_key, state->keylen); if (crypto_blkcipher_encrypt(&desc, sg_out, sg_in, state->keylen) != 0) { printk(KERN_WARNING "mppe_rekey: cipher_encrypt failed\n"); } } else { - memcpy(state->session_key, InterimKey, state->keylen); + memcpy(state->session_key, state->sha1_digest, state->keylen); } if (state->keylen == 8) { /* See RFC 3078 */ diff --git a/drivers/net/pppoe.c b/drivers/net/pppoe.c index 0d7f570..9b30cd6 100644 --- a/drivers/net/pppoe.c +++ b/drivers/net/pppoe.c @@ -879,8 +879,7 @@ static int __pppoe_xmit(struct sock *sk, struct sk_buff *skb) dev->hard_header(skb, dev, ETH_P_PPP_SES, po->pppoe_pa.remote, NULL, data_len); - if (dev_queue_xmit(skb) < 0) - goto abort; + dev_queue_xmit(skb); return 1; diff --git a/drivers/net/pppol2tp.c b/drivers/net/pppol2tp.c index 266e8b3..abe91cb 100644 --- a/drivers/net/pppol2tp.c +++ b/drivers/net/pppol2tp.c @@ -491,44 +491,46 @@ static int pppol2tp_recv_core(struct sock *sock, struct sk_buff *skb) u16 hdrflags; u16 tunnel_id, session_id; int length; - struct udphdr *uh; + int offset; tunnel = pppol2tp_sock_to_tunnel(sock); if (tunnel == NULL) goto error; + /* UDP always verifies the packet length. */ + __skb_pull(skb, sizeof(struct udphdr)); + /* Short packet? */ - if (skb->len < sizeof(struct udphdr)) { + if (!pskb_may_pull(skb, 12)) { PRINTK(tunnel->debug, PPPOL2TP_MSG_DATA, KERN_INFO, "%s: recv short packet (len=%d)\n", tunnel->name, skb->len); goto error; } /* Point to L2TP header */ - ptr = skb->data + sizeof(struct udphdr); + ptr = skb->data; /* Get L2TP header flags */ hdrflags = ntohs(*(__be16*)ptr); /* Trace packet contents, if enabled */ if (tunnel->debug & PPPOL2TP_MSG_DATA) { + length = min(16u, skb->len); + if (!pskb_may_pull(skb, length)) + goto error; + printk(KERN_DEBUG "%s: recv: ", tunnel->name); - for (length = 0; length < 16; length++) - printk(" %02X", ptr[length]); + offset = 0; + do { + printk(" %02X", ptr[offset]); + } while (++offset < length); + printk("\n"); } /* Get length of L2TP packet */ - uh = (struct udphdr *) skb_transport_header(skb); - length = ntohs(uh->len) - sizeof(struct udphdr); - - /* Too short? */ - if (length < 12) { - PRINTK(tunnel->debug, PPPOL2TP_MSG_DATA, KERN_INFO, - "%s: recv short L2TP packet (len=%d)\n", tunnel->name, length); - goto error; - } + length = skb->len; /* If type is control packet, it is handled by userspace. 
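 * (The recurring pattern in this hunk: call pskb_may_pull(skb, n)
 * before reading n header bytes at skb->data, since those bytes may
 * still sit in non-linear fragments; only after the check are
 * __skb_pull() and pointer dereferences into the header safe.)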
*/ if (hdrflags & L2TP_HDRFLAG_T) { @@ -606,7 +608,6 @@ static int pppol2tp_recv_core(struct sock *sock, struct sk_buff *skb) "%s: recv data has no seq numbers when required. " "Discarding\n", session->name); session->stats.rx_seq_discards++; - session->stats.rx_errors++; goto discard; } @@ -625,7 +626,6 @@ static int pppol2tp_recv_core(struct sock *sock, struct sk_buff *skb) "%s: recv data has no seq numbers when required. " "Discarding\n", session->name); session->stats.rx_seq_discards++; - session->stats.rx_errors++; goto discard; } @@ -634,10 +634,14 @@ static int pppol2tp_recv_core(struct sock *sock, struct sk_buff *skb) } /* If offset bit set, skip it. */ - if (hdrflags & L2TP_HDRFLAG_O) - ptr += 2 + ntohs(*(__be16 *) ptr); + if (hdrflags & L2TP_HDRFLAG_O) { + offset = ntohs(*(__be16 *)ptr); + skb->transport_header += 2 + offset; + if (!pskb_may_pull(skb, skb_transport_offset(skb) + 2)) + goto discard; + } - skb_pull(skb, ptr - skb->data); + __skb_pull(skb, skb_transport_offset(skb)); /* Skip PPP header, if present. In testing, Microsoft L2TP clients * don't send the PPP header (PPP header compression enabled), but @@ -673,7 +677,6 @@ static int pppol2tp_recv_core(struct sock *sock, struct sk_buff *skb) */ if (PPPOL2TP_SKB_CB(skb)->ns != session->nr) { session->stats.rx_seq_discards++; - session->stats.rx_errors++; PRINTK(session->debug, PPPOL2TP_MSG_SEQ, KERN_DEBUG, "%s: oos pkt %hu len %d discarded, " "waiting for %hu, reorder_q_len=%d\n", @@ -698,6 +701,7 @@ static int pppol2tp_recv_core(struct sock *sock, struct sk_buff *skb) return 0; discard: + session->stats.rx_errors++; kfree_skb(skb); sock_put(session->sock); @@ -958,7 +962,6 @@ static int pppol2tp_xmit(struct ppp_channel *chan, struct sk_buff *skb) int data_len = skb->len; struct inet_sock *inet; __wsum csum = 0; - struct sk_buff *skb2 = NULL; struct udphdr *uh; unsigned int len; @@ -989,41 +992,30 @@ static int pppol2tp_xmit(struct ppp_channel *chan, struct sk_buff *skb) */ headroom = NET_SKB_PAD + sizeof(struct iphdr) + sizeof(struct udphdr) + hdr_len + sizeof(ppph); - if (skb_headroom(skb) < headroom) { - skb2 = skb_realloc_headroom(skb, headroom); - if (skb2 == NULL) - goto abort; - } else - skb2 = skb; - - /* Check that the socket has room */ - if (atomic_read(&sk_tun->sk_wmem_alloc) < sk_tun->sk_sndbuf) - skb_set_owner_w(skb2, sk_tun); - else - goto discard; + if (skb_cow_head(skb, headroom)) + goto abort; /* Setup PPP header */ - skb_push(skb2, sizeof(ppph)); - skb2->data[0] = ppph[0]; - skb2->data[1] = ppph[1]; + __skb_push(skb, sizeof(ppph)); + skb->data[0] = ppph[0]; + skb->data[1] = ppph[1]; /* Setup L2TP header */ - skb_push(skb2, hdr_len); - pppol2tp_build_l2tp_header(session, skb2->data); + pppol2tp_build_l2tp_header(session, __skb_push(skb, hdr_len)); /* Setup UDP header */ inet = inet_sk(sk_tun); - skb_push(skb2, sizeof(struct udphdr)); - skb_reset_transport_header(skb2); - uh = (struct udphdr *) skb2->data; + __skb_push(skb, sizeof(*uh)); + skb_reset_transport_header(skb); + uh = udp_hdr(skb); uh->source = inet->sport; uh->dest = inet->dport; uh->len = htons(sizeof(struct udphdr) + hdr_len + sizeof(ppph) + data_len); uh->check = 0; - /* Calculate UDP checksum if configured to do so */ + /* *BROKEN* Calculate UDP checksum if configured to do so */ if (sk_tun->sk_no_check != UDP_CSUM_NOXMIT) - csum = udp_csum_outgoing(sk_tun, skb2); + csum = udp_csum_outgoing(sk_tun, skb); /* Debug */ if (session->send_seq) @@ -1036,7 +1028,7 @@ static int pppol2tp_xmit(struct ppp_channel *chan, struct sk_buff *skb) if 
(session->debug & PPPOL2TP_MSG_DATA) { int i; - unsigned char *datap = skb2->data; + unsigned char *datap = skb->data; printk(KERN_DEBUG "%s: xmit:", session->name); for (i = 0; i < data_len; i++) { @@ -1049,18 +1041,18 @@ static int pppol2tp_xmit(struct ppp_channel *chan, struct sk_buff *skb) printk("\n"); } - memset(&(IPCB(skb2)->opt), 0, sizeof(IPCB(skb2)->opt)); - IPCB(skb2)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | - IPSKB_REROUTED); - nf_reset(skb2); + memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); + IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | + IPSKB_REROUTED); + nf_reset(skb); /* Get routing info from the tunnel socket */ - dst_release(skb2->dst); - skb2->dst = sk_dst_get(sk_tun); + dst_release(skb->dst); + skb->dst = sk_dst_get(sk_tun); /* Queue the packet to IP for output */ - len = skb2->len; - rc = ip_queue_xmit(skb2, 1); + len = skb->len; + rc = ip_queue_xmit(skb, 1); /* Update stats */ if (rc >= 0) { @@ -1073,17 +1065,12 @@ static int pppol2tp_xmit(struct ppp_channel *chan, struct sk_buff *skb) session->stats.tx_errors++; } - /* Free the original skb */ - kfree_skb(skb); - return 1; -discard: - /* Free the new skb. Caller will free original skb. */ - if (skb2 != skb) - kfree_skb(skb2); abort: - return 0; + /* Free the original skb */ + kfree_skb(skb); + return 1; } /***************************************************************************** @@ -1326,12 +1313,14 @@ static struct sock *pppol2tp_prepare_tunnel_socket(int fd, u16 tunnel_id, goto err; } + sk = sock->sk; + /* Quick sanity checks */ - err = -ESOCKTNOSUPPORT; - if (sock->type != SOCK_DGRAM) { + err = -EPROTONOSUPPORT; + if (sk->sk_protocol != IPPROTO_UDP) { PRINTK(-1, PPPOL2TP_MSG_CONTROL, KERN_ERR, - "tunl %hu: fd %d wrong type, got %d, expected %d\n", - tunnel_id, fd, sock->type, SOCK_DGRAM); + "tunl %hu: fd %d wrong protocol, got %d, expected %d\n", + tunnel_id, fd, sk->sk_protocol, IPPROTO_UDP); goto err; } err = -EAFNOSUPPORT; @@ -1343,7 +1332,6 @@ static struct sock *pppol2tp_prepare_tunnel_socket(int fd, u16 tunnel_id, } err = -ENOTCONN; - sk = sock->sk; /* Check if this socket has already been prepped */ tunnel = (struct pppol2tp_tunnel *)sk->sk_user_data; diff --git a/drivers/net/qla3xxx.c b/drivers/net/qla3xxx.c index 69da95b..ea15131 100755 --- a/drivers/net/qla3xxx.c +++ b/drivers/net/qla3xxx.c @@ -2248,6 +2248,13 @@ static int ql_tx_rx_clean(struct ql3_adapter *qdev, qdev->rsp_consumer_index) && (work_done < work_to_do)) { net_rsp = qdev->rsp_current; + rmb(); + /* + * Fix 4032 chip's undocumented "feature" where bit-8 is set if the + * inbound completion is for a VLAN. + */ + if (qdev->device_id == QL3032_DEVICE_ID) + net_rsp->opcode &= 0x7f; switch (net_rsp->opcode) { case OPCODE_OB_MAC_IOCB_FN0: diff --git a/drivers/net/r8169.c b/drivers/net/r8169.c index b85ab4a..c921ec3 100644 --- a/drivers/net/r8169.c +++ b/drivers/net/r8169.c @@ -1228,7 +1228,10 @@ static void rtl8169_hw_phy_config(struct net_device *dev) return; } - /* phy config for RTL8169s mac_version C chip */ + if ((tp->mac_version != RTL_GIGA_MAC_VER_02) && + (tp->mac_version != RTL_GIGA_MAC_VER_03)) + return; + mdio_write(ioaddr, 31, 0x0001); //w 31 2 0 1 mdio_write(ioaddr, 21, 0x1000); //w 21 15 0 1000 mdio_write(ioaddr, 24, 0x65c7); //w 24 15 0 65c7 @@ -2567,6 +2570,15 @@ static void rtl8169_tx_interrupt(struct net_device *dev, (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) { netif_wake_queue(dev); } + /* + * 8168 hack: TxPoll requests are lost when the Tx packets are + * too close.
Let's kick an extra TxPoll request when a burst + * of start_xmit activity is detected (if it is not detected, + * it is slow enough). -- FR + */ + smp_rmb(); + if (tp->cur_tx != dirty_tx) + RTL_W8(TxPoll, NPQ); } } diff --git a/drivers/net/sky2.c b/drivers/net/sky2.c index 5d812de..162489b 100644 --- a/drivers/net/sky2.c +++ b/drivers/net/sky2.c @@ -51,7 +51,7 @@ #include "sky2.h" #define DRV_NAME "sky2" -#define DRV_VERSION "1.17" +#define DRV_VERSION "1.18" #define PFX DRV_NAME " " /* @@ -118,12 +118,15 @@ static const struct pci_device_id sky2_id_table[] = { { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4351) }, /* 88E8036 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4352) }, /* 88E8038 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4353) }, /* 88E8039 */ + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4354) }, /* 88E8040 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4356) }, /* 88EC033 */ + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x435A) }, /* 88E8048 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4360) }, /* 88E8052 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4361) }, /* 88E8050 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4362) }, /* 88E8053 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4363) }, /* 88E8055 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4364) }, /* 88E8056 */ + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4365) }, /* 88E8070 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4366) }, /* 88EC036 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4367) }, /* 88EC032 */ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4368) }, /* 88EC034 */ @@ -147,6 +150,7 @@ static const char *yukon2_name[] = { "Extreme", /* 0xb5 */ "EC", /* 0xb6 */ "FE", /* 0xb7 */ + "FE+", /* 0xb8 */ }; static void sky2_set_multicast(struct net_device *dev); @@ -217,8 +221,7 @@ static void sky2_power_on(struct sky2_hw *hw) else sky2_write8(hw, B2_Y2_CLK_GATE, 0); - if (hw->chip_id == CHIP_ID_YUKON_EC_U || - hw->chip_id == CHIP_ID_YUKON_EX) { + if (hw->flags & SKY2_HW_ADV_POWER_CTL) { u32 reg; sky2_pci_write32(hw, PCI_DEV_REG3, 0); @@ -311,10 +314,8 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) struct sky2_port *sky2 = netdev_priv(hw->dev[port]); u16 ctrl, ct1000, adv, pg, ledctrl, ledover, reg; - if (sky2->autoneg == AUTONEG_ENABLE - && !(hw->chip_id == CHIP_ID_YUKON_XL - || hw->chip_id == CHIP_ID_YUKON_EC_U - || hw->chip_id == CHIP_ID_YUKON_EX)) { + if (sky2->autoneg == AUTONEG_ENABLE && + !(hw->flags & SKY2_HW_NEWER_PHY)) { u16 ectrl = gm_phy_read(hw, port, PHY_MARV_EXT_CTRL); ectrl &= ~(PHY_M_EC_M_DSC_MSK | PHY_M_EC_S_DSC_MSK | @@ -334,9 +335,19 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) ctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL); if (sky2_is_copper(hw)) { - if (hw->chip_id == CHIP_ID_YUKON_FE) { + if (!(hw->flags & SKY2_HW_GIGABIT)) { /* enable automatic crossover */ ctrl |= PHY_M_PC_MDI_XMODE(PHY_M_PC_ENA_AUTO) >> 1; + + if (hw->chip_id == CHIP_ID_YUKON_FE_P && + hw->chip_rev == CHIP_REV_YU_FE2_A0) { + u16 spec; + + /* Enable Class A driver for FE+ A0 */ + spec = gm_phy_read(hw, port, PHY_MARV_FE_SPEC_2); + spec |= PHY_M_FESC_SEL_CL_A; + gm_phy_write(hw, port, PHY_MARV_FE_SPEC_2, spec); + } } else { /* disable energy detect */ ctrl &= ~PHY_M_PC_EN_DET_MSK; @@ -346,9 +357,7 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) /* downshift on PHY 88E1112 and 88E1149 is changed */ if (sky2->autoneg == AUTONEG_ENABLE - && (hw->chip_id == CHIP_ID_YUKON_XL - || hw->chip_id == CHIP_ID_YUKON_EC_U - || hw->chip_id == CHIP_ID_YUKON_EX)) { + && (hw->flags & SKY2_HW_NEWER_PHY)) { /* set downshift counter to 3x and enable downshift */ ctrl &= 
~PHY_M_PC_DSC_MSK; ctrl |= PHY_M_PC_DSC(2) | PHY_M_PC_DOWN_S_ENA; @@ -364,7 +373,7 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ctrl); /* special setup for PHY 88E1112 Fiber */ - if (hw->chip_id == CHIP_ID_YUKON_XL && !sky2_is_copper(hw)) { + if (hw->chip_id == CHIP_ID_YUKON_XL && (hw->flags & SKY2_HW_FIBRE_PHY)) { pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); /* Fiber: select 1000BASE-X only mode MAC Specific Ctrl Reg. */ @@ -455,7 +464,7 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) gma_write16(hw, port, GM_GP_CTRL, reg); - if (hw->chip_id != CHIP_ID_YUKON_FE) + if (hw->flags & SKY2_HW_GIGABIT) gm_phy_write(hw, port, PHY_MARV_1000T_CTRL, ct1000); gm_phy_write(hw, port, PHY_MARV_AUNE_ADV, adv); @@ -479,6 +488,23 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) gm_phy_write(hw, port, PHY_MARV_FE_LED_PAR, ctrl); break; + case CHIP_ID_YUKON_FE_P: + /* Enable Link Partner Next Page */ + ctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL); + ctrl |= PHY_M_PC_ENA_LIP_NP; + + /* disable Energy Detect and enable scrambler */ + ctrl &= ~(PHY_M_PC_ENA_ENE_DT | PHY_M_PC_DIS_SCRAMB); + gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ctrl); + + /* set LED2 -> ACT, LED1 -> LINK, LED0 -> SPEED */ + ctrl = PHY_M_FELP_LED2_CTRL(LED_PAR_CTRL_ACT_BL) | + PHY_M_FELP_LED1_CTRL(LED_PAR_CTRL_LINK) | + PHY_M_FELP_LED0_CTRL(LED_PAR_CTRL_SPEED); + + gm_phy_write(hw, port, PHY_MARV_FE_LED_PAR, ctrl); + break; + case CHIP_ID_YUKON_XL: pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); @@ -548,7 +574,13 @@ static void sky2_phy_init(struct sky2_hw *hw, unsigned port) /* set page register to 0 */ gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 0); + } else if (hw->chip_id == CHIP_ID_YUKON_FE_P && + hw->chip_rev == CHIP_REV_YU_FE2_A0) { + /* apply workaround for integrated resistors calibration */ + gm_phy_write(hw, port, PHY_MARV_PAGE_ADDR, 17); + gm_phy_write(hw, port, PHY_MARV_PAGE_DATA, 0x3f60); } else if (hw->chip_id != CHIP_ID_YUKON_EX) { + /* no effect on Yukon-XL */ gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl); if (sky2->autoneg == AUTONEG_DISABLE || sky2->speed == SPEED_100) { @@ -669,25 +701,25 @@ static void sky2_wol_init(struct sky2_port *sky2) static void sky2_set_tx_stfwd(struct sky2_hw *hw, unsigned port) { - if (hw->chip_id == CHIP_ID_YUKON_EX && hw->chip_rev != CHIP_REV_YU_EX_A0) { + struct net_device *dev = hw->dev[port]; + + if (dev->mtu <= ETH_DATA_LEN) sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), - TX_STFW_ENA | - (hw->dev[port]->mtu > ETH_DATA_LEN) ? 
TX_JUMBO_ENA : TX_JUMBO_DIS);
-	} else {
-		if (hw->dev[port]->mtu > ETH_DATA_LEN) {
-			/* set Tx GMAC FIFO Almost Empty Threshold */
-			sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR),
-				     (ECU_JUMBO_WM << 16) | ECU_AE_THR);
+			     TX_JUMBO_DIS | TX_STFW_ENA);
-			sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
-				     TX_JUMBO_ENA | TX_STFW_DIS);
+	else if (hw->chip_id != CHIP_ID_YUKON_EC_U)
+		sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+			     TX_STFW_ENA | TX_JUMBO_ENA);
+	else {
+		/* set Tx GMAC FIFO Almost Empty Threshold */
+		sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR),
+			     (ECU_JUMBO_WM << 16) | ECU_AE_THR);
-			/* Can't do offload because of lack of store/forward */
-			hw->dev[port]->features &= ~(NETIF_F_TSO | NETIF_F_SG
-						     | NETIF_F_ALL_CSUM);
-		} else
-			sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
-				     TX_JUMBO_DIS | TX_STFW_ENA);
+		sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+			     TX_JUMBO_ENA | TX_STFW_DIS);
+
+		/* Can't do offload because of lack of store/forward */
+		dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | NETIF_F_ALL_CSUM);
	}
}

@@ -773,7 +805,8 @@ static void sky2_mac_init(struct sky2_hw *hw, unsigned port)

	/* Configure Rx MAC FIFO */
	sky2_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_CLR);
	rx_reg = GMF_OPER_ON | GMF_RX_F_FL_ON;
-	if (hw->chip_id == CHIP_ID_YUKON_EX)
+	if (hw->chip_id == CHIP_ID_YUKON_EX ||
+	    hw->chip_id == CHIP_ID_YUKON_FE_P)
		rx_reg |= GMF_RX_OVER_ON;

	sky2_write32(hw, SK_REG(port, RX_GMF_CTRL_T), rx_reg);
@@ -782,13 +815,19 @@ static void sky2_mac_init(struct sky2_hw *hw, unsigned port)
	sky2_write16(hw, SK_REG(port, RX_GMF_FL_MSK), GMR_FS_ANY_ERR);

	/* Set threshold to 0xa (64 bytes) + 1 to workaround pause bug */
-	sky2_write16(hw, SK_REG(port, RX_GMF_FL_THR), RX_GMF_FL_THR_DEF+1);
+	reg = RX_GMF_FL_THR_DEF + 1;
+	/* Another magic mystery workaround from sk98lin */
+	if (hw->chip_id == CHIP_ID_YUKON_FE_P &&
+	    hw->chip_rev == CHIP_REV_YU_FE2_A0)
+		reg = 0x178;
+	sky2_write16(hw, SK_REG(port, RX_GMF_FL_THR), reg);

	/* Configure Tx MAC FIFO */
	sky2_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR);
	sky2_write16(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_OPER_ON);

-	if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) {
+	/* On chips without ram buffer, pause is controlled by MAC level */
+	if (sky2_read8(hw, B2_E_0) == 0) {
		sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8);
		sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8);

@@ -871,6 +910,20 @@ static inline struct sky2_tx_le *get_tx_le(struct sky2_port *sky2)
	return le;
}

+static void tx_init(struct sky2_port *sky2)
+{
+	struct sky2_tx_le *le;
+
+	sky2->tx_prod = sky2->tx_cons = 0;
+	sky2->tx_tcpsum = 0;
+	sky2->tx_last_mss = 0;
+
+	le = get_tx_le(sky2);
+	le->addr = 0;
+	le->opcode = OP_ADDR64 | HW_OWNER;
+	sky2->tx_addr64 = 0;
+}
+
static inline struct tx_ring_info *tx_le_re(struct sky2_port *sky2,
					    struct sky2_tx_le *le)
{
@@ -967,19 +1020,15 @@ static void sky2_rx_unmap_skb(struct pci_dev *pdev, struct rx_ring_info *re)
 */
static void rx_set_checksum(struct sky2_port *sky2)
{
-	struct sky2_rx_le *le;
-
-	if (sky2->hw->chip_id != CHIP_ID_YUKON_EX) {
-		le = sky2_next_rx(sky2);
-		le->addr = cpu_to_le32((ETH_HLEN << 16) | ETH_HLEN);
-		le->ctrl = 0;
-		le->opcode = OP_TCPSTART | HW_OWNER;
+	struct sky2_rx_le *le = sky2_next_rx(sky2);

-		sky2_write32(sky2->hw,
-			     Q_ADDR(rxqaddr[sky2->port], Q_CSR),
-			     sky2->rx_csum ?
BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM); - } + le->addr = cpu_to_le32((ETH_HLEN << 16) | ETH_HLEN); + le->ctrl = 0; + le->opcode = OP_TCPSTART | HW_OWNER; + sky2_write32(sky2->hw, + Q_ADDR(rxqaddr[sky2->port], Q_CSR), + sky2->rx_csum ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM); } /* @@ -1175,7 +1224,8 @@ static int sky2_rx_start(struct sky2_port *sky2) sky2_prefetch_init(hw, rxq, sky2->rx_le_map, RX_LE_SIZE - 1); - rx_set_checksum(sky2); + if (!(hw->flags & SKY2_HW_NEW_LE)) + rx_set_checksum(sky2); /* Space needed for frame data + headers rounded up */ size = roundup(sky2->netdev->mtu + ETH_HLEN + VLAN_HLEN, 8); @@ -1246,7 +1296,7 @@ static int sky2_up(struct net_device *dev) struct sky2_port *sky2 = netdev_priv(dev); struct sky2_hw *hw = sky2->hw; unsigned port = sky2->port; - u32 ramsize, imask; + u32 imask, ramsize; int cap, err = -ENOMEM; struct net_device *otherdev = hw->dev[sky2->port^1]; @@ -1284,7 +1334,8 @@ static int sky2_up(struct net_device *dev) GFP_KERNEL); if (!sky2->tx_ring) goto err_out; - sky2->tx_prod = sky2->tx_cons = 0; + + tx_init(sky2); sky2->rx_le = pci_alloc_consistent(hw->pdev, RX_LE_BYTES, &sky2->rx_le_map); @@ -1303,11 +1354,10 @@ static int sky2_up(struct net_device *dev) /* Register is number of 4K blocks on internal RAM buffer. */ ramsize = sky2_read8(hw, B2_E_0) * 4; - printk(KERN_INFO PFX "%s: ram buffer %dK\n", dev->name, ramsize); - if (ramsize > 0) { u32 rxspace; + pr_debug(PFX "%s: ram buffer %dK\n", dev->name, ramsize); if (ramsize < 16) rxspace = ramsize / 2; else @@ -1436,13 +1486,15 @@ static int sky2_xmit_frame(struct sk_buff *skb, struct net_device *dev) /* Check for TCP Segmentation Offload */ mss = skb_shinfo(skb)->gso_size; if (mss != 0) { - if (hw->chip_id != CHIP_ID_YUKON_EX) + + if (!(hw->flags & SKY2_HW_NEW_LE)) mss += ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb); if (mss != sky2->tx_last_mss) { le = get_tx_le(sky2); le->addr = cpu_to_le32(mss); - if (hw->chip_id == CHIP_ID_YUKON_EX) + + if (hw->flags & SKY2_HW_NEW_LE) le->opcode = OP_MSS | HW_OWNER; else le->opcode = OP_LRGLEN | HW_OWNER; @@ -1468,8 +1520,7 @@ static int sky2_xmit_frame(struct sk_buff *skb, struct net_device *dev) /* Handle TCP checksum offload */ if (skb->ip_summed == CHECKSUM_PARTIAL) { /* On Yukon EX (some versions) encoding change. */ - if (hw->chip_id == CHIP_ID_YUKON_EX - && hw->chip_rev != CHIP_REV_YU_EX_B0) + if (hw->flags & SKY2_HW_AUTO_TX_SUM) ctrl |= CALSUM; /* auto checksum */ else { const unsigned offset = skb_transport_offset(skb); @@ -1622,9 +1673,6 @@ static int sky2_down(struct net_device *dev) if (netif_msg_ifdown(sky2)) printk(KERN_INFO PFX "%s: disabling interface\n", dev->name); - if (netif_carrier_ok(dev) && --hw->active == 0) - del_timer(&hw->watchdog_timer); - /* Stop more packets from being queued */ netif_stop_queue(dev); @@ -1708,11 +1756,15 @@ static int sky2_down(struct net_device *dev) static u16 sky2_phy_speed(const struct sky2_hw *hw, u16 aux) { - if (!sky2_is_copper(hw)) + if (hw->flags & SKY2_HW_FIBRE_PHY) return SPEED_1000; - if (hw->chip_id == CHIP_ID_YUKON_FE) - return (aux & PHY_M_PS_SPEED_100) ? 
SPEED_100 : SPEED_10; + if (!(hw->flags & SKY2_HW_GIGABIT)) { + if (aux & PHY_M_PS_SPEED_100) + return SPEED_100; + else + return SPEED_10; + } switch (aux & PHY_M_PS_SPEED_MSK) { case PHY_M_PS_SPEED_1000: @@ -1745,17 +1797,13 @@ static void sky2_link_up(struct sky2_port *sky2) netif_carrier_on(sky2->netdev); - if (hw->active++ == 0) - mod_timer(&hw->watchdog_timer, jiffies + 1); - + mod_timer(&hw->watchdog_timer, jiffies + 1); /* Turn on link LED */ sky2_write8(hw, SK_REG(port, LNK_LED_REG), LINKLED_ON | LINKLED_BLINK_OFF | LINKLED_LINKSYNC_OFF); - if (hw->chip_id == CHIP_ID_YUKON_XL - || hw->chip_id == CHIP_ID_YUKON_EC_U - || hw->chip_id == CHIP_ID_YUKON_EX) { + if (hw->flags & SKY2_HW_NEWER_PHY) { u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); u16 led = PHY_M_LEDC_LOS_CTRL(1); /* link active */ @@ -1800,11 +1848,6 @@ static void sky2_link_down(struct sky2_port *sky2) netif_carrier_off(sky2->netdev); - /* Stop watchdog if both ports are not active */ - if (--hw->active == 0) - del_timer(&hw->watchdog_timer); - - /* Turn on link LED */ sky2_write8(hw, SK_REG(port, LNK_LED_REG), LINKLED_OFF); @@ -1847,7 +1890,7 @@ static int sky2_autoneg_done(struct sky2_port *sky2, u16 aux) /* Since the pause result bits seem to in different positions on * different chips. look at registers. */ - if (!sky2_is_copper(hw)) { + if (hw->flags & SKY2_HW_FIBRE_PHY) { /* Shift for bits in fiber PHY */ advert &= ~(ADVERTISE_PAUSE_CAP|ADVERTISE_PAUSE_ASYM); lpa &= ~(LPA_PAUSE_CAP|LPA_PAUSE_ASYM); @@ -1958,7 +2001,9 @@ static int sky2_change_mtu(struct net_device *dev, int new_mtu) if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU) return -EINVAL; - if (new_mtu > ETH_DATA_LEN && hw->chip_id == CHIP_ID_YUKON_FE) + if (new_mtu > ETH_DATA_LEN && + (hw->chip_id == CHIP_ID_YUKON_FE || + hw->chip_id == CHIP_ID_YUKON_FE_P)) return -EINVAL; if (!netif_running(dev)) { @@ -1975,7 +2020,7 @@ static int sky2_change_mtu(struct net_device *dev, int new_mtu) synchronize_irq(hw->pdev->irq); - if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) + if (sky2_read8(hw, B2_E_0) == 0) sky2_set_tx_stfwd(hw, port); ctl = gma_read16(hw, port, GM_GP_CTRL); @@ -2103,6 +2148,13 @@ static struct sk_buff *sky2_receive(struct net_device *dev, struct sky2_port *sky2 = netdev_priv(dev); struct rx_ring_info *re = sky2->rx_ring + sky2->rx_next; struct sk_buff *skb = NULL; + u16 count = (status & GMR_FS_LEN) >> 16; + +#ifdef SKY2_VLAN_TAG_USED + /* Account for vlan tag */ + if (sky2->vlgrp && (status & GMR_FS_VLAN)) + count -= VLAN_HLEN; +#endif if (unlikely(netif_msg_rx_status(sky2))) printk(KERN_DEBUG PFX "%s: rx slot %u status 0x%x len %d\n", @@ -2111,15 +2163,29 @@ static struct sk_buff *sky2_receive(struct net_device *dev, sky2->rx_next = (sky2->rx_next + 1) % sky2->rx_pending; prefetch(sky2->rx_ring + sky2->rx_next); + if (length < ETH_ZLEN || length > sky2->rx_data_size) + goto len_error; + + /* This chip has hardware problems that generates bogus status. + * So do only marginal checking and expect higher level protocols + * to handle crap frames. 
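Pulled together, the scattered receive checks above reduce to the following acceptance sequence (a consolidated sketch using the driver's names, not a literal hunk; 'count' is the length from the MAC status word, less VLAN_HLEN when a tag was stripped):

	if (length < ETH_ZLEN || length > sky2->rx_data_size)
		goto len_error;			/* bogus on any chip */

	if (sky2->hw->chip_id == CHIP_ID_YUKON_FE_P &&
	    sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0 &&
	    length != count)
		goto okay;			/* FE+ A0 status word is unreliable */

	if (status & GMR_FS_ANY_ERR)
		goto error;
	if (!(status & GMR_FS_RX_OK))
		goto resubmit;
	if (length != count)
		goto len_error;			/* DMA and PHY lengths disagree */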
+ */ + if (sky2->hw->chip_id == CHIP_ID_YUKON_FE_P && + sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0 && + length != count) + goto okay; + if (status & GMR_FS_ANY_ERR) goto error; if (!(status & GMR_FS_RX_OK)) goto resubmit; - if (status >> 16 != length) - goto len_mismatch; + /* if length reported by DMA does not match PHY, packet was truncated */ + if (length != count) + goto len_error; +okay: if (length < copybreak) skb = receive_copy(sky2, re, length); else @@ -2129,10 +2195,14 @@ resubmit: return skb; -len_mismatch: +len_error: /* Truncation of overlength packets causes PHY length to not match MAC length */ ++sky2->net_stats.rx_length_errors; + if (netif_msg_rx_err(sky2) && net_ratelimit()) + pr_info(PFX "%s: rx length error: status %#x length %d\n", + dev->name, status, length); + goto resubmit; error: ++sky2->net_stats.rx_errors; @@ -2202,7 +2272,7 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do) } /* This chip reports checksum status differently */ - if (hw->chip_id == CHIP_ID_YUKON_EX) { + if (hw->flags & SKY2_HW_NEW_LE) { if (sky2->rx_csum && (le->css & (CSS_ISIPV4 | CSS_ISIPV6)) && (le->css & CSS_TCPUDPCSOK)) @@ -2243,8 +2313,14 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do) if (!sky2->rx_csum) break; - if (hw->chip_id == CHIP_ID_YUKON_EX) + /* If this happens then driver assuming wrong format */ + if (unlikely(hw->flags & SKY2_HW_NEW_LE)) { + if (net_ratelimit()) + printk(KERN_NOTICE "%s: unexpected" + " checksum status\n", + dev->name); break; + } /* Both checksum counters are programmed to start at * the same offset, so unless there is a problem they @@ -2436,20 +2512,72 @@ static void sky2_le_error(struct sky2_hw *hw, unsigned port, sky2_write32(hw, Q_ADDR(q, Q_CSR), BMU_CLR_IRQ_CHK); } -/* Check for lost IRQ once a second */ +static int sky2_rx_hung(struct net_device *dev) +{ + struct sky2_port *sky2 = netdev_priv(dev); + struct sky2_hw *hw = sky2->hw; + unsigned port = sky2->port; + unsigned rxq = rxqaddr[port]; + u32 mac_rp = sky2_read32(hw, SK_REG(port, RX_GMF_RP)); + u8 mac_lev = sky2_read8(hw, SK_REG(port, RX_GMF_RLEV)); + u8 fifo_rp = sky2_read8(hw, Q_ADDR(rxq, Q_RP)); + u8 fifo_lev = sky2_read8(hw, Q_ADDR(rxq, Q_RL)); + + /* If idle and MAC or PCI is stuck */ + if (sky2->check.last == dev->last_rx && + ((mac_rp == sky2->check.mac_rp && + mac_lev != 0 && mac_lev >= sky2->check.mac_lev) || + /* Check if the PCI RX hang */ + (fifo_rp == sky2->check.fifo_rp && + fifo_lev != 0 && fifo_lev >= sky2->check.fifo_lev))) { + printk(KERN_DEBUG PFX "%s: hung mac %d:%d fifo %d (%d:%d)\n", + dev->name, mac_lev, mac_rp, fifo_lev, fifo_rp, + sky2_read8(hw, Q_ADDR(rxq, Q_WP))); + return 1; + } else { + sky2->check.last = dev->last_rx; + sky2->check.mac_rp = mac_rp; + sky2->check.mac_lev = mac_lev; + sky2->check.fifo_rp = fifo_rp; + sky2->check.fifo_lev = fifo_lev; + return 0; + } +} + static void sky2_watchdog(unsigned long arg) { struct sky2_hw *hw = (struct sky2_hw *) arg; + struct net_device *dev; + /* Check for lost IRQ once a second */ if (sky2_read32(hw, B0_ISRC)) { - struct net_device *dev = hw->dev[0]; - + dev = hw->dev[0]; if (__netif_rx_schedule_prep(dev)) __netif_rx_schedule(dev); + } else { + int i, active = 0; + + for (i = 0; i < hw->ports; i++) { + dev = hw->dev[i]; + if (!netif_running(dev)) + continue; + ++active; + + /* For chips with Rx FIFO, check if stuck */ + if ((hw->flags & SKY2_HW_FIFO_HANG_CHECK) && + sky2_rx_hung(dev)) { + pr_info(PFX "%s: receiver hang detected\n", + dev->name); + schedule_work(&hw->restart_work); + return; + } + } + + 
if (active == 0) + return; } - if (hw->active > 0) - mod_timer(&hw->watchdog_timer, round_jiffies(jiffies + HZ)); + mod_timer(&hw->watchdog_timer, round_jiffies(jiffies + HZ)); } /* Hardware/software error handling */ @@ -2546,17 +2674,25 @@ static void sky2_netpoll(struct net_device *dev) #endif /* Chip internal frequency for clock calculations */ -static inline u32 sky2_mhz(const struct sky2_hw *hw) +static u32 sky2_mhz(const struct sky2_hw *hw) { switch (hw->chip_id) { case CHIP_ID_YUKON_EC: case CHIP_ID_YUKON_EC_U: case CHIP_ID_YUKON_EX: - return 125; /* 125 Mhz */ + return 125; + case CHIP_ID_YUKON_FE: - return 100; /* 100 Mhz */ - default: /* YUKON_XL */ - return 156; /* 156 Mhz */ + return 100; + + case CHIP_ID_YUKON_FE_P: + return 50; + + case CHIP_ID_YUKON_XL: + return 156; + + default: + BUG(); } } @@ -2581,23 +2717,63 @@ static int __devinit sky2_init(struct sky2_hw *hw) sky2_write8(hw, B0_CTST, CS_RST_CLR); hw->chip_id = sky2_read8(hw, B2_CHIP_ID); - if (hw->chip_id < CHIP_ID_YUKON_XL || hw->chip_id > CHIP_ID_YUKON_FE) { + hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4; + + switch(hw->chip_id) { + case CHIP_ID_YUKON_XL: + hw->flags = SKY2_HW_GIGABIT + | SKY2_HW_NEWER_PHY; + if (hw->chip_rev < 3) + hw->flags |= SKY2_HW_FIFO_HANG_CHECK; + + break; + + case CHIP_ID_YUKON_EC_U: + hw->flags = SKY2_HW_GIGABIT + | SKY2_HW_NEWER_PHY + | SKY2_HW_ADV_POWER_CTL; + break; + + case CHIP_ID_YUKON_EX: + hw->flags = SKY2_HW_GIGABIT + | SKY2_HW_NEWER_PHY + | SKY2_HW_NEW_LE + | SKY2_HW_ADV_POWER_CTL; + + /* New transmit checksum */ + if (hw->chip_rev != CHIP_REV_YU_EX_B0) + hw->flags |= SKY2_HW_AUTO_TX_SUM; + break; + + case CHIP_ID_YUKON_EC: + /* This rev is really old, and requires untested workarounds */ + if (hw->chip_rev == CHIP_REV_YU_EC_A1) { + dev_err(&hw->pdev->dev, "unsupported revision Yukon-EC rev A1\n"); + return -EOPNOTSUPP; + } + hw->flags = SKY2_HW_GIGABIT | SKY2_HW_FIFO_HANG_CHECK; + break; + + case CHIP_ID_YUKON_FE: + break; + + case CHIP_ID_YUKON_FE_P: + hw->flags = SKY2_HW_NEWER_PHY + | SKY2_HW_NEW_LE + | SKY2_HW_AUTO_TX_SUM + | SKY2_HW_ADV_POWER_CTL; + break; + default: dev_err(&hw->pdev->dev, "unsupported chip type 0x%x\n", hw->chip_id); return -EOPNOTSUPP; } - hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4; + hw->pmd_type = sky2_read8(hw, B2_PMD_TYP); + if (hw->pmd_type == 'L' || hw->pmd_type == 'S' || hw->pmd_type == 'P') + hw->flags |= SKY2_HW_FIBRE_PHY; - /* This rev is really old, and requires untested workarounds */ - if (hw->chip_id == CHIP_ID_YUKON_EC && hw->chip_rev == CHIP_REV_YU_EC_A1) { - dev_err(&hw->pdev->dev, "unsupported revision Yukon-%s (0x%x) rev %d\n", - yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL], - hw->chip_id, hw->chip_rev); - return -EOPNOTSUPP; - } - hw->pmd_type = sky2_read8(hw, B2_PMD_TYP); hw->ports = 1; t8 = sky2_read8(hw, B2_Y2_HW_RES); if ((t8 & CFG_DUAL_MAC_MSK) == CFG_DUAL_MAC_MSK) { @@ -2791,7 +2967,9 @@ static int sky2_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol) sky2->wol = wol->wolopts; - if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) + if (hw->chip_id == CHIP_ID_YUKON_EC_U || + hw->chip_id == CHIP_ID_YUKON_EX || + hw->chip_id == CHIP_ID_YUKON_FE_P) sky2_write32(hw, B0_CTST, sky2->wol ? 
Y2_HW_WOL_ON : Y2_HW_WOL_OFF); @@ -2809,7 +2987,7 @@ static u32 sky2_supported_modes(const struct sky2_hw *hw) | SUPPORTED_100baseT_Full | SUPPORTED_Autoneg | SUPPORTED_TP; - if (hw->chip_id != CHIP_ID_YUKON_FE) + if (hw->flags & SKY2_HW_GIGABIT) modes |= SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full; return modes; @@ -2829,13 +3007,6 @@ static int sky2_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd) ecmd->supported = sky2_supported_modes(hw); ecmd->phy_address = PHY_ADDR_MARV; if (sky2_is_copper(hw)) { - ecmd->supported = SUPPORTED_10baseT_Half - | SUPPORTED_10baseT_Full - | SUPPORTED_100baseT_Half - | SUPPORTED_100baseT_Full - | SUPPORTED_1000baseT_Half - | SUPPORTED_1000baseT_Full - | SUPPORTED_Autoneg | SUPPORTED_TP; ecmd->port = PORT_TP; ecmd->speed = sky2->speed; } else { @@ -3814,8 +3985,12 @@ static __devinit struct net_device *sky2_init_netdev(struct sky2_hw *hw, dev->features |= NETIF_F_HIGHDMA; #ifdef SKY2_VLAN_TAG_USED - dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; - dev->vlan_rx_register = sky2_vlan_rx_register; + /* The workaround for FE+ status conflicts with VLAN tag detection. */ + if (!(sky2->hw->chip_id == CHIP_ID_YUKON_FE_P && + sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0)) { + dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; + dev->vlan_rx_register = sky2_vlan_rx_register; + } #endif /* read the mac address */ @@ -3846,7 +4021,7 @@ static irqreturn_t __devinit sky2_test_intr(int irq, void *dev_id) return IRQ_NONE; if (status & Y2_IS_IRQ_SW) { - hw->msi = 1; + hw->flags |= SKY2_HW_USE_MSI; wake_up(&hw->msi_wait); sky2_write8(hw, B0_CTST, CS_CL_SW_IRQ); } @@ -3874,9 +4049,9 @@ static int __devinit sky2_test_msi(struct sky2_hw *hw) sky2_write8(hw, B0_CTST, CS_ST_SW_IRQ); sky2_read8(hw, B0_CTST); - wait_event_timeout(hw->msi_wait, hw->msi, HZ/10); + wait_event_timeout(hw->msi_wait, (hw->flags & SKY2_HW_USE_MSI), HZ/10); - if (!hw->msi) { + if (!(hw->flags & SKY2_HW_USE_MSI)) { /* MSI test failed, go back to INTx mode */ dev_info(&pdev->dev, "No interrupt generated using MSI, " "switching to INTx mode.\n"); @@ -4009,7 +4184,8 @@ static int __devinit sky2_probe(struct pci_dev *pdev, goto err_out_free_netdev; } - err = request_irq(pdev->irq, sky2_intr, hw->msi ? 0 : IRQF_SHARED, + err = request_irq(pdev->irq, sky2_intr, + (hw->flags & SKY2_HW_USE_MSI) ? 
0 : IRQF_SHARED, dev->name, hw); if (err) { dev_err(&pdev->dev, "cannot assign irq %d\n", pdev->irq); @@ -4042,7 +4218,7 @@ static int __devinit sky2_probe(struct pci_dev *pdev, return 0; err_out_unregister: - if (hw->msi) + if (hw->flags & SKY2_HW_USE_MSI) pci_disable_msi(pdev); unregister_netdev(dev); err_out_free_netdev: @@ -4091,7 +4267,7 @@ static void __devexit sky2_remove(struct pci_dev *pdev) sky2_read8(hw, B0_CTST); free_irq(pdev->irq, hw); - if (hw->msi) + if (hw->flags & SKY2_HW_USE_MSI) pci_disable_msi(pdev); pci_free_consistent(pdev, STATUS_LE_BYTES, hw->st_le, hw->st_dma); pci_release_regions(pdev); @@ -4159,7 +4335,9 @@ static int sky2_resume(struct pci_dev *pdev) pci_enable_wake(pdev, PCI_D0, 0); /* Re-enable all clocks */ - if (hw->chip_id == CHIP_ID_YUKON_EX || hw->chip_id == CHIP_ID_YUKON_EC_U) + if (hw->chip_id == CHIP_ID_YUKON_EX || + hw->chip_id == CHIP_ID_YUKON_EC_U || + hw->chip_id == CHIP_ID_YUKON_FE_P) sky2_pci_write32(hw, PCI_DEV_REG3, 0); sky2_reset(hw); diff --git a/drivers/net/sky2.h b/drivers/net/sky2.h index 72e12b7c..8bc5c54 100644 --- a/drivers/net/sky2.h +++ b/drivers/net/sky2.h @@ -470,18 +470,24 @@ enum { CHIP_ID_YUKON_EX = 0xb5, /* Chip ID for YUKON-2 Extreme */ CHIP_ID_YUKON_EC = 0xb6, /* Chip ID for YUKON-2 EC */ CHIP_ID_YUKON_FE = 0xb7, /* Chip ID for YUKON-2 FE */ - + CHIP_ID_YUKON_FE_P = 0xb8, /* Chip ID for YUKON-2 FE+ */ +}; +enum yukon_ec_rev { CHIP_REV_YU_EC_A1 = 0, /* Chip Rev. for Yukon-EC A1/A0 */ CHIP_REV_YU_EC_A2 = 1, /* Chip Rev. for Yukon-EC A2 */ CHIP_REV_YU_EC_A3 = 2, /* Chip Rev. for Yukon-EC A3 */ - +}; +enum yukon_ec_u_rev { CHIP_REV_YU_EC_U_A0 = 1, CHIP_REV_YU_EC_U_A1 = 2, CHIP_REV_YU_EC_U_B0 = 3, - +}; +enum yukon_fe_rev { CHIP_REV_YU_FE_A1 = 1, CHIP_REV_YU_FE_A2 = 2, - +}; +enum yukon_fe_p_rev { + CHIP_REV_YU_FE2_A0 = 0, }; enum yukon_ex_rev { CHIP_REV_YU_EX_A0 = 1, @@ -1668,7 +1674,7 @@ enum { /* Receive Frame Status Encoding */ enum { - GMR_FS_LEN = 0xffff<<16, /* Bit 31..16: Rx Frame Length */ + GMR_FS_LEN = 0x7fff<<16, /* Bit 30..16: Rx Frame Length */ GMR_FS_VLAN = 1<<13, /* VLAN Packet */ GMR_FS_JABBER = 1<<12, /* Jabber Packet */ GMR_FS_UN_SIZE = 1<<11, /* Undersize Packet */ @@ -1729,6 +1735,10 @@ enum { GMF_RX_CTRL_DEF = GMF_OPER_ON | GMF_RX_F_FL_ON, }; +/* TX_GMF_EA 32 bit Tx GMAC FIFO End Address */ +enum { + TX_DYN_WM_ENA = 3, /* Yukon-FE+ specific */ +}; /* TX_GMF_CTRL_T 32 bit Tx GMAC FIFO Control/Test */ enum { @@ -2017,6 +2027,14 @@ struct sky2_port { u16 rx_tag; struct vlan_group *vlgrp; #endif + struct { + unsigned long last; + u32 mac_rp; + u8 mac_lev; + u8 fifo_rp; + u8 fifo_lev; + } check; + dma_addr_t rx_le_map; dma_addr_t tx_le_map; @@ -2040,12 +2058,20 @@ struct sky2_hw { void __iomem *regs; struct pci_dev *pdev; struct net_device *dev[2]; + unsigned long flags; +#define SKY2_HW_USE_MSI 0x00000001 +#define SKY2_HW_FIBRE_PHY 0x00000002 +#define SKY2_HW_GIGABIT 0x00000004 +#define SKY2_HW_NEWER_PHY 0x00000008 +#define SKY2_HW_FIFO_HANG_CHECK 0x00000010 +#define SKY2_HW_NEW_LE 0x00000020 /* new LSOv2 format */ +#define SKY2_HW_AUTO_TX_SUM 0x00000040 /* new IP decode for Tx */ +#define SKY2_HW_ADV_POWER_CTL 0x00000080 /* additional PHY power regs */ u8 chip_id; u8 chip_rev; u8 pmd_type; u8 ports; - u8 active; struct sky2_status_le *st_le; u32 st_idx; @@ -2053,13 +2079,12 @@ struct sky2_hw { struct timer_list watchdog_timer; struct work_struct restart_work; - int msi; wait_queue_head_t msi_wait; }; static inline int sky2_is_copper(const struct sky2_hw *hw) { - return !(hw->pmd_type == 'L' || hw->pmd_type == 
'S' || hw->pmd_type == 'P'); + return !(hw->flags & SKY2_HW_FIBRE_PHY); } /* Register accessor for memory mapped device */ diff --git a/drivers/net/usb/dm9601.c b/drivers/net/usb/dm9601.c index 16c7a0e..a2de32f 100644 --- a/drivers/net/usb/dm9601.c +++ b/drivers/net/usb/dm9601.c @@ -405,7 +405,7 @@ static int dm9601_bind(struct usbnet *dev, struct usb_interface *intf) dev->net->ethtool_ops = &dm9601_ethtool_ops; dev->net->hard_header_len += DM_TX_OVERHEAD; dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; - dev->rx_urb_size = dev->net->mtu + DM_RX_OVERHEAD; + dev->rx_urb_size = dev->net->mtu + ETH_HLEN + DM_RX_OVERHEAD; dev->mii.dev = dev->net; dev->mii.mdio_read = dm9601_mdio_read; diff --git a/drivers/net/wireless/Makefile b/drivers/net/wireless/Makefile index ef35bc6..4eb6d97 100644 --- a/drivers/net/wireless/Makefile +++ b/drivers/net/wireless/Makefile @@ -43,7 +43,7 @@ obj-$(CONFIG_PCMCIA_RAYCS) += ray_cs.o obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o obj-$(CONFIG_USB_ZD1201) += zd1201.o -obj-$(CONFIG_LIBERTAS_USB) += libertas/ +obj-$(CONFIG_LIBERTAS) += libertas/ rtl8187-objs := rtl8187_dev.o rtl8187_rtl8225.o obj-$(CONFIG_RTL8187) += rtl8187.o diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c index 7dcaa09..50f2dd9 100644 --- a/drivers/pci/quirks.c +++ b/drivers/pci/quirks.c @@ -1444,7 +1444,6 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NETMOS, PCI_ANY_ID, quirk_netmos); static void __devinit quirk_e100_interrupt(struct pci_dev *dev) { u16 command; - u32 bar; u8 __iomem *csr; u8 cmd_hi; @@ -1476,12 +1475,12 @@ static void __devinit quirk_e100_interrupt(struct pci_dev *dev) * re-enable them when it's ready. */ pci_read_config_word(dev, PCI_COMMAND, &command); - pci_read_config_dword(dev, PCI_BASE_ADDRESS_0, &bar); - if (!(command & PCI_COMMAND_MEMORY) || !bar) + if (!(command & PCI_COMMAND_MEMORY) || !pci_resource_start(dev, 0)) return; - csr = ioremap(bar, 8); + /* Convert from PCI bus to resource space. 
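The old code handed a raw BAR value to ioremap(); on machines where PCI bus addresses are not identity-mapped to CPU physical addresses that is wrong, because ioremap() expects the kernel's resource view. The corrected pattern on its own, as a minimal sketch (error handling trimmed):

	void __iomem *csr;

	csr = ioremap(pci_resource_start(dev, 0), 8);	/* CPU-visible address */
	if (csr) {
		/* ... read/modify the e100 CSR ... */
		iounmap(csr);
	}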
*/ + csr = ioremap(pci_resource_start(dev, 0), 8); if (!csr) { printk(KERN_WARNING "PCI: Can't map %s e100 registers\n", pci_name(dev)); diff --git a/drivers/power/power_supply_sysfs.c b/drivers/power/power_supply_sysfs.c index c7c4574..de3155b 100644 --- a/drivers/power/power_supply_sysfs.c +++ b/drivers/power/power_supply_sysfs.c @@ -289,6 +289,7 @@ int power_supply_uevent(struct device *dev, char **envp, int num_envp, if (ret) goto out; } + envp[i] = NULL; out: free_page((unsigned long)prop_buf); diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c index d5d8cab..ab13824 100644 --- a/drivers/scsi/aic94xx/aic94xx_task.c +++ b/drivers/scsi/aic94xx/aic94xx_task.c @@ -451,7 +451,7 @@ static int asd_build_smp_ascb(struct asd_ascb *ascb, struct sas_task *task, struct scb *scb; pci_map_sg(asd_ha->pcidev, &task->smp_task.smp_req, 1, - PCI_DMA_FROMDEVICE); + PCI_DMA_TODEVICE); pci_map_sg(asd_ha->pcidev, &task->smp_task.smp_resp, 1, PCI_DMA_FROMDEVICE); @@ -486,7 +486,7 @@ static void asd_unbuild_smp_ascb(struct asd_ascb *a) BUG_ON(!task); pci_unmap_sg(a->ha->pcidev, &task->smp_task.smp_req, 1, - PCI_DMA_FROMDEVICE); + PCI_DMA_TODEVICE); pci_unmap_sg(a->ha->pcidev, &task->smp_task.smp_resp, 1, PCI_DMA_FROMDEVICE); } diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c index 77b06a9..95cf7b6 100644 --- a/drivers/scsi/esp_scsi.c +++ b/drivers/scsi/esp_scsi.c @@ -2314,6 +2314,7 @@ int __devinit scsi_esp_register(struct esp *esp, struct device *dev) esp->host->transportt = esp_transport_template; esp->host->max_lun = ESP_MAX_LUN; esp->host->cmd_per_lun = 2; + esp->host->unique_id = instance; esp_set_clock_params(esp); @@ -2337,7 +2338,7 @@ int __devinit scsi_esp_register(struct esp *esp, struct device *dev) if (err) return err; - esp->host->unique_id = instance++; + instance++; scsi_scan_host(esp->host); diff --git a/drivers/scsi/scsi_transport_spi.c b/drivers/scsi/scsi_transport_spi.c index 6f56f87..4df21c9 100644 --- a/drivers/scsi/scsi_transport_spi.c +++ b/drivers/scsi/scsi_transport_spi.c @@ -787,10 +787,12 @@ spi_dv_device_internal(struct scsi_device *sdev, u8 *buffer) struct scsi_target *starget = sdev->sdev_target; struct Scsi_Host *shost = sdev->host; int len = sdev->inquiry_len; + int min_period = spi_min_period(starget); + int max_width = spi_max_width(starget); /* first set us up for narrow async */ DV_SET(offset, 0); DV_SET(width, 0); - + if (spi_dv_device_compare_inquiry(sdev, buffer, buffer, DV_LOOPS) != SPI_COMPARE_SUCCESS) { starget_printk(KERN_ERR, starget, "Domain Validation Initial Inquiry Failed\n"); @@ -798,9 +800,13 @@ spi_dv_device_internal(struct scsi_device *sdev, u8 *buffer) return; } + if (!scsi_device_wide(sdev)) { + spi_max_width(starget) = 0; + max_width = 0; + } + /* test width */ - if (i->f->set_width && spi_max_width(starget) && - scsi_device_wide(sdev)) { + if (i->f->set_width && max_width) { i->f->set_width(starget, 1); if (spi_dv_device_compare_inquiry(sdev, buffer, @@ -809,6 +815,11 @@ spi_dv_device_internal(struct scsi_device *sdev, u8 *buffer) != SPI_COMPARE_SUCCESS) { starget_printk(KERN_ERR, starget, "Wide Transfers Fail\n"); i->f->set_width(starget, 0); + /* Make sure we don't force wide back on by asking + * for a transfer period that requires it */ + max_width = 0; + if (min_period < 10) + min_period = 10; } } @@ -828,7 +839,8 @@ spi_dv_device_internal(struct scsi_device *sdev, u8 *buffer) /* now set up to the maximum */ DV_SET(offset, spi_max_offset(starget)); - DV_SET(period, spi_min_period(starget)); + 
DV_SET(period, min_period); + /* try QAS requests; this should be harmless to set if the * target supports it */ if (scsi_device_qas(sdev)) { @@ -837,14 +849,14 @@ spi_dv_device_internal(struct scsi_device *sdev, u8 *buffer) DV_SET(qas, 0); } - if (scsi_device_ius(sdev) && spi_min_period(starget) < 9) { + if (scsi_device_ius(sdev) && min_period < 9) { /* This u320 (or u640). Set IU transfers */ DV_SET(iu, 1); /* Then set the optional parameters */ DV_SET(rd_strm, 1); DV_SET(wr_flow, 1); DV_SET(rti, 1); - if (spi_min_period(starget) == 8) + if (min_period == 8) DV_SET(pcomp_en, 1); } else { DV_SET(iu, 0); @@ -862,6 +874,10 @@ spi_dv_device_internal(struct scsi_device *sdev, u8 *buffer) } else { DV_SET(dt, 1); } + /* set width last because it will pull all the other + * parameters down to required values */ + DV_SET(width, max_width); + /* Do the read only INQUIRY tests */ spi_dv_retrain(sdev, buffer, buffer + sdev->inquiry_len, spi_dv_device_compare_inquiry); diff --git a/drivers/serial/cpm_uart/cpm_uart_cpm1.h b/drivers/serial/cpm_uart/cpm_uart_cpm1.h index a99e45e..2a64778 100644 --- a/drivers/serial/cpm_uart/cpm_uart_cpm1.h +++ b/drivers/serial/cpm_uart/cpm_uart_cpm1.h @@ -37,6 +37,6 @@ static inline void cpm_set_smc_fcr(volatile smc_uart_t * up) up->smc_tfcr = SMC_EB; } -#define DPRAM_BASE ((unsigned char *)&cpmp->cp_dpmem[0]) +#define DPRAM_BASE ((unsigned char *)cpm_dpram_addr(0)) #endif diff --git a/drivers/serial/sunsab.c b/drivers/serial/sunsab.c index e348ba6..ff610c2 100644 --- a/drivers/serial/sunsab.c +++ b/drivers/serial/sunsab.c @@ -38,7 +38,7 @@ #include <asm/prom.h> #include <asm/of_device.h> -#if defined(CONFIG_SERIAL_SUNZILOG_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) +#if defined(CONFIG_SERIAL_SUNSAB_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) #define SUPPORT_SYSRQ #endif diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c index 8d7ab74..a593f90 100644 --- a/drivers/w1/w1.c +++ b/drivers/w1/w1.c @@ -431,6 +431,7 @@ static int w1_uevent(struct device *dev, char **envp, int num_envp, err = add_uevent_var(envp, num_envp, &cur_index, buffer, buffer_size, &cur_len, "W1_SLAVE_ID=%024LX", (unsigned long long)sl->reg_num.id); + envp[cur_index] = NULL; if (err) return err; diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c index 5a5b711..37310b0 100644 --- a/fs/compat_ioctl.c +++ b/fs/compat_ioctl.c @@ -3190,6 +3190,8 @@ COMPATIBLE_IOCTL(SIOCSIWRETRY) COMPATIBLE_IOCTL(SIOCGIWRETRY) COMPATIBLE_IOCTL(SIOCSIWPOWER) COMPATIBLE_IOCTL(SIOCGIWPOWER) +COMPATIBLE_IOCTL(SIOCSIWAUTH) +COMPATIBLE_IOCTL(SIOCGIWAUTH) /* hiddev */ COMPATIBLE_IOCTL(HIDIOCGVERSION) COMPATIBLE_IOCTL(HIDIOCAPPLICATION) @@ -50,7 +50,6 @@ #include <linux/tsacct_kern.h> #include <linux/cn_proc.h> #include <linux/audit.h> -#include <linux/signalfd.h> #include <asm/uaccess.h> #include <asm/mmu_context.h> @@ -784,7 +783,6 @@ static int de_thread(struct task_struct *tsk) * and we can just re-use it all. */ if (atomic_read(&oldsighand->count) <= 1) { - signalfd_detach(tsk); exit_itimers(sig); return 0; } @@ -923,7 +921,6 @@ static int de_thread(struct task_struct *tsk) sig->flags = 0; no_thread_group: - signalfd_detach(tsk); exit_itimers(sig); if (leader) release_task(leader); diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c index a21e4bc..d098c7a 100644 --- a/fs/lockd/svclock.c +++ b/fs/lockd/svclock.c @@ -171,19 +171,14 @@ found: * GRANTED_RES message by cookie, without having to rely on the client's IP * address. 
--okir */ -static inline struct nlm_block * -nlmsvc_create_block(struct svc_rqst *rqstp, struct nlm_file *file, - struct nlm_lock *lock, struct nlm_cookie *cookie) +static struct nlm_block * +nlmsvc_create_block(struct svc_rqst *rqstp, struct nlm_host *host, + struct nlm_file *file, struct nlm_lock *lock, + struct nlm_cookie *cookie) { struct nlm_block *block; - struct nlm_host *host; struct nlm_rqst *call = NULL; - /* Create host handle for callback */ - host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len); - if (host == NULL) - return NULL; - call = nlm_alloc_call(host); if (call == NULL) return NULL; @@ -366,6 +361,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file, struct nlm_lock *lock, int wait, struct nlm_cookie *cookie) { struct nlm_block *block = NULL; + struct nlm_host *host; int error; __be32 ret; @@ -377,6 +373,10 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file, (long long)lock->fl.fl_end, wait); + /* Create host handle for callback */ + host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len); + if (host == NULL) + return nlm_lck_denied_nolocks; /* Lock file against concurrent access */ mutex_lock(&file->f_mutex); @@ -385,7 +385,8 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file, */ block = nlmsvc_lookup_block(file, lock); if (block == NULL) { - block = nlmsvc_create_block(rqstp, file, lock, cookie); + block = nlmsvc_create_block(rqstp, nlm_get_host(host), file, + lock, cookie); ret = nlm_lck_denied_nolocks; if (block == NULL) goto out; @@ -449,6 +450,7 @@ nlmsvc_lock(struct svc_rqst *rqstp, struct nlm_file *file, out: mutex_unlock(&file->f_mutex); nlmsvc_release_block(block); + nlm_release_host(host); dprintk("lockd: nlmsvc_lock returned %u\n", ret); return ret; } @@ -477,10 +479,15 @@ nlmsvc_testlock(struct svc_rqst *rqstp, struct nlm_file *file, if (block == NULL) { struct file_lock *conf = kzalloc(sizeof(*conf), GFP_KERNEL); + struct nlm_host *host; if (conf == NULL) return nlm_granted; - block = nlmsvc_create_block(rqstp, file, lock, cookie); + /* Create host handle for callback */ + host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len); + if (host == NULL) + return nlm_lck_denied_nolocks; + block = nlmsvc_create_block(rqstp, host, file, lock, cookie); if (block == NULL) { kfree(conf); return nlm_granted; diff --git a/fs/nfs/client.c b/fs/nfs/client.c index a49f9fe..a204484 100644 --- a/fs/nfs/client.c +++ b/fs/nfs/client.c @@ -588,16 +588,6 @@ static int nfs_init_server(struct nfs_server *server, const struct nfs_mount_dat server->namelen = data->namlen; /* Create a client RPC handle for the NFSv3 ACL management interface */ nfs_init_server_aclclient(server); - if (clp->cl_nfsversion == 3) { - if (server->namelen == 0 || server->namelen > NFS3_MAXNAMLEN) - server->namelen = NFS3_MAXNAMLEN; - if (!(data->flags & NFS_MOUNT_NORDIRPLUS)) - server->caps |= NFS_CAP_READDIRPLUS; - } else { - if (server->namelen == 0 || server->namelen > NFS2_MAXNAMLEN) - server->namelen = NFS2_MAXNAMLEN; - } - dprintk("<-- nfs_init_server() = 0 [new %p]\n", clp); return 0; @@ -794,6 +784,16 @@ struct nfs_server *nfs_create_server(const struct nfs_mount_data *data, error = nfs_probe_fsinfo(server, mntfh, &fattr); if (error < 0) goto error; + if (server->nfs_client->rpc_ops->version == 3) { + if (server->namelen == 0 || server->namelen > NFS3_MAXNAMLEN) + server->namelen = NFS3_MAXNAMLEN; + if (!(data->flags & NFS_MOUNT_NORDIRPLUS)) + server->caps |= NFS_CAP_READDIRPLUS; + } else { + if (server->namelen == 0 || server->namelen > NFS2_MAXNAMLEN) + 
server->namelen = NFS2_MAXNAMLEN; + } + if (!(fattr.valid & NFS_ATTR_FATTR)) { error = server->nfs_client->rpc_ops->getattr(server, mntfh, &fattr); if (error < 0) { @@ -984,6 +984,9 @@ struct nfs_server *nfs4_create_server(const struct nfs4_mount_data *data, if (error < 0) goto error; + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN) + server->namelen = NFS4_MAXNAMLEN; + BUG_ON(!server->nfs_client); BUG_ON(!server->nfs_client->rpc_ops); BUG_ON(!server->nfs_client->rpc_ops->file_inode_ops); @@ -1056,6 +1059,9 @@ struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *data, if (error < 0) goto error; + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN) + server->namelen = NFS4_MAXNAMLEN; + dprintk("Referral FSID: %llx:%llx\n", (unsigned long long) server->fsid.major, (unsigned long long) server->fsid.minor); @@ -1115,6 +1121,9 @@ struct nfs_server *nfs_clone_server(struct nfs_server *source, if (error < 0) goto out_free_server; + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN) + server->namelen = NFS4_MAXNAMLEN; + dprintk("Cloned FSID: %llx:%llx\n", (unsigned long long) server->fsid.major, (unsigned long long) server->fsid.minor); diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c index ea97408..e4a04d1 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -1162,6 +1162,8 @@ static struct dentry *nfs_readdir_lookup(nfs_readdir_descriptor_t *desc) } if (!desc->plus || !(entry->fattr->valid & NFS_ATTR_FATTR)) return NULL; + if (name.len > NFS_SERVER(dir)->namelen) + return NULL; /* Note: caller is already holding the dir->i_mutex! */ dentry = d_alloc(parent, &name); if (dentry == NULL) diff --git a/fs/nfs/getroot.c b/fs/nfs/getroot.c index d1cbf0a..522e5ad 100644 --- a/fs/nfs/getroot.c +++ b/fs/nfs/getroot.c @@ -175,6 +175,9 @@ next_component: path++; name.len = path - (const char *) name.name; + if (name.len > NFS4_MAXNAMLEN) + return -ENAMETOOLONG; + eat_dot_dir: while (*path == '/') path++; diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index 50cd8a2..f37f25c 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -930,18 +930,11 @@ static void ocfs2_write_failure(struct inode *inode, loff_t user_pos, unsigned user_len) { int i; - unsigned from, to; + unsigned from = user_pos & (PAGE_CACHE_SIZE - 1), + to = user_pos + user_len; struct page *tmppage; - ocfs2_zero_new_buffers(wc->w_target_page, user_pos, user_len); - - if (wc->w_large_pages) { - from = wc->w_target_from; - to = wc->w_target_to; - } else { - from = 0; - to = PAGE_CACHE_SIZE; - } + ocfs2_zero_new_buffers(wc->w_target_page, from, to); for(i = 0; i < wc->w_num_pages; i++) { tmppage = wc->w_pages[i]; @@ -991,9 +984,6 @@ static int ocfs2_prepare_page_for_write(struct inode *inode, u64 *p_blkno, map_from = cluster_start; map_to = cluster_end; } - - wc->w_target_from = map_from; - wc->w_target_to = map_to; } else { /* * If we haven't allocated the new page yet, we @@ -1211,18 +1201,33 @@ static int ocfs2_write_cluster_by_desc(struct address_space *mapping, loff_t pos, unsigned len) { int ret, i; + loff_t cluster_off; + unsigned int local_len = len; struct ocfs2_write_cluster_desc *desc; + struct ocfs2_super *osb = OCFS2_SB(mapping->host->i_sb); for (i = 0; i < wc->w_clen; i++) { desc = &wc->w_desc[i]; + /* + * We have to make sure that the total write passed in + * doesn't extend past a single cluster. 
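A worked example of the clamping (hypothetical numbers, assuming 4 KB clusters): a 3000-byte write starting at offset 3000 crosses a cluster edge, so the first pass issues 1096 bytes up to offset 4096 and the second pass issues the remaining 1904. The general loop shape, as a standalone sketch:

	while (len) {
		unsigned int off = pos & (osb->s_clustersize - 1);
		unsigned int chunk = min_t(unsigned int, len,
					   osb->s_clustersize - off);
		/* ... write 'chunk' bytes at 'pos' ... */
		pos += chunk;
		len -= chunk;
	}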
+ */ + local_len = len; + cluster_off = pos & (osb->s_clustersize - 1); + if ((cluster_off + local_len) > osb->s_clustersize) + local_len = osb->s_clustersize - cluster_off; + ret = ocfs2_write_cluster(mapping, desc->c_phys, desc->c_unwritten, data_ac, meta_ac, - wc, desc->c_cpos, pos, len); + wc, desc->c_cpos, pos, local_len); if (ret) { mlog_errno(ret); goto out; } + + len -= local_len; + pos += local_len; } ret = 0; diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c index 7e34e66..f3bc365 100644 --- a/fs/ocfs2/file.c +++ b/fs/ocfs2/file.c @@ -491,8 +491,8 @@ int ocfs2_do_extend_allocation(struct ocfs2_super *osb, goto leave; } - status = ocfs2_claim_clusters(osb, handle, data_ac, 1, - &bit_off, &num_bits); + status = __ocfs2_claim_clusters(osb, handle, data_ac, 1, + clusters_to_add, &bit_off, &num_bits); if (status < 0) { if (status != -ENOSPC) mlog_errno(status); diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c index 545f789..de984d2 100644 --- a/fs/ocfs2/localalloc.c +++ b/fs/ocfs2/localalloc.c @@ -524,13 +524,12 @@ bail: int ocfs2_claim_local_alloc_bits(struct ocfs2_super *osb, handle_t *handle, struct ocfs2_alloc_context *ac, - u32 min_bits, + u32 bits_wanted, u32 *bit_off, u32 *num_bits) { int status, start; struct inode *local_alloc_inode; - u32 bits_wanted; void *bitmap; struct ocfs2_dinode *alloc; struct ocfs2_local_alloc *la; @@ -538,7 +537,6 @@ int ocfs2_claim_local_alloc_bits(struct ocfs2_super *osb, mlog_entry_void(); BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL); - bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given; local_alloc_inode = ac->ac_inode; alloc = (struct ocfs2_dinode *) osb->local_alloc_bh->b_data; la = OCFS2_LOCAL_ALLOC(alloc); diff --git a/fs/ocfs2/localalloc.h b/fs/ocfs2/localalloc.h index 385a101..3f76631 100644 --- a/fs/ocfs2/localalloc.h +++ b/fs/ocfs2/localalloc.h @@ -48,7 +48,7 @@ int ocfs2_reserve_local_alloc_bits(struct ocfs2_super *osb, int ocfs2_claim_local_alloc_bits(struct ocfs2_super *osb, handle_t *handle, struct ocfs2_alloc_context *ac, - u32 min_bits, + u32 bits_wanted, u32 *bit_off, u32 *num_bits); diff --git a/fs/ocfs2/suballoc.c b/fs/ocfs2/suballoc.c index d9c5c9f..8f09f52 100644 --- a/fs/ocfs2/suballoc.c +++ b/fs/ocfs2/suballoc.c @@ -1486,21 +1486,21 @@ static inline void ocfs2_block_to_cluster_group(struct inode *inode, * contig. allocation, set to '1' to indicate we can deal with extents * of any size. 
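With the max_clusters parameter introduced below, a caller can cap how much a single claim takes from its reservation while still tolerating fragmentation; the extend-allocation path in ocfs2/file.c uses it like this:

	/* take at most clusters_to_add this pass, but accept any
	 * contiguous extent of at least one cluster */
	status = __ocfs2_claim_clusters(osb, handle, data_ac,
					1, clusters_to_add,
					&bit_off, &num_bits);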
 */
-int ocfs2_claim_clusters(struct ocfs2_super *osb,
-			 handle_t *handle,
-			 struct ocfs2_alloc_context *ac,
-			 u32 min_clusters,
-			 u32 *cluster_start,
-			 u32 *num_clusters)
+int __ocfs2_claim_clusters(struct ocfs2_super *osb,
+			   handle_t *handle,
+			   struct ocfs2_alloc_context *ac,
+			   u32 min_clusters,
+			   u32 max_clusters,
+			   u32 *cluster_start,
+			   u32 *num_clusters)
{
	int status;
-	unsigned int bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given;
+	unsigned int bits_wanted = max_clusters;
	u64 bg_blkno = 0;
	u16 bg_bit_off;

	mlog_entry_void();

-	BUG_ON(!ac);
	BUG_ON(ac->ac_bits_given >= ac->ac_bits_wanted);

	BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL
@@ -1557,6 +1557,19 @@ bail:
	return status;
}

+int ocfs2_claim_clusters(struct ocfs2_super *osb,
+			 handle_t *handle,
+			 struct ocfs2_alloc_context *ac,
+			 u32 min_clusters,
+			 u32 *cluster_start,
+			 u32 *num_clusters)
+{
+	unsigned int bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given;
+
+	return __ocfs2_claim_clusters(osb, handle, ac, min_clusters,
+				      bits_wanted, cluster_start, num_clusters);
+}
+
static inline int ocfs2_block_group_clear_bits(handle_t *handle,
					       struct inode *alloc_inode,
					       struct ocfs2_group_desc *bg,
diff --git a/fs/ocfs2/suballoc.h b/fs/ocfs2/suballoc.h
index f212dc0..cafe937 100644
--- a/fs/ocfs2/suballoc.h
+++ b/fs/ocfs2/suballoc.h
@@ -85,6 +85,17 @@ int ocfs2_claim_clusters(struct ocfs2_super *osb,
			 u32 min_clusters,
			 u32 *cluster_start,
			 u32 *num_clusters);
+/*
+ * Use this variant of ocfs2_claim_clusters to specify a maximum
+ * number of clusters smaller than the allocation reserved.
+ */
+int __ocfs2_claim_clusters(struct ocfs2_super *osb,
+			   handle_t *handle,
+			   struct ocfs2_alloc_context *ac,
+			   u32 min_clusters,
+			   u32 max_clusters,
+			   u32 *cluster_start,
+			   u32 *num_clusters);

int ocfs2_free_suballoc_bits(handle_t *handle,
			     struct inode *alloc_inode,
diff --git a/fs/ocfs2/vote.c b/fs/ocfs2/vote.c
index 66a13ee..c053585 100644
--- a/fs/ocfs2/vote.c
+++ b/fs/ocfs2/vote.c
@@ -66,7 +66,7 @@ struct ocfs2_vote_msg {
	struct ocfs2_msg_hdr v_hdr;
	__be32 v_reserved1;
-};
+} __attribute__ ((packed));

/* Responses are given these values to maintain backwards
 * compatibility with older ocfs2 versions */
@@ -78,7 +78,7 @@ struct ocfs2_response_msg {
	struct ocfs2_msg_hdr r_hdr;
	__be32 r_response;
-};
+} __attribute__ ((packed));

struct ocfs2_vote_work {
	struct list_head w_list;
diff --git a/fs/signalfd.c b/fs/signalfd.c
index a8e293d..aefb0be 100644
--- a/fs/signalfd.c
+++ b/fs/signalfd.c
@@ -11,8 +11,10 @@
 *	Now using anonymous inode source.
 *	Thanks to Oleg Nesterov for useful code review and suggestions.
 *	More comments and suggestions from Arnd Bergmann.
- *	Sat May 19, 2007: Davi E. M. Arnaut <davi@haxent.com.br>
+ *	Sat May 19, 2007: Davi E. M. Arnaut <davi@haxent.com.br>
 *	Retrieve multiple signals with one read() call
+ *	Sun Jul 15, 2007: Davide Libenzi <davidel@xmailserver.org>
+ *	Attach to the sighand only during read() and poll().
 */

#include <linux/file.h>
@@ -27,102 +29,12 @@
#include <linux/signalfd.h>

struct signalfd_ctx {
-	struct list_head lnk;
-	wait_queue_head_t wqh;
	sigset_t sigmask;
-	struct task_struct *tsk;
};

-struct signalfd_lockctx {
-	struct task_struct *tsk;
-	unsigned long flags;
-};
-
-/*
- * Tries to acquire the sighand lock. We do not increment the sighand
- * use count, and we do not even pin the task struct, so we need to
- * do it inside an RCU read lock, and we must be prepared for the
- * ctx->tsk going to NULL (in signalfd_deliver()), and for the sighand
- * being detached.
We return 0 if the sighand has been detached, or - * 1 if we were able to pin the sighand lock. - */ -static int signalfd_lock(struct signalfd_ctx *ctx, struct signalfd_lockctx *lk) -{ - struct sighand_struct *sighand = NULL; - - rcu_read_lock(); - lk->tsk = rcu_dereference(ctx->tsk); - if (likely(lk->tsk != NULL)) - sighand = lock_task_sighand(lk->tsk, &lk->flags); - rcu_read_unlock(); - - if (!sighand) - return 0; - - if (!ctx->tsk) { - unlock_task_sighand(lk->tsk, &lk->flags); - return 0; - } - - if (lk->tsk->tgid == current->tgid) - lk->tsk = current; - - return 1; -} - -static void signalfd_unlock(struct signalfd_lockctx *lk) -{ - unlock_task_sighand(lk->tsk, &lk->flags); -} - -/* - * This must be called with the sighand lock held. - */ -void signalfd_deliver(struct task_struct *tsk, int sig) -{ - struct sighand_struct *sighand = tsk->sighand; - struct signalfd_ctx *ctx, *tmp; - - BUG_ON(!sig); - list_for_each_entry_safe(ctx, tmp, &sighand->signalfd_list, lnk) { - /* - * We use a negative signal value as a way to broadcast that the - * sighand has been orphaned, so that we can notify all the - * listeners about this. Remember the ctx->sigmask is inverted, - * so if the user is interested in a signal, that corresponding - * bit will be zero. - */ - if (sig < 0) { - if (ctx->tsk == tsk) { - ctx->tsk = NULL; - list_del_init(&ctx->lnk); - wake_up(&ctx->wqh); - } - } else { - if (!sigismember(&ctx->sigmask, sig)) - wake_up(&ctx->wqh); - } - } -} - -static void signalfd_cleanup(struct signalfd_ctx *ctx) -{ - struct signalfd_lockctx lk; - - /* - * This is tricky. If the sighand is gone, we do not need to remove - * context from the list, the list itself won't be there anymore. - */ - if (signalfd_lock(ctx, &lk)) { - list_del(&ctx->lnk); - signalfd_unlock(&lk); - } - kfree(ctx); -} - static int signalfd_release(struct inode *inode, struct file *file) { - signalfd_cleanup(file->private_data); + kfree(file->private_data); return 0; } @@ -130,23 +42,15 @@ static unsigned int signalfd_poll(struct file *file, poll_table *wait) { struct signalfd_ctx *ctx = file->private_data; unsigned int events = 0; - struct signalfd_lockctx lk; - poll_wait(file, &ctx->wqh, wait); + poll_wait(file, ¤t->sighand->signalfd_wqh, wait); - /* - * Let the caller get a POLLIN in this case, ala socket recv() when - * the peer disconnects. 
- */ - if (signalfd_lock(ctx, &lk)) { - if ((lk.tsk == current && - next_signal(&lk.tsk->pending, &ctx->sigmask) > 0) || - next_signal(&lk.tsk->signal->shared_pending, - &ctx->sigmask) > 0) - events |= POLLIN; - signalfd_unlock(&lk); - } else + spin_lock_irq(¤t->sighand->siglock); + if (next_signal(¤t->pending, &ctx->sigmask) || + next_signal(¤t->signal->shared_pending, + &ctx->sigmask)) events |= POLLIN; + spin_unlock_irq(¤t->sighand->siglock); return events; } @@ -219,59 +123,46 @@ static ssize_t signalfd_dequeue(struct signalfd_ctx *ctx, siginfo_t *info, int nonblock) { ssize_t ret; - struct signalfd_lockctx lk; DECLARE_WAITQUEUE(wait, current); - if (!signalfd_lock(ctx, &lk)) - return 0; - - ret = dequeue_signal(lk.tsk, &ctx->sigmask, info); + spin_lock_irq(¤t->sighand->siglock); + ret = dequeue_signal(current, &ctx->sigmask, info); switch (ret) { case 0: if (!nonblock) break; ret = -EAGAIN; default: - signalfd_unlock(&lk); + spin_unlock_irq(¤t->sighand->siglock); return ret; } - add_wait_queue(&ctx->wqh, &wait); + add_wait_queue(¤t->sighand->signalfd_wqh, &wait); for (;;) { set_current_state(TASK_INTERRUPTIBLE); - ret = dequeue_signal(lk.tsk, &ctx->sigmask, info); - signalfd_unlock(&lk); + ret = dequeue_signal(current, &ctx->sigmask, info); if (ret != 0) break; if (signal_pending(current)) { ret = -ERESTARTSYS; break; } + spin_unlock_irq(¤t->sighand->siglock); schedule(); - ret = signalfd_lock(ctx, &lk); - if (unlikely(!ret)) { - /* - * Let the caller read zero byte, ala socket - * recv() when the peer disconnect. This test - * must be done before doing a dequeue_signal(), - * because if the sighand has been orphaned, - * the dequeue_signal() call is going to crash - * because ->sighand will be long gone. - */ - break; - } + spin_lock_irq(¤t->sighand->siglock); } + spin_unlock_irq(¤t->sighand->siglock); - remove_wait_queue(&ctx->wqh, &wait); + remove_wait_queue(¤t->sighand->signalfd_wqh, &wait); __set_current_state(TASK_RUNNING); return ret; } /* - * Returns either the size of a "struct signalfd_siginfo", or zero if the - * sighand we are attached to, has been orphaned. The "count" parameter - * must be at least the size of a "struct signalfd_siginfo". + * Returns a multiple of the size of a "struct signalfd_siginfo", or a negative + * error code. The "count" parameter must be at least the size of a + * "struct signalfd_siginfo". */ static ssize_t signalfd_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) @@ -287,7 +178,6 @@ static ssize_t signalfd_read(struct file *file, char __user *buf, size_t count, return -EINVAL; siginfo = (struct signalfd_siginfo __user *) buf; - do { ret = signalfd_dequeue(ctx, &info, nonblock); if (unlikely(ret <= 0)) @@ -300,7 +190,7 @@ static ssize_t signalfd_read(struct file *file, char __user *buf, size_t count, nonblock = 1; } while (--count); - return total ? total : ret; + return total ? total: ret; } static const struct file_operations signalfd_fops = { @@ -309,20 +199,13 @@ static const struct file_operations signalfd_fops = { .read = signalfd_read, }; -/* - * Create a file descriptor that is associated with our signal - * state. We can pass it around to others if we want to, but - * it will always be _our_ signal state. 
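For reference, the consumer-side call sequence this syscall implements, as a userspace sketch (this assumes the later glibc wrapper; the raw syscall below takes the mask size as its third argument instead of flags; the signal must be blocked first so it is queued to the fd rather than delivered to a handler):

	sigset_t mask;
	struct signalfd_siginfo info;
	int fd;

	sigemptyset(&mask);
	sigaddset(&mask, SIGINT);
	sigprocmask(SIG_BLOCK, &mask, NULL);	/* route SIGINT to the fd */
	fd = signalfd(-1, &mask, 0);
	read(fd, &info, sizeof(info));		/* one signalfd_siginfo per signal */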
- */ asmlinkage long sys_signalfd(int ufd, sigset_t __user *user_mask, size_t sizemask) { int error; sigset_t sigmask; struct signalfd_ctx *ctx; - struct sighand_struct *sighand; struct file *file; struct inode *inode; - struct signalfd_lockctx lk; if (sizemask != sizeof(sigset_t) || copy_from_user(&sigmask, user_mask, sizeof(sigmask))) @@ -335,17 +218,7 @@ asmlinkage long sys_signalfd(int ufd, sigset_t __user *user_mask, size_t sizemas if (!ctx) return -ENOMEM; - init_waitqueue_head(&ctx->wqh); ctx->sigmask = sigmask; - ctx->tsk = current->group_leader; - - sighand = current->sighand; - /* - * Add this fd to the list of signal listeners. - */ - spin_lock_irq(&sighand->siglock); - list_add_tail(&ctx->lnk, &sighand->signalfd_list); - spin_unlock_irq(&sighand->siglock); /* * When we call this, the initialization must be complete, since @@ -364,23 +237,18 @@ asmlinkage long sys_signalfd(int ufd, sigset_t __user *user_mask, size_t sizemas fput(file); return -EINVAL; } - /* - * We need to be prepared of the fact that the sighand this fd - * is attached to, has been detched. In that case signalfd_lock() - * will return 0, and we'll just skip setting the new mask. - */ - if (signalfd_lock(ctx, &lk)) { - ctx->sigmask = sigmask; - signalfd_unlock(&lk); - } - wake_up(&ctx->wqh); + spin_lock_irq(¤t->sighand->siglock); + ctx->sigmask = sigmask; + spin_unlock_irq(¤t->sighand->siglock); + + wake_up(¤t->sighand->signalfd_wqh); fput(file); } return ufd; err_fdalloc: - signalfd_cleanup(ctx); + kfree(ctx); return error; } diff --git a/fs/splice.c b/fs/splice.c index c010a72..e95a362 100644 --- a/fs/splice.c +++ b/fs/splice.c @@ -1224,6 +1224,33 @@ static long do_splice(struct file *in, loff_t __user *off_in, } /* + * Do a copy-from-user while holding the mmap_semaphore for reading, in a + * manner safe from deadlocking with simultaneous mmap() (grabbing mmap_sem + * for writing) and page faulting on the user memory pointed to by src. + * This assumes that we will very rarely hit the partial != 0 path, or this + * will not be a win. + */ +static int copy_from_user_mmap_sem(void *dst, const void __user *src, size_t n) +{ + int partial; + + pagefault_disable(); + partial = __copy_from_user_inatomic(dst, src, n); + pagefault_enable(); + + /* + * Didn't copy everything, drop the mmap_sem and do a faulting copy + */ + if (unlikely(partial)) { + up_read(¤t->mm->mmap_sem); + partial = copy_from_user(dst, src, n); + down_read(¤t->mm->mmap_sem); + } + + return partial; +} + +/* * Map an iov into an array of pages and offset/length tupples. With the * partial_page structure, we can map several non-contiguous ranges into * our ones pages[] map instead of splitting that operation into pieces. @@ -1236,31 +1263,26 @@ static int get_iovec_page_array(const struct iovec __user *iov, { int buffers = 0, error = 0; - /* - * It's ok to take the mmap_sem for reading, even - * across a "get_user()". - */ down_read(¤t->mm->mmap_sem); while (nr_vecs) { unsigned long off, npages; + struct iovec entry; void __user *base; size_t len; int i; - /* - * Get user address base and length for this iovec. - */ - error = get_user(base, &iov->iov_base); - if (unlikely(error)) - break; - error = get_user(len, &iov->iov_len); - if (unlikely(error)) + error = -EFAULT; + if (copy_from_user_mmap_sem(&entry, iov, sizeof(entry))) break; + base = entry.iov_base; + len = entry.iov_len; + /* * Sanity check this iovec. 0 read succeeds. 
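The copy_from_user_mmap_sem() helper above exists because a fault taken while mmap_sem is held for reading can deadlock: rwsem readers queue behind a waiting writer (for example another thread in mmap()), so a nested read acquisition from the fault path may block forever. The safe shape of the pattern, independent of splice:

	down_read(&current->mm->mmap_sem);
	pagefault_disable();
	partial = __copy_from_user_inatomic(dst, src, n); /* cannot sleep on a fault */
	pagefault_enable();
	if (unlikely(partial)) {
		up_read(&current->mm->mmap_sem);	/* fault outside the lock */
		partial = copy_from_user(dst, src, n);
		down_read(&current->mm->mmap_sem);
	}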
*/ + error = 0; if (unlikely(!len)) break; error = -EFAULT; diff --git a/fs/ufs/super.c b/fs/ufs/super.c index 73402c5..38eb0b7 100644 --- a/fs/ufs/super.c +++ b/fs/ufs/super.c @@ -894,7 +894,7 @@ magic_found: goto again; } - + sbi->s_flags = flags;/*after that line some functions use s_flags*/ ufs_print_super_stuff(sb, usb1, usb2, usb3); /* @@ -1025,8 +1025,6 @@ magic_found: UFS_MOUNT_UFSTYPE_44BSD) uspi->s_maxsymlinklen = fs32_to_cpu(sb, usb3->fs_un2.fs_44.fs_maxsymlinklen); - - sbi->s_flags = flags; inode = iget(sb, UFS_ROOTINO); if (!inode || is_bad_inode(inode)) diff --git a/fs/xfs/xfs_buf_item.h b/fs/xfs/xfs_buf_item.h index fa25b7d..d7e1361 100644 --- a/fs/xfs/xfs_buf_item.h +++ b/fs/xfs/xfs_buf_item.h @@ -52,11 +52,6 @@ typedef struct xfs_buf_log_format_t { #define XFS_BLI_UDQUOT_BUF 0x4 #define XFS_BLI_PDQUOT_BUF 0x8 #define XFS_BLI_GDQUOT_BUF 0x10 -/* - * This flag indicates that the buffer contains newly allocated - * inodes. - */ -#define XFS_BLI_INODE_NEW_BUF 0x20 #define XFS_BLI_CHUNK 128 #define XFS_BLI_SHIFT 7 diff --git a/fs/xfs/xfs_filestream.c b/fs/xfs/xfs_filestream.c index 16f8e17..36d8f6a 100644 --- a/fs/xfs/xfs_filestream.c +++ b/fs/xfs/xfs_filestream.c @@ -350,9 +350,10 @@ _xfs_filestream_update_ag( /* xfs_fstrm_free_func(): callback for freeing cached stream items. */ void xfs_fstrm_free_func( - xfs_ino_t ino, - fstrm_item_t *item) + unsigned long ino, + void *data) { + fstrm_item_t *item = (fstrm_item_t *)data; xfs_inode_t *ip = item->ip; int ref; @@ -438,7 +439,7 @@ xfs_filestream_mount( grp_count = 10; err = xfs_mru_cache_create(&mp->m_filestream, lifetime, grp_count, - (xfs_mru_cache_free_func_t)xfs_fstrm_free_func); + xfs_fstrm_free_func); return err; } diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index dacb197..8ae6e8e 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -1874,7 +1874,6 @@ xlog_recover_do_inode_buffer( /*ARGSUSED*/ STATIC void xlog_recover_do_reg_buffer( - xfs_mount_t *mp, xlog_recover_item_t *item, xfs_buf_t *bp, xfs_buf_log_format_t *buf_f) @@ -1885,50 +1884,6 @@ xlog_recover_do_reg_buffer( unsigned int *data_map = NULL; unsigned int map_size = 0; int error; - int stale_buf = 1; - - /* - * Scan through the on-disk inode buffer and attempt to - * determine if it has been written to since it was logged. - * - * - If any of the magic numbers are incorrect then the buffer is stale - * - If any of the modes are non-zero then the buffer is not stale - * - If all of the modes are zero and at least one of the generation - * counts is non-zero then the buffer is stale - * - * If the end result is a stale buffer then the log buffer is replayed - * otherwise it is skipped. - * - * This heuristic is not perfect. It can be improved by scanning the - * entire inode chunk for evidence that any of the inode clusters have - * been updated. To fix this problem completely we will need a major - * architectural change to the logging system. 
- */ - if (buf_f->blf_flags & XFS_BLI_INODE_NEW_BUF) { - xfs_dinode_t *dip; - int inodes_per_buf; - int mode_count = 0; - int gen_count = 0; - - stale_buf = 0; - inodes_per_buf = XFS_BUF_COUNT(bp) >> mp->m_sb.sb_inodelog; - for (i = 0; i < inodes_per_buf; i++) { - dip = (xfs_dinode_t *)xfs_buf_offset(bp, - i * mp->m_sb.sb_inodesize); - if (be16_to_cpu(dip->di_core.di_magic) != - XFS_DINODE_MAGIC) { - stale_buf = 1; - break; - } - if (be16_to_cpu(dip->di_core.di_mode)) - mode_count++; - if (be16_to_cpu(dip->di_core.di_gen)) - gen_count++; - } - - if (!mode_count && gen_count) - stale_buf = 1; - } switch (buf_f->blf_type) { case XFS_LI_BUF: @@ -1962,7 +1917,7 @@ xlog_recover_do_reg_buffer( -1, 0, XFS_QMOPT_DOWARN, "dquot_buf_recover"); } - if (!error && stale_buf) + if (!error) memcpy(xfs_buf_offset(bp, (uint)bit << XFS_BLI_SHIFT), /* dest */ item->ri_buf[i].i_addr, /* source */ @@ -2134,7 +2089,7 @@ xlog_recover_do_dquot_buffer( if (log->l_quotaoffs_flag & type) return; - xlog_recover_do_reg_buffer(mp, item, bp, buf_f); + xlog_recover_do_reg_buffer(item, bp, buf_f); } /* @@ -2235,7 +2190,7 @@ xlog_recover_do_buffer_trans( (XFS_BLI_UDQUOT_BUF|XFS_BLI_PDQUOT_BUF|XFS_BLI_GDQUOT_BUF)) { xlog_recover_do_dquot_buffer(mp, log, item, bp, buf_f); } else { - xlog_recover_do_reg_buffer(mp, item, bp, buf_f); + xlog_recover_do_reg_buffer(item, bp, buf_f); } if (error) return XFS_ERROR(error); diff --git a/fs/xfs/xfs_trans_buf.c b/fs/xfs/xfs_trans_buf.c index 95fff68..60b6b89 100644 --- a/fs/xfs/xfs_trans_buf.c +++ b/fs/xfs/xfs_trans_buf.c @@ -966,7 +966,6 @@ xfs_trans_inode_alloc_buf( ASSERT(atomic_read(&bip->bli_refcount) > 0); bip->bli_flags |= XFS_BLI_INODE_ALLOC_BUF; - bip->bli_format.blf_flags |= XFS_BLI_INODE_NEW_BUF; } diff --git a/include/acpi/acpi_drivers.h b/include/acpi/acpi_drivers.h index 202acb9..f85f77a 100644 --- a/include/acpi/acpi_drivers.h +++ b/include/acpi/acpi_drivers.h @@ -147,10 +147,6 @@ static inline void unregister_hotplug_dock_device(acpi_handle handle) /*-------------------------------------------------------------------------- Suspend/Resume -------------------------------------------------------------------------- */ -#ifdef CONFIG_ACPI_SLEEP extern int acpi_sleep_init(void); -#else -static inline int acpi_sleep_init(void) { return 0; } -#endif #endif /*__ACPI_DRIVERS_H__*/ diff --git a/include/acpi/processor.h b/include/acpi/processor.h index ec3ffda..99934a9 100644 --- a/include/acpi/processor.h +++ b/include/acpi/processor.h @@ -320,6 +320,8 @@ int acpi_processor_power_init(struct acpi_processor *pr, int acpi_processor_cst_has_changed(struct acpi_processor *pr); int acpi_processor_power_exit(struct acpi_processor *pr, struct acpi_device *device); +int acpi_processor_suspend(struct acpi_device * device, pm_message_t state); +int acpi_processor_resume(struct acpi_device * device); /* in processor_thermal.c */ int acpi_processor_get_limit_info(struct acpi_processor *pr); diff --git a/include/asm-i386/system.h b/include/asm-i386/system.h index 609756c..d69ba93 100644 --- a/include/asm-i386/system.h +++ b/include/asm-i386/system.h @@ -214,11 +214,6 @@ static inline unsigned long get_limit(unsigned long segment) */ -/* - * Actually only lfence would be needed for mb() because all stores done - * by the kernel should be already ordered. But keep a full barrier for now. 
- */ - #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2) #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2) diff --git a/include/asm-mips/fcntl.h b/include/asm-mips/fcntl.h index 00a50ec..2a52333 100644 --- a/include/asm-mips/fcntl.h +++ b/include/asm-mips/fcntl.h @@ -13,6 +13,7 @@ #define O_SYNC 0x0010 #define O_NONBLOCK 0x0080 #define O_CREAT 0x0100 /* not fcntl */ +#define O_TRUNC 0x0200 /* not fcntl */ #define O_EXCL 0x0400 /* not fcntl */ #define O_NOCTTY 0x0800 /* not fcntl */ #define FASYNC 0x1000 /* fcntl, for BSD compatibility */ diff --git a/include/asm-mips/irq.h b/include/asm-mips/irq.h index 97102eb..2cb52cf 100644 --- a/include/asm-mips/irq.h +++ b/include/asm-mips/irq.h @@ -24,7 +24,30 @@ static inline int irq_canonicalize(int irq) #define irq_canonicalize(irq) (irq) /* Sane hardware, sane code ... */ #endif +#ifdef CONFIG_MIPS_MT_SMTC + +struct irqaction; + +extern unsigned long irq_hwmask[]; +extern int setup_irq_smtc(unsigned int irq, struct irqaction * new, + unsigned long hwmask); + +static inline void smtc_im_ack_irq(unsigned int irq) +{ + if (irq_hwmask[irq] & ST0_IM) + set_c0_status(irq_hwmask[irq] & ST0_IM); +} + +#else + +static inline void smtc_im_ack_irq(unsigned int irq) +{ +} + +#endif /* CONFIG_MIPS_MT_SMTC */ + #ifdef CONFIG_MIPS_MT_SMTC_IM_BACKSTOP + /* * Clear interrupt mask handling "backstop" if irq_hwmask * entry so indicates. This implies that the ack() or end() @@ -38,6 +61,7 @@ do { \ ~(irq_hwmask[irq] & 0x0000ff00)); \ } while (0) #else + #define __DO_IRQ_SMTC_HOOK(irq) do { } while (0) #endif @@ -60,14 +84,6 @@ do { \ extern void arch_init_irq(void); extern void spurious_interrupt(void); -#ifdef CONFIG_MIPS_MT_SMTC -struct irqaction; - -extern unsigned long irq_hwmask[]; -extern int setup_irq_smtc(unsigned int irq, struct irqaction * new, - unsigned long hwmask); -#endif /* CONFIG_MIPS_MT_SMTC */ - extern int allocate_irqno(void); extern void alloc_legacy_irqno(void); extern void free_irqno(unsigned int irq); diff --git a/include/asm-mips/page.h b/include/asm-mips/page.h index b92dd8c..e3301e5 100644 --- a/include/asm-mips/page.h +++ b/include/asm-mips/page.h @@ -142,7 +142,7 @@ typedef struct { unsigned long pgprot; } pgprot_t; /* * __pa()/__va() should be used only during mem init. 
*/ -#if defined(CONFIG_64BIT) && !defined(CONFIG_BUILD_ELF64) +#ifdef CONFIG_64BIT #define __pa(x) \ ({ \ unsigned long __x = (unsigned long)(x); \ diff --git a/include/asm-x86_64/pgalloc.h b/include/asm-x86_64/pgalloc.h index b467be6..8bb5646 100644 --- a/include/asm-x86_64/pgalloc.h +++ b/include/asm-x86_64/pgalloc.h @@ -4,10 +4,6 @@ #include <asm/pda.h> #include <linux/threads.h> #include <linux/mm.h> -#include <linux/quicklist.h> - -#define QUICK_PGD 0 /* We preserve special mappings over free */ -#define QUICK_PT 1 /* Other page table pages that are zero on free */ #define pmd_populate_kernel(mm, pmd, pte) \ set_pmd(pmd, __pmd(_PAGE_TABLE | __pa(pte))) @@ -24,23 +20,23 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, struct page *p static inline void pmd_free(pmd_t *pmd) { BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); - quicklist_free(QUICK_PT, NULL, pmd); + free_page((unsigned long)pmd); } static inline pmd_t *pmd_alloc_one (struct mm_struct *mm, unsigned long addr) { - return (pmd_t *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); + return (pmd_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); } static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) { - return (pud_t *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); + return (pud_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); } static inline void pud_free (pud_t *pud) { BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); - quicklist_free(QUICK_PT, NULL, pud); + free_page((unsigned long)pud); } static inline void pgd_list_add(pgd_t *pgd) @@ -61,57 +57,41 @@ static inline void pgd_list_del(pgd_t *pgd) spin_unlock(&pgd_lock); } -static inline void pgd_ctor(void *x) +static inline pgd_t *pgd_alloc(struct mm_struct *mm) { unsigned boundary; - pgd_t *pgd = x; - struct page *page = virt_to_page(pgd); - + pgd_t *pgd = (pgd_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT); + if (!pgd) + return NULL; + pgd_list_add(pgd); /* * Copy kernel pointers in from init. + * Could keep a freelist or slab cache of those because the kernel + * part never changes. 
*/ boundary = pgd_index(__PAGE_OFFSET); + memset(pgd, 0, boundary * sizeof(pgd_t)); memcpy(pgd + boundary, - init_level4_pgt + boundary, - (PTRS_PER_PGD - boundary) * sizeof(pgd_t)); - - spin_lock(&pgd_lock); - list_add(&page->lru, &pgd_list); - spin_unlock(&pgd_lock); -} - -static inline void pgd_dtor(void *x) -{ - pgd_t *pgd = x; - struct page *page = virt_to_page(pgd); - - spin_lock(&pgd_lock); - list_del(&page->lru); - spin_unlock(&pgd_lock); -} - -static inline pgd_t *pgd_alloc(struct mm_struct *mm) -{ - pgd_t *pgd = (pgd_t *)quicklist_alloc(QUICK_PGD, - GFP_KERNEL|__GFP_REPEAT, pgd_ctor); + init_level4_pgt + boundary, + (PTRS_PER_PGD - boundary) * sizeof(pgd_t)); return pgd; } static inline void pgd_free(pgd_t *pgd) { BUG_ON((unsigned long)pgd & (PAGE_SIZE-1)); - quicklist_free(QUICK_PGD, pgd_dtor, pgd); + pgd_list_del(pgd); + free_page((unsigned long)pgd); } static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address) { - return (pte_t *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); + return (pte_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); } static inline struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address) { - void *p = (void *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); - + void *p = (void *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); if (!p) return NULL; return virt_to_page(p); @@ -123,22 +103,17 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm, unsigned long add static inline void pte_free_kernel(pte_t *pte) { BUG_ON((unsigned long)pte & (PAGE_SIZE-1)); - quicklist_free(QUICK_PT, NULL, pte); + free_page((unsigned long)pte); } static inline void pte_free(struct page *pte) { - quicklist_free_page(QUICK_PT, NULL, pte); -} + __free_page(pte); +} -#define __pte_free_tlb(tlb,pte) quicklist_free_page(QUICK_PT, NULL,(pte)) +#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte)) -#define __pmd_free_tlb(tlb,x) quicklist_free(QUICK_PT, NULL, (x)) -#define __pud_free_tlb(tlb,x) quicklist_free(QUICK_PT, NULL, (x)) +#define __pmd_free_tlb(tlb,x) tlb_remove_page((tlb),virt_to_page(x)) +#define __pud_free_tlb(tlb,x) tlb_remove_page((tlb),virt_to_page(x)) -static inline void check_pgt_cache(void) -{ - quicklist_trim(QUICK_PGD, pgd_dtor, 25, 16); - quicklist_trim(QUICK_PT, NULL, 25, 16); -} #endif /* _X86_64_PGALLOC_H */ diff --git a/include/asm-x86_64/pgtable.h b/include/asm-x86_64/pgtable.h index c9d8764..57dd6b3 100644 --- a/include/asm-x86_64/pgtable.h +++ b/include/asm-x86_64/pgtable.h @@ -411,6 +411,7 @@ pte_t *lookup_address(unsigned long addr); #define HAVE_ARCH_UNMAPPED_AREA #define pgtable_cache_init() do { } while (0) +#define check_pgt_cache() do { } while (0) #define PAGE_AGP PAGE_KERNEL_NOCACHE #define HAVE_PAGE_AGP 1 diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h index 963051a..3ec6e7f 100644 --- a/include/linux/cpufreq.h +++ b/include/linux/cpufreq.h @@ -32,15 +32,7 @@ * CPUFREQ NOTIFIER INTERFACE * *********************************************************************/ -#ifdef CONFIG_CPU_FREQ int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list); -#else -static inline int cpufreq_register_notifier(struct notifier_block *nb, - unsigned int list) -{ - return 0; -} -#endif int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list); #define CPUFREQ_TRANSITION_NOTIFIER (0) @@ -268,22 +260,17 @@ struct freq_attr { int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu); int cpufreq_update_policy(unsigned int cpu); +/* query 
the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */ +unsigned int cpufreq_get(unsigned int cpu); -/* - * query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it - */ +/* query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it */ #ifdef CONFIG_CPU_FREQ unsigned int cpufreq_quick_get(unsigned int cpu); -unsigned int cpufreq_get(unsigned int cpu); #else static inline unsigned int cpufreq_quick_get(unsigned int cpu) { return 0; } -static inline unsigned int cpufreq_get(unsigned int cpu) -{ - return 0; -} #endif diff --git a/include/linux/init_task.h b/include/linux/init_task.h index cab741c..f8abfa3 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h @@ -86,7 +86,7 @@ extern struct nsproxy init_nsproxy; .count = ATOMIC_INIT(1), \ .action = { { { .sa_handler = NULL, } }, }, \ .siglock = __SPIN_LOCK_UNLOCKED(sighand.siglock), \ - .signalfd_list = LIST_HEAD_INIT(sighand.signalfd_list), \ + .signalfd_wqh = __WAIT_QUEUE_HEAD_INITIALIZER(sighand.signalfd_wqh), \ } extern struct group_info init_groups; diff --git a/include/linux/sched.h b/include/linux/sched.h index 5445eae..a01ac6d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -438,7 +438,7 @@ struct sighand_struct { atomic_t count; struct k_sigaction action[_NSIG]; spinlock_t siglock; - struct list_head signalfd_list; + wait_queue_head_t signalfd_wqh; }; struct pacct_struct { @@ -1406,6 +1406,7 @@ extern unsigned int sysctl_sched_wakeup_granularity; extern unsigned int sysctl_sched_batch_wakeup_granularity; extern unsigned int sysctl_sched_stat_granularity; extern unsigned int sysctl_sched_runtime_limit; +extern unsigned int sysctl_sched_compat_yield; extern unsigned int sysctl_sched_child_runs_first; extern unsigned int sysctl_sched_features; diff --git a/include/linux/signalfd.h b/include/linux/signalfd.h index 5104294..4c9ff09 100644 --- a/include/linux/signalfd.h +++ b/include/linux/signalfd.h @@ -45,49 +45,17 @@ struct signalfd_siginfo { #ifdef CONFIG_SIGNALFD /* - * Deliver the signal to listening signalfd. This must be called - * with the sighand lock held. Same are the following that end up - * calling signalfd_deliver(). - */ -void signalfd_deliver(struct task_struct *tsk, int sig); - -/* - * No need to fall inside signalfd_deliver() if no signal listeners - * are available. + * Deliver the signal to listening signalfd. */ static inline void signalfd_notify(struct task_struct *tsk, int sig) { - if (unlikely(!list_empty(&tsk->sighand->signalfd_list))) - signalfd_deliver(tsk, sig); -} - -/* - * The signal -1 is used to notify the signalfd that the sighand - * is on its way to be detached. 
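With the per-sighand listener list gone, a signalfd reader sleeps directly on sighand->signalfd_wqh, and notification reduces to the guarded-wakeup idiom below (a sketch of what the new signalfd_notify() does, not additional patch code). The waitqueue_active() shortcut is only race-free because both the waker in the signal-delivery path and the sleeper run under sighand->siglock; without a shared lock the check could miss a task that is just about to add itself to the queue.

/* Sketch; assumes ->siglock is held by the caller: */
static inline void notify_listeners(struct sighand_struct *sighand)
{
        if (unlikely(waitqueue_active(&sighand->signalfd_wqh)))
                wake_up(&sighand->signalfd_wqh);
}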
- */ -static inline void signalfd_detach_locked(struct task_struct *tsk) -{ - if (unlikely(!list_empty(&tsk->sighand->signalfd_list))) - signalfd_deliver(tsk, -1); -} - -static inline void signalfd_detach(struct task_struct *tsk) -{ - struct sighand_struct *sighand = tsk->sighand; - - if (unlikely(!list_empty(&sighand->signalfd_list))) { - spin_lock_irq(&sighand->siglock); - signalfd_deliver(tsk, -1); - spin_unlock_irq(&sighand->siglock); - } + if (unlikely(waitqueue_active(&tsk->sighand->signalfd_wqh))) + wake_up(&tsk->sighand->signalfd_wqh); } #else /* CONFIG_SIGNALFD */ -#define signalfd_deliver(t, s) do { } while (0) -#define signalfd_notify(t, s) do { } while (0) -#define signalfd_detach_locked(t) do { } while (0) -#define signalfd_detach(t) do { } while (0) +static inline void signalfd_notify(struct task_struct *tsk, int sig) { } #endif /* CONFIG_SIGNALFD */ diff --git a/include/net/sctp/sm.h b/include/net/sctp/sm.h index 991c85b..e8e3a64 100644 --- a/include/net/sctp/sm.h +++ b/include/net/sctp/sm.h @@ -114,7 +114,6 @@ sctp_state_fn_t sctp_sf_do_4_C; sctp_state_fn_t sctp_sf_eat_data_6_2; sctp_state_fn_t sctp_sf_eat_data_fast_4_4; sctp_state_fn_t sctp_sf_eat_sack_6_2; -sctp_state_fn_t sctp_sf_tabort_8_4_8; sctp_state_fn_t sctp_sf_operr_notify; sctp_state_fn_t sctp_sf_t1_init_timer_expire; sctp_state_fn_t sctp_sf_t1_cookie_timer_expire; @@ -247,6 +246,9 @@ struct sctp_chunk *sctp_make_asconf_update_ip(struct sctp_association *, int, __be16); struct sctp_chunk *sctp_make_asconf_set_prim(struct sctp_association *asoc, union sctp_addr *addr); +int sctp_verify_asconf(const struct sctp_association *asoc, + struct sctp_paramhdr *param_hdr, void *chunk_end, + struct sctp_paramhdr **errp); struct sctp_chunk *sctp_process_asconf(struct sctp_association *asoc, struct sctp_chunk *asconf); int sctp_process_asconf_ack(struct sctp_association *asoc, diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h index c2fe2dc..baff49d 100644 --- a/include/net/sctp/structs.h +++ b/include/net/sctp/structs.h @@ -421,6 +421,7 @@ struct sctp_signed_cookie { * internally. 
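The structs.h hunk just below adds a bare sctp_paramhdr member 'p' to union sctp_addr_param, so code that only needs the parameter's type and length no longer has to pick between the v4 and v6 variants. A self-contained sketch of the idiom (shortened names; the kernel stores these fields in network byte order):

#include <stdint.h>
#include <stdio.h>

struct paramhdr { uint16_t type; uint16_t length; };
struct v4_param { struct paramhdr hdr; uint32_t addr; };
struct v6_param { struct paramhdr hdr; uint8_t addr[16]; };

union addr_param {
        struct paramhdr p;      /* common prefix, like the new 'p' member */
        struct v4_param v4;
        struct v6_param v6;
};

int main(void)
{
        union addr_param ap = { .v4 = { { 5, sizeof(struct v4_param) }, 0 } };

        /* The header is readable through .p whichever variant was stored: */
        printf("type %u, length %u\n", (unsigned)ap.p.type, (unsigned)ap.p.length);
        return 0;
}

Reading .p after storing .v4 is sanctioned by C's common-initial-sequence rule, since every member of the union begins with the same header struct.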
*/ union sctp_addr_param { + struct sctp_paramhdr p; struct sctp_ipv4addr_param v4; struct sctp_ipv6addr_param v6; }; @@ -1156,7 +1157,7 @@ int sctp_bind_addr_copy(struct sctp_bind_addr *dest, int sctp_add_bind_addr(struct sctp_bind_addr *, union sctp_addr *, __u8 use_as_src, gfp_t gfp); int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *, - void (*rcu_call)(struct rcu_head *, + void fastcall (*rcu_call)(struct rcu_head *, void (*func)(struct rcu_head *))); int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *, struct sctp_sock *); diff --git a/include/net/tcp.h b/include/net/tcp.h index 185c7ec..54053de 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1059,14 +1059,12 @@ struct tcp_md5sig_key { }; struct tcp4_md5sig_key { - u8 *key; - u16 keylen; + struct tcp_md5sig_key base; __be32 addr; }; struct tcp6_md5sig_key { - u8 *key; - u16 keylen; + struct tcp_md5sig_key base; #if 0 u32 scope_id; /* XXX */ #endif diff --git a/kernel/exit.c b/kernel/exit.c index 06b24b3..993369e 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -24,7 +24,6 @@ #include <linux/pid_namespace.h> #include <linux/ptrace.h> #include <linux/profile.h> -#include <linux/signalfd.h> #include <linux/mount.h> #include <linux/proc_fs.h> #include <linux/kthread.h> @@ -86,14 +85,6 @@ static void __exit_signal(struct task_struct *tsk) sighand = rcu_dereference(tsk->sighand); spin_lock(&sighand->siglock); - /* - * Notify that this sighand has been detached. This must - * be called with the tsk->sighand lock held. Also, this - * access tsk->sighand internally, so it must be called - * before tsk->sighand is reset. - */ - signalfd_detach_locked(tsk); - posix_cpu_timers_exit(tsk); if (atomic_dec_and_test(&sig->count)) posix_cpu_timers_exit_group(tsk); diff --git a/kernel/fork.c b/kernel/fork.c index 7332e23..33f12f4 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1438,7 +1438,7 @@ static void sighand_ctor(void *data, struct kmem_cache *cachep, struct sighand_struct *sighand = data; spin_lock_init(&sighand->siglock); - INIT_LIST_HEAD(&sighand->signalfd_list); + init_waitqueue_head(&sighand->signalfd_wqh); } void __init proc_caches_init(void) diff --git a/kernel/futex.c b/kernel/futex.c index e8935b1..fcc94e7 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -1943,9 +1943,10 @@ static inline int fetch_robust_entry(struct robust_list __user **entry, void exit_robust_list(struct task_struct *curr) { struct robust_list_head __user *head = curr->robust_list; - struct robust_list __user *entry, *pending; - unsigned int limit = ROBUST_LIST_LIMIT, pi, pip; + struct robust_list __user *entry, *next_entry, *pending; + unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip; unsigned long futex_offset; + int rc; /* * Fetch the list head (which was registered earlier, via @@ -1965,12 +1966,14 @@ void exit_robust_list(struct task_struct *curr) if (fetch_robust_entry(&pending, &head->list_op_pending, &pip)) return; - if (pending) - handle_futex_death((void __user *)pending + futex_offset, - curr, pip); - + next_entry = NULL; /* avoid warning with gcc */ while (entry != &head->list) { /* + * Fetch the next entry in the list before calling + * handle_futex_death: + */ + rc = fetch_robust_entry(&next_entry, &entry->next, &next_pi); + /* * A pending lock might already be on the list, so * don't process it twice: */ @@ -1978,11 +1981,10 @@ void exit_robust_list(struct task_struct *curr) if (handle_futex_death((void __user *)entry + futex_offset, curr, pi)) return; - /* - * Fetch the next entry in the 
list: - */ - if (fetch_robust_entry(&entry, &entry->next, &pi)) + if (rc) return; + entry = next_entry; + pi = next_pi; /* * Avoid excessively long or circular lists: */ @@ -1991,6 +1993,10 @@ void exit_robust_list(struct task_struct *curr) cond_resched(); } + + if (pending) + handle_futex_death((void __user *)pending + futex_offset, + curr, pip); } long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout, diff --git a/kernel/futex_compat.c b/kernel/futex_compat.c index 7e52eb0..2c2e295 100644 --- a/kernel/futex_compat.c +++ b/kernel/futex_compat.c @@ -38,10 +38,11 @@ fetch_robust_entry(compat_uptr_t *uentry, struct robust_list __user **entry, void compat_exit_robust_list(struct task_struct *curr) { struct compat_robust_list_head __user *head = curr->compat_robust_list; - struct robust_list __user *entry, *pending; - unsigned int limit = ROBUST_LIST_LIMIT, pi, pip; - compat_uptr_t uentry, upending; + struct robust_list __user *entry, *next_entry, *pending; + unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip; + compat_uptr_t uentry, next_uentry, upending; compat_long_t futex_offset; + int rc; /* * Fetch the list head (which was registered earlier, via @@ -61,11 +62,16 @@ void compat_exit_robust_list(struct task_struct *curr) if (fetch_robust_entry(&upending, &pending, &head->list_op_pending, &pip)) return; - if (pending) - handle_futex_death((void __user *)pending + futex_offset, curr, pip); + next_entry = NULL; /* avoid warning with gcc */ while (entry != (struct robust_list __user *) &head->list) { /* + * Fetch the next entry in the list before calling + * handle_futex_death: + */ + rc = fetch_robust_entry(&next_uentry, &next_entry, + (compat_uptr_t __user *)&entry->next, &next_pi); + /* * A pending lock might already be on the list, so * dont process it twice: */ @@ -74,12 +80,11 @@ void compat_exit_robust_list(struct task_struct *curr) curr, pi)) return; - /* - * Fetch the next entry in the list: - */ - if (fetch_robust_entry(&uentry, &entry, - (compat_uptr_t __user *)&entry->next, &pi)) + if (rc) return; + uentry = next_uentry; + entry = next_entry; + pi = next_pi; /* * Avoid excessively long or circular lists: */ @@ -88,6 +93,9 @@ void compat_exit_robust_list(struct task_struct *curr) cond_resched(); } + if (pending) + handle_futex_death((void __user *)pending + futex_offset, + curr, pip); } asmlinkage long diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index c8580a1..14b0e10 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -110,7 +110,7 @@ config SUSPEND config HIBERNATION_UP_POSSIBLE bool - depends on X86 || PPC64_SWSUSP || FRV || PPC32 + depends on X86 || PPC64_SWSUSP || PPC32 depends on !SMP default y diff --git a/kernel/sched.c b/kernel/sched.c index deeb1f8..6107a0c 100644 --- a/kernel/sched.c +++ b/kernel/sched.c @@ -1682,6 +1682,11 @@ void fastcall wake_up_new_task(struct task_struct *p, unsigned long clone_flags) p->prio = effective_prio(p); + if (rt_prio(p->prio)) + p->sched_class = &rt_sched_class; + else + p->sched_class = &fair_sched_class; + if (!p->sched_class->task_new || !sysctl_sched_child_runs_first || (clone_flags & CLONE_VM) || task_cpu(p) != this_cpu || !current->se.on_rq) { @@ -4550,10 +4555,7 @@ asmlinkage long sys_sched_yield(void) struct rq *rq = this_rq_lock(); schedstat_inc(rq, yld_cnt); - if (unlikely(rq->nr_running == 1)) - schedstat_inc(rq, yld_act_empty); - else - current->sched_class->yield_task(rq, current); + current->sched_class->yield_task(rq, current); /* * Since we are going to call schedule() anyway, 
there's diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c index 892616b..67c67a8 100644 --- a/kernel/sched_fair.c +++ b/kernel/sched_fair.c @@ -43,6 +43,14 @@ unsigned int sysctl_sched_latency __read_mostly = 20000000ULL; unsigned int sysctl_sched_min_granularity __read_mostly = 2000000ULL; /* + * sys_sched_yield() compat mode + * + * This option switches the aggressive yield implementation of the + * old scheduler back on. + */ +unsigned int __read_mostly sysctl_sched_compat_yield; + +/* * SCHED_BATCH wake-up granularity. * (default: 25 msec, units: nanoseconds) * @@ -631,6 +639,16 @@ static void enqueue_sleeper(struct cfs_rq *cfs_rq, struct sched_entity *se) se->block_start = 0; se->sum_sleep_runtime += delta; + + /* + * Blocking time is in units of nanosecs, so shift by 20 to + * get a milliseconds-range estimation of the amount of + * time that the task spent sleeping: + */ + if (unlikely(prof_on == SLEEP_PROFILING)) { + profile_hits(SLEEP_PROFILING, (void *)get_wchan(tsk), + delta >> 20); + } } #endif } @@ -897,19 +915,62 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int sleep) } /* - * sched_yield() support is very simple - we dequeue and enqueue + * sched_yield() support is very simple - we dequeue and enqueue. + * + * If compat_yield is turned on then we requeue to the end of the tree. */ static void yield_task_fair(struct rq *rq, struct task_struct *p) { struct cfs_rq *cfs_rq = task_cfs_rq(p); + struct rb_node **link = &cfs_rq->tasks_timeline.rb_node; + struct sched_entity *rightmost, *se = &p->se; + struct rb_node *parent; - __update_rq_clock(rq); /* - * Dequeue and enqueue the task to update its - * position within the tree: + * Are we the only task in the tree? + */ + if (unlikely(cfs_rq->nr_running == 1)) + return; + + if (likely(!sysctl_sched_compat_yield)) { + __update_rq_clock(rq); + /* + * Dequeue and enqueue the task to update its + * position within the tree: + */ + dequeue_entity(cfs_rq, &p->se, 0); + enqueue_entity(cfs_rq, &p->se, 0); + + return; + } + /* + * Find the rightmost entry in the rbtree: + */ + do { + parent = *link; + link = &parent->rb_right; + } while (*link); + + rightmost = rb_entry(parent, struct sched_entity, run_node); + /* + * Already in the rightmost position?
*/ - dequeue_entity(cfs_rq, &p->se, 0); - enqueue_entity(cfs_rq, &p->se, 0); + if (unlikely(rightmost == se)) + return; + + /* + * Minimally necessary key value to be last in the tree: + */ + se->fair_key = rightmost->fair_key + 1; + + if (cfs_rq->rb_leftmost == &se->run_node) + cfs_rq->rb_leftmost = rb_next(&se->run_node); + /* + * Relink the task to the rightmost position: + */ + rb_erase(&se->run_node, &cfs_rq->tasks_timeline); + rb_link_node(&se->run_node, parent, link); + rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline); } /* diff --git a/kernel/signal.c b/kernel/signal.c index 3169bed..9fb91a3 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -378,8 +378,7 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info) /* We only dequeue private signals from ourselves, we don't let * signalfd steal them */ - if (likely(tsk == current)) - signr = __dequeue_signal(&tsk->pending, mask, info); + signr = __dequeue_signal(&tsk->pending, mask, info); if (!signr) { signr = __dequeue_signal(&tsk->signal->shared_pending, mask, info); @@ -407,8 +406,7 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info) } } } - if (likely(tsk == current)) - recalc_sigpending(); + recalc_sigpending(); if (signr && unlikely(sig_kernel_stop(signr))) { /* * Set a marker that we have dequeued a stop signal. Our @@ -425,7 +423,7 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info) if (!(tsk->signal->flags & SIGNAL_GROUP_EXIT)) tsk->signal->flags |= SIGNAL_STOP_DEQUEUED; } - if (signr && likely(tsk == current) && + if (signr && ((info->si_code & __SI_MASK) == __SI_TIMER) && info->si_sys_private){ /* diff --git a/kernel/sys.c b/kernel/sys.c index 1b33b05..8ae2e63 100644 --- a/kernel/sys.c +++ b/kernel/sys.c @@ -32,6 +32,7 @@ #include <linux/getcpu.h> #include <linux/task_io_accounting_ops.h> #include <linux/seccomp.h> +#include <linux/cpu.h> #include <linux/compat.h> #include <linux/syscalls.h> @@ -878,6 +879,7 @@ void kernel_power_off(void) kernel_shutdown_prepare(SYSTEM_POWER_OFF); if (pm_power_off_prepare) pm_power_off_prepare(); + disable_nonboot_cpus(); sysdev_shutdown(); printk(KERN_EMERG "Power down.\n"); machine_power_off(); diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 6ace893..53a456e 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -303,6 +303,14 @@ static ctl_table kern_table[] = { .proc_handler = &proc_dointvec, }, #endif + { + .ctl_name = CTL_UNNUMBERED, + .procname = "sched_compat_yield", + .data = &sysctl_sched_compat_yield, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = &proc_dointvec, + }, #ifdef CONFIG_PROVE_LOCKING { .ctl_name = CTL_UNNUMBERED, diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c index aab881c..0962e05 100644 --- a/kernel/time/tick-broadcast.c +++ b/kernel/time/tick-broadcast.c @@ -382,23 +382,8 @@ static int tick_broadcast_set_event(ktime_t expires, int force) int tick_resume_broadcast_oneshot(struct clock_event_device *bc) { - int cpu = smp_processor_id(); - - /* - * If the CPU is marked for broadcast, enforce oneshot - * broadcast mode. The jinxed VAIO does not resume otherwise. - * No idea why it ends up in a lower C State during resume - * without notifying the clock events layer. 
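The kernel/sysctl.c entry above exposes the new scheduler knob; with the usual procfs layout it should surface as /proc/sys/kernel/sched_compat_yield (path inferred from the procname, not spelled out in the patch). A minimal program to switch the aggressive yield behaviour back on:

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/kernel/sched_compat_yield", "w");

        if (!f) {
                perror("sched_compat_yield");
                return 1;
        }
        fputs("1\n", f);        /* 1 = old aggressive yield, 0 = CFS default */
        return fclose(f) ? 1 : 0;
}

Since yield_task_fair() consults sysctl_sched_compat_yield on every sched_yield() call, flipping the flag takes effect immediately, with no reboot or process restart needed.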
- */ - if (cpu_isset(cpu, tick_broadcast_mask)) - cpu_set(cpu, tick_broadcast_oneshot_mask); - clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); - - if(!cpus_empty(tick_broadcast_oneshot_mask)) - tick_broadcast_set_event(ktime_get(), 1); - - return cpu_isset(cpu, tick_broadcast_oneshot_mask); + return 0; } /* diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 51e2fd0..39fc6ae 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -284,7 +284,7 @@ config LOCKDEP select KALLSYMS_ALL config LOCK_STAT - bool "Lock usage statisitics" + bool "Lock usage statistics" depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT select LOCKDEP select DEBUG_SPINLOCK diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 84c795e..eab8c42 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -42,7 +42,7 @@ static void clear_huge_page(struct page *page, unsigned long addr) might_sleep(); for (i = 0; i < (HPAGE_SIZE/PAGE_SIZE); i++) { cond_resched(); - clear_user_highpage(page + i, addr); + clear_user_highpage(page + i, addr + i * PAGE_SIZE); } } diff --git a/net/ieee80211/ieee80211_rx.c b/net/ieee80211/ieee80211_rx.c index f2de2e4..6284c99b 100644 --- a/net/ieee80211/ieee80211_rx.c +++ b/net/ieee80211/ieee80211_rx.c @@ -366,6 +366,12 @@ int ieee80211_rx(struct ieee80211_device *ieee, struct sk_buff *skb, frag = WLAN_GET_SEQ_FRAG(sc); hdrlen = ieee80211_get_hdrlen(fc); + if (skb->len < hdrlen) { + printk(KERN_INFO "%s: invalid SKB length %d\n", + dev->name, skb->len); + goto rx_dropped; + } + /* Put this code here so that we avoid duplicating it in all * Rx paths. - Jean II */ #ifdef CONFIG_WIRELESS_EXT diff --git a/net/ieee80211/softmac/ieee80211softmac_assoc.c b/net/ieee80211/softmac/ieee80211softmac_assoc.c index afb6c66..e475f2e 100644 --- a/net/ieee80211/softmac/ieee80211softmac_assoc.c +++ b/net/ieee80211/softmac/ieee80211softmac_assoc.c @@ -273,8 +273,6 @@ ieee80211softmac_assoc_work(struct work_struct *work) ieee80211softmac_notify(mac->dev, IEEE80211SOFTMAC_EVENT_SCAN_FINISHED, ieee80211softmac_assoc_notify_scan, NULL); if (ieee80211softmac_start_scan(mac)) { dprintk(KERN_INFO PFX "Associate: failed to initiate scan. Is device up?\n"); - mac->associnfo.associating = 0; - mac->associnfo.associated = 0; } goto out; } else { diff --git a/net/ieee80211/softmac/ieee80211softmac_wx.c b/net/ieee80211/softmac/ieee80211softmac_wx.c index d054e92..442b987 100644 --- a/net/ieee80211/softmac/ieee80211softmac_wx.c +++ b/net/ieee80211/softmac/ieee80211softmac_wx.c @@ -70,44 +70,30 @@ ieee80211softmac_wx_set_essid(struct net_device *net_dev, char *extra) { struct ieee80211softmac_device *sm = ieee80211_priv(net_dev); - struct ieee80211softmac_network *n; struct ieee80211softmac_auth_queue_item *authptr; int length = 0; check_assoc_again: mutex_lock(&sm->associnfo.mutex); - /* Check if we're already associating to this or another network - * If it's another network, cancel and start over with our new network - * If it's our network, ignore the change, we're already doing it! 
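The one-line mm/hugetlb.c fix above is easy to read past: clear_user_highpage() takes the user-visible virtual address so that architectures with virtually-indexed caches can pick the right cache colour and flush correctly, which means each subpage of a huge page must receive its own address rather than the huge page's base. Restating the corrected loop with that rationale as comments (same code as the patch):

for (i = 0; i < (HPAGE_SIZE / PAGE_SIZE); i++) {
        cond_resched();
        /* addr + i * PAGE_SIZE is the user address of *this* subpage;
         * passing plain 'addr' for every iteration defeated the
         * address-based cache handling on such machines. */
        clear_user_highpage(page + i, addr + i * PAGE_SIZE);
}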
- */ if((sm->associnfo.associating || sm->associnfo.associated) && (data->essid.flags && data->essid.length)) { - /* Get the associating network */ - n = ieee80211softmac_get_network_by_bssid(sm, sm->associnfo.bssid); - if(n && n->essid.len == data->essid.length && - !memcmp(n->essid.data, extra, n->essid.len)) { - dprintk(KERN_INFO PFX "Already associating or associated to "MAC_FMT"\n", - MAC_ARG(sm->associnfo.bssid)); - goto out; - } else { - dprintk(KERN_INFO PFX "Canceling existing associate request!\n"); - /* Cancel assoc work */ - cancel_delayed_work(&sm->associnfo.work); - /* We don't have to do this, but it's a little cleaner */ - list_for_each_entry(authptr, &sm->auth_queue, list) - cancel_delayed_work(&authptr->work); - sm->associnfo.bssvalid = 0; - sm->associnfo.bssfixed = 0; - sm->associnfo.associating = 0; - sm->associnfo.associated = 0; - /* We must unlock to avoid deadlocks with the assoc workqueue - * on the associnfo.mutex */ - mutex_unlock(&sm->associnfo.mutex); - flush_scheduled_work(); - /* Avoid race! Check assoc status again. Maybe someone started an - * association while we flushed. */ - goto check_assoc_again; - } + dprintk(KERN_INFO PFX "Canceling existing associate request!\n"); + /* Cancel assoc work */ + cancel_delayed_work(&sm->associnfo.work); + /* We don't have to do this, but it's a little cleaner */ + list_for_each_entry(authptr, &sm->auth_queue, list) + cancel_delayed_work(&authptr->work); + sm->associnfo.bssvalid = 0; + sm->associnfo.bssfixed = 0; + sm->associnfo.associating = 0; + sm->associnfo.associated = 0; + /* We must unlock to avoid deadlocks with the assoc workqueue + * on the associnfo.mutex */ + mutex_unlock(&sm->associnfo.mutex); + flush_scheduled_work(); + /* Avoid race! Check assoc status again. Maybe someone started an + * association while we flushed. */ + goto check_assoc_again; } sm->associnfo.static_essid = 0; @@ -153,13 +139,13 @@ ieee80211softmac_wx_get_essid(struct net_device *net_dev, data->essid.length = sm->associnfo.req_essid.len; data->essid.flags = 1; /* active */ memcpy(extra, sm->associnfo.req_essid.data, sm->associnfo.req_essid.len); - } - + dprintk(KERN_INFO PFX "Getting essid from req_essid\n"); + } else if (sm->associnfo.associated || sm->associnfo.associating) { /* If we're associating/associated, return that */ - if (sm->associnfo.associated || sm->associnfo.associating) { data->essid.length = sm->associnfo.associate_essid.len; data->essid.flags = 1; /* active */ memcpy(extra, sm->associnfo.associate_essid.data, sm->associnfo.associate_essid.len); + dprintk(KERN_INFO PFX "Getting essid from associate_essid\n"); } mutex_unlock(&sm->associnfo.mutex); diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 9c94627..e089a97 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -833,8 +833,7 @@ static struct tcp_md5sig_key * return NULL; for (i = 0; i < tp->md5sig_info->entries4; i++) { if (tp->md5sig_info->keys4[i].addr == addr) - return (struct tcp_md5sig_key *) - &tp->md5sig_info->keys4[i]; + return &tp->md5sig_info->keys4[i].base; } return NULL; } @@ -865,9 +864,9 @@ int tcp_v4_md5_do_add(struct sock *sk, __be32 addr, key = (struct tcp4_md5sig_key *)tcp_v4_md5_do_lookup(sk, addr); if (key) { /* Pre-existing entry - just update that one. 
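The tcp_ipv4.c hunks here stop casting a struct tcp4_md5sig_key to struct tcp_md5sig_key and instead embed the generic key as a leading 'base' member (matching the include/net/tcp.h change earlier), so lookups can return &key->base. A self-contained sketch of the embedding pattern:

#include <stdio.h>
#include <stdint.h>

struct md5_key { uint8_t *key; uint16_t keylen; };

struct md5_key4 {
        struct md5_key base;    /* first member: &k->base needs no cast */
        uint32_t addr;
};

static struct md5_key *lookup(struct md5_key4 *k)
{
        return &k->base;        /* was: (struct md5_key *)k, a layout pun */
}

int main(void)
{
        struct md5_key4 k = { { NULL, 0 }, 0x7f000001 };

        printf("keylen via base: %u\n", (unsigned)lookup(&k)->keylen);
        return 0;
}

The old cast only worked because the generic fields happened to sit first; the explicit member states that requirement in the type itself and keeps working if fields are later reordered.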
*/ - kfree(key->key); - key->key = newkey; - key->keylen = newkeylen; + kfree(key->base.key); + key->base.key = newkey; + key->base.keylen = newkeylen; } else { struct tcp_md5sig_info *md5sig; @@ -906,9 +905,9 @@ int tcp_v4_md5_do_add(struct sock *sk, __be32 addr, md5sig->alloced4++; } md5sig->entries4++; - md5sig->keys4[md5sig->entries4 - 1].addr = addr; - md5sig->keys4[md5sig->entries4 - 1].key = newkey; - md5sig->keys4[md5sig->entries4 - 1].keylen = newkeylen; + md5sig->keys4[md5sig->entries4 - 1].addr = addr; + md5sig->keys4[md5sig->entries4 - 1].base.key = newkey; + md5sig->keys4[md5sig->entries4 - 1].base.keylen = newkeylen; } return 0; } @@ -930,7 +929,7 @@ int tcp_v4_md5_do_del(struct sock *sk, __be32 addr) for (i = 0; i < tp->md5sig_info->entries4; i++) { if (tp->md5sig_info->keys4[i].addr == addr) { /* Free the key */ - kfree(tp->md5sig_info->keys4[i].key); + kfree(tp->md5sig_info->keys4[i].base.key); tp->md5sig_info->entries4--; if (tp->md5sig_info->entries4 == 0) { @@ -964,7 +963,7 @@ static void tcp_v4_clear_md5_list(struct sock *sk) if (tp->md5sig_info->entries4) { int i; for (i = 0; i < tp->md5sig_info->entries4; i++) - kfree(tp->md5sig_info->keys4[i].key); + kfree(tp->md5sig_info->keys4[i].base.key); tp->md5sig_info->entries4 = 0; tcp_free_md5sig_pool(); } diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 0f7defb..3e06799 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -539,7 +539,7 @@ static struct tcp_md5sig_key *tcp_v6_md5_do_lookup(struct sock *sk, for (i = 0; i < tp->md5sig_info->entries6; i++) { if (ipv6_addr_cmp(&tp->md5sig_info->keys6[i].addr, addr) == 0) - return (struct tcp_md5sig_key *)&tp->md5sig_info->keys6[i]; + return &tp->md5sig_info->keys6[i].base; } return NULL; } @@ -567,9 +567,9 @@ static int tcp_v6_md5_do_add(struct sock *sk, struct in6_addr *peer, key = (struct tcp6_md5sig_key*) tcp_v6_md5_do_lookup(sk, peer); if (key) { /* modify existing entry - just update that one */ - kfree(key->key); - key->key = newkey; - key->keylen = newkeylen; + kfree(key->base.key); + key->base.key = newkey; + key->base.keylen = newkeylen; } else { /* reallocate new list if current one is full. 
*/ if (!tp->md5sig_info) { @@ -603,8 +603,8 @@ static int tcp_v6_md5_do_add(struct sock *sk, struct in6_addr *peer, ipv6_addr_copy(&tp->md5sig_info->keys6[tp->md5sig_info->entries6].addr, peer); - tp->md5sig_info->keys6[tp->md5sig_info->entries6].key = newkey; - tp->md5sig_info->keys6[tp->md5sig_info->entries6].keylen = newkeylen; + tp->md5sig_info->keys6[tp->md5sig_info->entries6].base.key = newkey; + tp->md5sig_info->keys6[tp->md5sig_info->entries6].base.keylen = newkeylen; tp->md5sig_info->entries6++; } @@ -626,7 +626,7 @@ static int tcp_v6_md5_do_del(struct sock *sk, struct in6_addr *peer) for (i = 0; i < tp->md5sig_info->entries6; i++) { if (ipv6_addr_cmp(&tp->md5sig_info->keys6[i].addr, peer) == 0) { /* Free the key */ - kfree(tp->md5sig_info->keys6[i].key); + kfree(tp->md5sig_info->keys6[i].base.key); tp->md5sig_info->entries6--; if (tp->md5sig_info->entries6 == 0) { @@ -657,7 +657,7 @@ static void tcp_v6_clear_md5_list (struct sock *sk) if (tp->md5sig_info->entries6) { for (i = 0; i < tp->md5sig_info->entries6; i++) - kfree(tp->md5sig_info->keys6[i].key); + kfree(tp->md5sig_info->keys6[i].base.key); tp->md5sig_info->entries6 = 0; tcp_free_md5sig_pool(); } @@ -668,7 +668,7 @@ static void tcp_v6_clear_md5_list (struct sock *sk) if (tp->md5sig_info->entries4) { for (i = 0; i < tp->md5sig_info->entries4; i++) - kfree(tp->md5sig_info->keys4[i].key); + kfree(tp->md5sig_info->keys4[i].base.key); tp->md5sig_info->entries4 = 0; tcp_free_md5sig_pool(); } diff --git a/net/mac80211/ieee80211.c b/net/mac80211/ieee80211.c index 7286c38..ff2172f 100644 --- a/net/mac80211/ieee80211.c +++ b/net/mac80211/ieee80211.c @@ -5259,7 +5259,7 @@ static void __exit ieee80211_exit(void) } -module_init(ieee80211_init); +subsys_initcall(ieee80211_init); module_exit(ieee80211_exit); MODULE_DESCRIPTION("IEEE 802.11 subsystem"); diff --git a/net/mac80211/rc80211_simple.c b/net/mac80211/rc80211_simple.c index f6780d6..17b9f46 100644 --- a/net/mac80211/rc80211_simple.c +++ b/net/mac80211/rc80211_simple.c @@ -431,7 +431,7 @@ static void __exit rate_control_simple_exit(void) } -module_init(rate_control_simple_init); +subsys_initcall(rate_control_simple_init); module_exit(rate_control_simple_exit); MODULE_DESCRIPTION("Simple rate control algorithm for ieee80211"); diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c index 89ce815..7ab82b3 100644 --- a/net/mac80211/wme.c +++ b/net/mac80211/wme.c @@ -424,7 +424,7 @@ static int wme_qdiscop_init(struct Qdisc *qd, struct rtattr *opt) skb_queue_head_init(&q->requeued[i]); q->queues[i] = qdisc_create_dflt(qd->dev, &pfifo_qdisc_ops, qd->handle); - if (q->queues[i] == 0) { + if (!q->queues[i]) { q->queues[i] = &noop_qdisc; printk(KERN_ERR "%s child qdisc %i creation failed", dev->name, i); } diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c index e185a5b..2351533 100644 --- a/net/netfilter/nfnetlink_log.c +++ b/net/netfilter/nfnetlink_log.c @@ -58,7 +58,6 @@ struct nfulnl_instance { unsigned int qlen; /* number of nlmsgs in skb */ struct sk_buff *skb; /* pre-allocatd skb */ - struct nlmsghdr *lastnlh; /* netlink header of last msg in skb */ struct timer_list timer; int peer_pid; /* PID of the peer process */ @@ -345,10 +344,12 @@ static struct sk_buff *nfulnl_alloc_skb(unsigned int inst_size, static int __nfulnl_send(struct nfulnl_instance *inst) { - int status; + int status = -1; if (inst->qlen > 1) - inst->lastnlh->nlmsg_type = NLMSG_DONE; + NLMSG_PUT(inst->skb, 0, 0, + NLMSG_DONE, + sizeof(struct nfgenmsg)); status = nfnetlink_unicast(inst->skb, 
inst->peer_pid, MSG_DONTWAIT); if (status < 0) { @@ -358,8 +359,8 @@ __nfulnl_send(struct nfulnl_instance *inst) inst->qlen = 0; inst->skb = NULL; - inst->lastnlh = NULL; +nlmsg_failure: return status; } @@ -538,7 +539,6 @@ __build_packet_message(struct nfulnl_instance *inst, } nlh->nlmsg_len = inst->skb->tail - old_tail; - inst->lastnlh = nlh; return 0; nlmsg_failure: @@ -644,7 +644,8 @@ nfulnl_log_packet(unsigned int pf, } if (inst->qlen >= qthreshold || - (inst->skb && size > skb_tailroom(inst->skb))) { + (inst->skb && size > + skb_tailroom(inst->skb) - sizeof(struct nfgenmsg))) { /* either the queue len is too high or we don't have * enough room in the skb left. flush to userspace. */ UDEBUG("flushing old skb\n"); diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c index 9579573..b542c87 100644 --- a/net/sched/sch_sfq.c +++ b/net/sched/sch_sfq.c @@ -19,6 +19,7 @@ #include <linux/init.h> #include <linux/ipv6.h> #include <linux/skbuff.h> +#include <linux/jhash.h> #include <net/ip.h> #include <net/netlink.h> #include <net/pkt_sched.h> @@ -95,7 +96,7 @@ struct sfq_sched_data /* Variables */ struct timer_list perturb_timer; - int perturbation; + u32 perturbation; sfq_index tail; /* Index of current slot in round */ sfq_index max_depth; /* Maximal depth */ @@ -109,12 +110,7 @@ struct sfq_sched_data static __inline__ unsigned sfq_fold_hash(struct sfq_sched_data *q, u32 h, u32 h1) { - int pert = q->perturbation; - - /* Have we any rotation primitives? If not, WHY? */ - h ^= (h1<<pert) ^ (h1>>(0x1F - pert)); - h ^= h>>10; - return h & 0x3FF; + return jhash_2words(h, h1, q->perturbation) & (SFQ_HASH_DIVISOR - 1); } static unsigned sfq_hash(struct sfq_sched_data *q, struct sk_buff *skb) @@ -256,6 +252,13 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc* sch) q->ht[hash] = x = q->dep[SFQ_DEPTH].next; q->hash[x] = hash; } + /* If selected queue has length q->limit, this means that + * all other queues are empty and that we do simple tail drop, + * i.e. drop _this_ packet. + */ + if (q->qs[x].qlen >= q->limit) + return qdisc_drop(skb, sch); + sch->qstats.backlog += skb->len; __skb_queue_tail(&q->qs[x], skb); sfq_inc(q, x); @@ -270,7 +273,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc* sch) q->tail = x; } } - if (++sch->q.qlen < q->limit-1) { + if (++sch->q.qlen <= q->limit) { sch->bstats.bytes += skb->len; sch->bstats.packets++; return 0; @@ -294,6 +297,19 @@ sfq_requeue(struct sk_buff *skb, struct Qdisc* sch) } sch->qstats.backlog += skb->len; __skb_queue_head(&q->qs[x], skb); + /* If selected queue has length q->limit+1, this means that + * all other queues are empty and we do simple tail drop. + * This packet is still requeued at head of queue, tail packet + * is dropped.
+ */ + if (q->qs[x].qlen > q->limit) { + skb = q->qs[x].prev; + __skb_unlink(skb, &q->qs[x]); + sch->qstats.drops++; + sch->qstats.backlog -= skb->len; + kfree_skb(skb); + return NET_XMIT_CN; + } sfq_inc(q, x); if (q->qs[x].qlen == 1) { /* The flow is new */ if (q->tail == SFQ_DEPTH) { /* It is the first flow */ @@ -306,7 +322,7 @@ sfq_requeue(struct sk_buff *skb, struct Qdisc* sch) q->tail = x; } } - if (++sch->q.qlen < q->limit - 1) { + if (++sch->q.qlen <= q->limit) { sch->qstats.requeues++; return 0; } @@ -370,12 +386,10 @@ static void sfq_perturbation(unsigned long arg) struct Qdisc *sch = (struct Qdisc*)arg; struct sfq_sched_data *q = qdisc_priv(sch); - q->perturbation = net_random()&0x1F; + get_random_bytes(&q->perturbation, 4); - if (q->perturb_period) { - q->perturb_timer.expires = jiffies + q->perturb_period; - add_timer(&q->perturb_timer); - } + if (q->perturb_period) + mod_timer(&q->perturb_timer, jiffies + q->perturb_period); } static int sfq_change(struct Qdisc *sch, struct rtattr *opt) @@ -391,17 +405,17 @@ static int sfq_change(struct Qdisc *sch, struct rtattr *opt) q->quantum = ctl->quantum ? : psched_mtu(sch->dev); q->perturb_period = ctl->perturb_period*HZ; if (ctl->limit) - q->limit = min_t(u32, ctl->limit, SFQ_DEPTH); + q->limit = min_t(u32, ctl->limit, SFQ_DEPTH - 1); qlen = sch->q.qlen; - while (sch->q.qlen >= q->limit-1) + while (sch->q.qlen > q->limit) sfq_drop(sch); qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen); del_timer(&q->perturb_timer); if (q->perturb_period) { - q->perturb_timer.expires = jiffies + q->perturb_period; - add_timer(&q->perturb_timer); + mod_timer(&q->perturb_timer, jiffies + q->perturb_period); + get_random_bytes(&q->perturbation, 4); } sch_tree_unlock(sch); return 0; @@ -423,12 +437,13 @@ static int sfq_init(struct Qdisc *sch, struct rtattr *opt) q->dep[i+SFQ_DEPTH].next = i+SFQ_DEPTH; q->dep[i+SFQ_DEPTH].prev = i+SFQ_DEPTH; } - q->limit = SFQ_DEPTH; + q->limit = SFQ_DEPTH - 1; q->max_depth = 0; q->tail = SFQ_DEPTH; if (opt == NULL) { q->quantum = psched_mtu(sch->dev); q->perturb_period = 0; + get_random_bytes(&q->perturbation, 4); } else { int err = sfq_change(sch, opt); if (err) diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c index d35cbf5..dfffa94 100644 --- a/net/sctp/bind_addr.c +++ b/net/sctp/bind_addr.c @@ -181,7 +181,7 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new, * structure. */ int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr, - void (*rcu_call)(struct rcu_head *head, + void fastcall (*rcu_call)(struct rcu_head *head, void (*func)(struct rcu_head *head))) { struct sctp_sockaddr_entry *addr, *temp; diff --git a/net/sctp/input.c b/net/sctp/input.c index 47e5601..f9a0c92 100644 --- a/net/sctp/input.c +++ b/net/sctp/input.c @@ -622,6 +622,14 @@ static int sctp_rcv_ootb(struct sk_buff *skb) if (SCTP_CID_SHUTDOWN_COMPLETE == ch->type) goto discard; + /* RFC 4460, 2.11.2 + * This will discard packets with INIT chunk bundled as + * subsequent chunks in the packet. When INIT is first, + * the normal INIT processing will discard the chunk. + */ + if (SCTP_CID_INIT == ch->type && (void *)ch != skb->data) + goto discard; + /* RFC 8.4, 7) If the packet contains a "Stale cookie" ERROR * or a COOKIE ACK the SCTP Packet should be silently * discarded. 
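The net/sctp/input.c hunk above enforces that an INIT chunk is only honoured when it is the first chunk in the packet: the walker compares the chunk header's address with the start of the SCTP payload and discards bundled INITs. A runnable sketch of that positional test (illustrative types, not kernel code):

#include <stdio.h>

#define CID_INIT 1

struct chunkhdr { unsigned char type, flags; unsigned short length; };

/* payload: start of the chunk area; ch: the chunk being inspected */
static int init_position_ok(const void *payload, const struct chunkhdr *ch)
{
        return ch->type != CID_INIT || (const void *)ch == payload;
}

int main(void)
{
        unsigned char buf[8] = { CID_INIT, 0, 0, 4, CID_INIT, 0, 0, 4 };

        printf("first: %d\n", init_position_ok(buf, (const void *)buf));       /* 1 */
        printf("bundled: %d\n", init_position_ok(buf, (const void *)(buf + 4))); /* 0 */
        return 0;
}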
diff --git a/net/sctp/inqueue.c b/net/sctp/inqueue.c index 88aa224..e4ea7fd 100644 --- a/net/sctp/inqueue.c +++ b/net/sctp/inqueue.c @@ -130,6 +130,14 @@ struct sctp_chunk *sctp_inq_pop(struct sctp_inq *queue) /* Force chunk->skb->data to chunk->chunk_end. */ skb_pull(chunk->skb, chunk->chunk_end - chunk->skb->data); + + /* Verify that we have at least chunk headers + * worth of buffer left. + */ + if (skb_headlen(chunk->skb) < sizeof(sctp_chunkhdr_t)) { + sctp_chunk_free(chunk); + chunk = queue->in_progress = NULL; + } } } diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c index 2e34220..23ae37e 100644 --- a/net/sctp/sm_make_chunk.c +++ b/net/sctp/sm_make_chunk.c @@ -2499,6 +2499,52 @@ static __be16 sctp_process_asconf_param(struct sctp_association *asoc, return SCTP_ERROR_NO_ERROR; } +/* Verify the ASCONF packet before we process it. */ +int sctp_verify_asconf(const struct sctp_association *asoc, + struct sctp_paramhdr *param_hdr, void *chunk_end, + struct sctp_paramhdr **errp) { + sctp_addip_param_t *asconf_param; + union sctp_params param; + int length, plen; + + param.v = (sctp_paramhdr_t *) param_hdr; + while (param.v <= chunk_end - sizeof(sctp_paramhdr_t)) { + length = ntohs(param.p->length); + *errp = param.p; + + if (param.v > chunk_end - length || + length < sizeof(sctp_paramhdr_t)) + return 0; + + switch (param.p->type) { + case SCTP_PARAM_ADD_IP: + case SCTP_PARAM_DEL_IP: + case SCTP_PARAM_SET_PRIMARY: + asconf_param = (sctp_addip_param_t *)param.v; + plen = ntohs(asconf_param->param_hdr.length); + if (plen < sizeof(sctp_addip_param_t) + + sizeof(sctp_paramhdr_t)) + return 0; + break; + case SCTP_PARAM_SUCCESS_REPORT: + case SCTP_PARAM_ADAPTATION_LAYER_IND: + if (length != sizeof(sctp_addip_param_t)) + return 0; + + break; + default: + break; + } + + param.v += WORD_ROUND(length); + } + + if (param.v != chunk_end) + return 0; + + return 1; +} + /* Process an incoming ASCONF chunk with the next expected serial no. and * return an ASCONF_ACK chunk to be sent in response. 
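sctp_verify_asconf() above is a bounds-checked TLV walk: each parameter header must fit in the remaining chunk, each declared length must neither be a runt nor overflow the chunk, and after rounding every length up to a 4-byte boundary the cursor has to finish exactly at chunk_end. The same skeleton in portable C (illustrative struct; host byte order where the kernel applies ntohs()):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct tlv { uint16_t type; uint16_t length; };

static int verify_tlvs(const uint8_t *p, const uint8_t *end)
{
        while (p + sizeof(struct tlv) <= end) {
                struct tlv t;
                size_t rounded;

                memcpy(&t, p, sizeof(t));
                if (t.length < sizeof(struct tlv) || t.length > (size_t)(end - p))
                        return 0;       /* runt or overflowing parameter */
                rounded = (t.length + 3u) & ~3u;        /* WORD_ROUND() equivalent */
                if (rounded > (size_t)(end - p))
                        return 0;       /* padding would run past the chunk */
                p += rounded;
        }
        return p == end;        /* trailing garbage is also a violation */
}

int main(void)
{
        uint8_t buf[8] = { 0 };
        struct tlv t = { 1, 8 };        /* one well-formed 8-byte parameter */

        memcpy(buf, &t, sizeof(t));
        printf("ok: %d\n", verify_tlvs(buf, buf + 8));  /* 1 */

        t.length = 12;                  /* claims more than the buffer holds */
        memcpy(buf, &t, sizeof(t));
        printf("bad: %d\n", verify_tlvs(buf, buf + 8)); /* 0 */
        return 0;
}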
*/ diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c index 177528e..a583d67 100644 --- a/net/sctp/sm_statefuns.c +++ b/net/sctp/sm_statefuns.c @@ -90,6 +90,11 @@ static sctp_disposition_t sctp_sf_shut_8_4_5(const struct sctp_endpoint *ep, const sctp_subtype_t type, void *arg, sctp_cmd_seq_t *commands); +static sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep, + const struct sctp_association *asoc, + const sctp_subtype_t type, + void *arg, + sctp_cmd_seq_t *commands); static struct sctp_sackhdr *sctp_sm_pull_sack(struct sctp_chunk *chunk); static sctp_disposition_t sctp_stop_t1_and_abort(sctp_cmd_seq_t *commands, @@ -98,6 +103,7 @@ static sctp_disposition_t sctp_stop_t1_and_abort(sctp_cmd_seq_t *commands, struct sctp_transport *transport); static sctp_disposition_t sctp_sf_abort_violation( + const struct sctp_endpoint *ep, const struct sctp_association *asoc, void *arg, sctp_cmd_seq_t *commands, @@ -111,6 +117,13 @@ static sctp_disposition_t sctp_sf_violation_chunklen( void *arg, sctp_cmd_seq_t *commands); +static sctp_disposition_t sctp_sf_violation_paramlen( + const struct sctp_endpoint *ep, + const struct sctp_association *asoc, + const sctp_subtype_t type, + void *arg, + sctp_cmd_seq_t *commands); + static sctp_disposition_t sctp_sf_violation_ctsn( const struct sctp_endpoint *ep, const struct sctp_association *asoc, @@ -118,6 +131,13 @@ static sctp_disposition_t sctp_sf_violation_ctsn( void *arg, sctp_cmd_seq_t *commands); +static sctp_disposition_t sctp_sf_violation_chunk( + const struct sctp_endpoint *ep, + const struct sctp_association *asoc, + const sctp_subtype_t type, + void *arg, + sctp_cmd_seq_t *commands); + /* Small helper function that checks if the chunk length * is of the appropriate length. The 'required_length' argument * is set to be the size of a specific chunk we are testing. @@ -181,16 +201,21 @@ sctp_disposition_t sctp_sf_do_4_C(const struct sctp_endpoint *ep, struct sctp_chunk *chunk = arg; struct sctp_ulpevent *ev; + if (!sctp_vtag_verify_either(chunk, asoc)) + return sctp_sf_pdiscard(ep, asoc, type, arg, commands); + /* RFC 2960 6.10 Bundling * * An endpoint MUST NOT bundle INIT, INIT ACK or * SHUTDOWN COMPLETE with any other chunks. */ if (!chunk->singleton) - return SCTP_DISPOSITION_VIOLATION; + return sctp_sf_violation_chunk(ep, asoc, type, arg, commands); - if (!sctp_vtag_verify_either(chunk, asoc)) - return sctp_sf_pdiscard(ep, asoc, type, arg, commands); + /* Make sure that the SHUTDOWN_COMPLETE chunk has a valid length. */ + if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); /* RFC 2960 10.2 SCTP-to-ULP * @@ -450,17 +475,17 @@ sctp_disposition_t sctp_sf_do_5_1C_ack(const struct sctp_endpoint *ep, if (!sctp_vtag_verify(chunk, asoc)) return sctp_sf_pdiscard(ep, asoc, type, arg, commands); - /* Make sure that the INIT-ACK chunk has a valid length */ - if (!sctp_chunk_length_valid(chunk, sizeof(sctp_initack_chunk_t))) - return sctp_sf_violation_chunklen(ep, asoc, type, arg, - commands); /* 6.10 Bundling * An endpoint MUST NOT bundle INIT, INIT ACK or * SHUTDOWN COMPLETE with any other chunks. */ if (!chunk->singleton) - return SCTP_DISPOSITION_VIOLATION; + return sctp_sf_violation_chunk(ep, asoc, type, arg, commands); + /* Make sure that the INIT-ACK chunk has a valid length */ + if (!sctp_chunk_length_valid(chunk, sizeof(sctp_initack_chunk_t))) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); /* Grab the INIT header. 
*/ chunk->subh.init_hdr = (sctp_inithdr_t *) chunk->skb->data; @@ -585,7 +610,7 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(const struct sctp_endpoint *ep, * control endpoint, respond with an ABORT. */ if (ep == sctp_sk((sctp_get_ctl_sock()))->ep) - return sctp_sf_ootb(ep, asoc, type, arg, commands); + return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands); /* Make sure that the COOKIE_ECHO chunk has a valid length. * In this case, we check that we have enough for at least a @@ -2496,6 +2521,11 @@ sctp_disposition_t sctp_sf_do_9_2_reshutack(const struct sctp_endpoint *ep, struct sctp_chunk *chunk = (struct sctp_chunk *) arg; struct sctp_chunk *reply; + /* Make sure that the chunk has a valid length */ + if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); + /* Since we are not going to really process this INIT, there * is no point in verifying chunk boundaries. Just generate * the SHUTDOWN ACK. @@ -2929,7 +2959,7 @@ sctp_disposition_t sctp_sf_eat_sack_6_2(const struct sctp_endpoint *ep, * * The return value is the disposition of the chunk. */ -sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep, +static sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep, const struct sctp_association *asoc, const sctp_subtype_t type, void *arg, @@ -2965,6 +2995,7 @@ sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep, SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS); + sctp_sf_pdiscard(ep, asoc, type, arg, commands); return SCTP_DISPOSITION_CONSUME; } @@ -3125,14 +3156,14 @@ sctp_disposition_t sctp_sf_ootb(const struct sctp_endpoint *ep, ch = (sctp_chunkhdr_t *) chunk->chunk_hdr; do { - /* Break out if chunk length is less then minimal. */ + /* Report violation if the chunk is less than minimal */ if (ntohs(ch->length) < sizeof(sctp_chunkhdr_t)) - break; - - ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length)); - if (ch_end > skb_tail_pointer(skb)) - break; + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); + /* Now that we know we at least have a chunk header, + * do things that are type appropriate. + */ if (SCTP_CID_SHUTDOWN_ACK == ch->type) ootb_shut_ack = 1; @@ -3144,15 +3175,19 @@ sctp_disposition_t sctp_sf_ootb(const struct sctp_endpoint *ep, if (SCTP_CID_ABORT == ch->type) return sctp_sf_pdiscard(ep, asoc, type, arg, commands); + /* Report violation if chunk len overflows */ + ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length)); + if (ch_end > skb_tail_pointer(skb)) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); + ch = (sctp_chunkhdr_t *) ch_end; } while (ch_end < skb_tail_pointer(skb)); if (ootb_shut_ack) - sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands); + return sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands); else - sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands); - - return sctp_sf_pdiscard(ep, asoc, type, arg, commands); + return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands); } /* @@ -3218,7 +3253,11 @@ static sctp_disposition_t sctp_sf_shut_8_4_5(const struct sctp_endpoint *ep, if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) return sctp_sf_pdiscard(ep, asoc, type, arg, commands); - return SCTP_DISPOSITION_CONSUME; + /* We need to discard the rest of the packet to prevent + * potential bombing attacks from additional bundled chunks. + * This is documented in SCTP Threats ID.
+ */ + return sctp_sf_pdiscard(ep, asoc, type, arg, commands); } return SCTP_DISPOSITION_NOMEM; @@ -3241,6 +3280,13 @@ sctp_disposition_t sctp_sf_do_8_5_1_E_sa(const struct sctp_endpoint *ep, void *arg, sctp_cmd_seq_t *commands) { + struct sctp_chunk *chunk = arg; + + /* Make sure that the SHUTDOWN_ACK chunk has a valid length. */ + if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); + /* Although we do have an association in this case, it corresponds * to a restarted association. So the packet is treated as an OOTB * packet and the state function that handles OOTB SHUTDOWN_ACK is @@ -3257,8 +3303,11 @@ sctp_disposition_t sctp_sf_do_asconf(const struct sctp_endpoint *ep, { struct sctp_chunk *chunk = arg; struct sctp_chunk *asconf_ack = NULL; + struct sctp_paramhdr *err_param = NULL; sctp_addiphdr_t *hdr; + union sctp_addr_param *addr_param; __u32 serial; + int length; if (!sctp_vtag_verify(chunk, asoc)) { sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_BAD_TAG, @@ -3274,6 +3323,20 @@ sctp_disposition_t sctp_sf_do_asconf(const struct sctp_endpoint *ep, hdr = (sctp_addiphdr_t *)chunk->skb->data; serial = ntohl(hdr->serial); + addr_param = (union sctp_addr_param *)hdr->params; + length = ntohs(addr_param->p.length); + if (length < sizeof(sctp_paramhdr_t)) + return sctp_sf_violation_paramlen(ep, asoc, type, + (void *)addr_param, commands); + + /* Verify the ASCONF chunk before processing it. */ + if (!sctp_verify_asconf(asoc, + (sctp_paramhdr_t *)((void *)addr_param + length), + (void *)chunk->chunk_end, + &err_param)) + return sctp_sf_violation_paramlen(ep, asoc, type, + (void *)&err_param, commands); + /* ADDIP 4.2 C1) Compare the value of the serial number to the value * the endpoint stored in a new association variable * 'Peer-Serial-Number'. @@ -3328,6 +3391,7 @@ sctp_disposition_t sctp_sf_do_asconf_ack(const struct sctp_endpoint *ep, struct sctp_chunk *asconf_ack = arg; struct sctp_chunk *last_asconf = asoc->addip_last_asconf; struct sctp_chunk *abort; + struct sctp_paramhdr *err_param = NULL; sctp_addiphdr_t *addip_hdr; __u32 sent_serial, rcvd_serial; @@ -3345,6 +3409,14 @@ sctp_disposition_t sctp_sf_do_asconf_ack(const struct sctp_endpoint *ep, addip_hdr = (sctp_addiphdr_t *)asconf_ack->skb->data; rcvd_serial = ntohl(addip_hdr->serial); + /* Verify the ASCONF-ACK chunk before processing it. */ + if (!sctp_verify_asconf(asoc, + (sctp_paramhdr_t *)addip_hdr->params, + (void *)asconf_ack->chunk_end, + &err_param)) + return sctp_sf_violation_paramlen(ep, asoc, type, + (void *)&err_param, commands); + if (last_asconf) { addip_hdr = (sctp_addiphdr_t *)last_asconf->subh.addip_hdr; sent_serial = ntohl(addip_hdr->serial); @@ -3655,6 +3727,16 @@ sctp_disposition_t sctp_sf_discard_chunk(const struct sctp_endpoint *ep, void *arg, sctp_cmd_seq_t *commands) { + struct sctp_chunk *chunk = arg; + + /* Make sure that the chunk has a valid length. + * Since we don't know the chunk type, we use a general + * chunkhdr structure to make a comparison. + */ + if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); + SCTP_DEBUG_PRINTK("Chunk %d is discarded\n", type.chunk); return SCTP_DISPOSITION_DISCARD; } @@ -3710,6 +3792,13 @@ sctp_disposition_t sctp_sf_violation(const struct sctp_endpoint *ep, void *arg, sctp_cmd_seq_t *commands) { + struct sctp_chunk *chunk = arg; + + /* Make sure that the chunk has a valid length. 
*/ + if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) + return sctp_sf_violation_chunklen(ep, asoc, type, arg, + commands); + return SCTP_DISPOSITION_VIOLATION; } @@ -3717,12 +3806,14 @@ sctp_disposition_t sctp_sf_violation(const struct sctp_endpoint *ep, * Common function to handle a protocol violation. */ static sctp_disposition_t sctp_sf_abort_violation( + const struct sctp_endpoint *ep, const struct sctp_association *asoc, void *arg, sctp_cmd_seq_t *commands, const __u8 *payload, const size_t paylen) { + struct sctp_packet *packet = NULL; struct sctp_chunk *chunk = arg; struct sctp_chunk *abort = NULL; @@ -3731,30 +3822,51 @@ static sctp_disposition_t sctp_sf_abort_violation( if (!abort) goto nomem; - sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort)); - SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS); + if (asoc) { + sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort)); + SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS); - if (asoc->state <= SCTP_STATE_COOKIE_ECHOED) { - sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, - SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT)); - sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, - SCTP_ERROR(ECONNREFUSED)); - sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED, - SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION)); + if (asoc->state <= SCTP_STATE_COOKIE_ECHOED) { + sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, + SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT)); + sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, + SCTP_ERROR(ECONNREFUSED)); + sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED, + SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION)); + } else { + sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, + SCTP_ERROR(ECONNABORTED)); + sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED, + SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION)); + SCTP_DEC_STATS(SCTP_MIB_CURRESTAB); + } } else { - sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR, - SCTP_ERROR(ECONNABORTED)); - sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED, - SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION)); - SCTP_DEC_STATS(SCTP_MIB_CURRESTAB); + packet = sctp_ootb_pkt_new(asoc, chunk); + + if (!packet) + goto nomem_pkt; + + if (sctp_test_T_bit(abort)) + packet->vtag = ntohl(chunk->sctp_hdr->vtag); + + abort->skb->sk = ep->base.sk; + + sctp_packet_append_chunk(packet, abort); + + sctp_add_cmd_sf(commands, SCTP_CMD_SEND_PKT, + SCTP_PACKET(packet)); + + SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS); } - sctp_add_cmd_sf(commands, SCTP_CMD_DISCARD_PACKET, SCTP_NULL()); + sctp_sf_pdiscard(ep, asoc, SCTP_ST_CHUNK(0), arg, commands); SCTP_INC_STATS(SCTP_MIB_ABORTEDS); return SCTP_DISPOSITION_ABORT; +nomem_pkt: + sctp_chunk_free(abort); nomem: return SCTP_DISPOSITION_NOMEM; } @@ -3787,7 +3899,24 @@ static sctp_disposition_t sctp_sf_violation_chunklen( { char err_str[]="The following chunk had invalid length:"; - return sctp_sf_abort_violation(asoc, arg, commands, err_str, + return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str, + sizeof(err_str)); +} + +/* + * Handle a protocol violation when the parameter length is invalid. + * "Invalid" length is identified as smaller then the minimal length a + * given parameter can be. 
+ */ +static sctp_disposition_t sctp_sf_violation_paramlen( + const struct sctp_endpoint *ep, + const struct sctp_association *asoc, + const sctp_subtype_t type, + void *arg, + sctp_cmd_seq_t *commands) { + char err_str[] = "The following parameter had invalid length:"; + + return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str, sizeof(err_str)); } @@ -3806,10 +3935,31 @@ static sctp_disposition_t sctp_sf_violation_ctsn( { char err_str[]="The cumulative tsn ack beyond the max tsn currently sent:"; - return sctp_sf_abort_violation(asoc, arg, commands, err_str, + return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str, sizeof(err_str)); } +/* Handle protocol violation of an invalid chunk bundling. For example, + * when we have an association and we recieve bundled INIT-ACK, or + * SHUDOWN-COMPLETE, our peer is clearly violationg the "MUST NOT bundle" + * statement from the specs. Additinally, there might be an attacker + * on the path and we may not want to continue this communication. + */ +static sctp_disposition_t sctp_sf_violation_chunk( + const struct sctp_endpoint *ep, + const struct sctp_association *asoc, + const sctp_subtype_t type, + void *arg, + sctp_cmd_seq_t *commands) +{ + char err_str[]="The following chunk violates protocol:"; + + if (!asoc) + return sctp_sf_violation(ep, asoc, type, arg, commands); + + return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str, + sizeof(err_str)); +} /*************************************************************************** * These are the state functions for handling primitive (Section 10) events. ***************************************************************************/ @@ -5176,7 +5326,22 @@ static struct sctp_packet *sctp_ootb_pkt_new(const struct sctp_association *asoc * association exists, otherwise, use the peer's vtag. */ if (asoc) { - vtag = asoc->peer.i.init_tag; + /* Special case the INIT-ACK as there is no peer's vtag + * yet. + */ + switch(chunk->chunk_hdr->type) { + case SCTP_CID_INIT_ACK: + { + sctp_initack_chunk_t *initack; + + initack = (sctp_initack_chunk_t *)chunk->chunk_hdr; + vtag = ntohl(initack->init_hdr.init_tag); + break; + } + default: + vtag = asoc->peer.i.init_tag; + break; + } } else { /* Special case the INIT and stale COOKIE_ECHO as there is no * vtag yet. 
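A note on the pattern above: nearly every hunk in sm_statefuns.c adds the same guard, namely validating a chunk header's declared length before trusting anything beyond it. Below is a minimal standalone sketch of that idea, assuming a userspace build; demo_chunkhdr and chunk_hdr_ok are hypothetical names standing in for sctp_chunkhdr_t and sctp_chunk_length_valid(), whose kernel definitions are not reproduced here.

/* Hedged sketch of the length-validation pattern added above.
 * Illustrative userspace code, not the kernel implementation.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <arpa/inet.h>	/* ntohs() */

struct demo_chunkhdr {
	uint8_t  type;
	uint8_t  flags;
	uint16_t length;	/* network byte order, includes this header */
};

/* A chunk is usable only if its declared length covers at least the
 * header we intend to read and does not run past the buffer end. */
static int chunk_hdr_ok(const uint8_t *p, size_t buflen, size_t required)
{
	const struct demo_chunkhdr *ch = (const struct demo_chunkhdr *)p;
	uint16_t len;

	if (buflen < sizeof(*ch))
		return 0;
	len = ntohs(ch->length);
	return len >= required && len <= buflen;
}

int main(void)
{
	/* A SHUTDOWN-ACK-like chunk claiming 4 bytes in a 4-byte buffer. */
	uint8_t buf[4] = { 8, 0, 0, 4 };

	printf("valid: %d\n",
	       chunk_hdr_ok(buf, sizeof(buf), sizeof(struct demo_chunkhdr)));
	return 0;
}

The two checks mirror why sctp_sf_ootb() above needs two separate violation paths: a length smaller than the header and a length overflowing the skb are distinct failure modes, and both must be caught before the walk advances to the next bundled chunk.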
diff --git a/net/sctp/sm_statetable.c b/net/sctp/sm_statetable.c
index 70a91ec..ddb0ba3 100644
--- a/net/sctp/sm_statetable.c
+++ b/net/sctp/sm_statetable.c
@@ -110,7 +110,7 @@ const sctp_sm_table_entry_t *sctp_sm_lookup_event(sctp_event_t event_type,
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -173,7 +173,7 @@ const sctp_sm_table_entry_t *sctp_sm_lookup_event(sctp_event_t event_type,
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -194,7 +194,7 @@ const sctp_sm_table_entry_t *sctp_sm_lookup_event(sctp_event_t event_type,
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -216,7 +216,7 @@ const sctp_sm_table_entry_t *sctp_sm_lookup_event(sctp_event_t event_type,
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_violation), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -258,7 +258,7 @@ const sctp_sm_table_entry_t *sctp_sm_lookup_event(sctp_event_t event_type,
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -300,7 +300,7 @@ const sctp_sm_table_entry_t *sctp_sm_lookup_event(sctp_event_t event_type,
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -499,7 +499,7 @@ static const sctp_sm_table_entry_t addip_chunk_event_table[SCTP_NUM_ADDIP_CHUNK_
	/* SCTP_STATE_EMPTY */ \
	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
	/* SCTP_STATE_COOKIE_WAIT */ \
	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
	/* SCTP_STATE_COOKIE_ECHOED */ \
@@ -528,7 +528,7 @@ chunk_event_table_unknown[SCTP_STATE_NUM_STATES] = {
	/* SCTP_STATE_EMPTY */
	TYPE_SCTP_FUNC(sctp_sf_ootb),
	/* SCTP_STATE_CLOSED */
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8),
+	TYPE_SCTP_FUNC(sctp_sf_ootb),
	/* SCTP_STATE_COOKIE_WAIT */
	TYPE_SCTP_FUNC(sctp_sf_unk_chunk),
	/* SCTP_STATE_COOKIE_ECHOED */
diff --git a/net/socket.c b/net/socket.c
index 7d44453..b09eb90 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -777,9 +777,6 @@ static ssize_t sock_aio_write(struct kiocb *iocb, const struct iovec *iov,
	if (pos != 0)
		return -ESPIPE;

-	if (iocb->ki_left == 0)	/* Match SYS5 behaviour */
-		return 0;
-
	x = alloc_sock_iocb(iocb, &siocb);
	if (!x)
		return -ENOMEM;
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 1a89992..036ab52 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1110,7 +1110,8 @@ svc_tcp_accept(struct svc_sock *svsk)
			       serv->sv_name);
			printk(KERN_NOTICE
			       "%s: last TCP connect from %s\n",
-			       serv->sv_name, buf);
+			       serv->sv_name, __svc_print_addr(sin,
+						buf, sizeof(buf)));
		}
		/*
		 * Always select the oldest socket. It's not fair,
diff --git a/net/wireless/core.c b/net/wireless/core.c
index 7eabd55..9771451 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -213,7 +213,7 @@ out_fail_notifier:
 out_fail_sysfs:
	return err;
}
-module_init(cfg80211_init);
+subsys_initcall(cfg80211_init);

static void cfg80211_exit(void)
{
diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c
index 88aaacd..2d5d225 100644
--- a/net/wireless/sysfs.c
+++ b/net/wireless/sysfs.c
@@ -52,12 +52,14 @@ static void wiphy_dev_release(struct device *dev)
	cfg80211_dev_free(rdev);
}

+#ifdef CONFIG_HOTPLUG
static int wiphy_uevent(struct device *dev, char **envp,
			int num_envp, char *buf, int size)
{
	/* TODO, we probably need stuff here */
	return 0;
}
+#endif

struct class ieee80211_class = {
	.name = "ieee80211",
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 3694662..0753b20 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -316,6 +316,7 @@ static inline int inode_doinit(struct inode *inode)
}

enum {
+	Opt_error = -1,
	Opt_context = 1,
	Opt_fscontext = 2,
	Opt_defcontext = 4,
@@ -327,6 +328,7 @@ static match_table_t tokens = {
	{Opt_fscontext, "fscontext=%s"},
	{Opt_defcontext, "defcontext=%s"},
	{Opt_rootcontext, "rootcontext=%s"},
+	{Opt_error, NULL},
};

#define SEL_MOUNT_FAIL_MSG "SELinux: duplicate or incompatible mount options\n"
diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
index f057430..9b5656d 100644
--- a/sound/core/memalloc.c
+++ b/sound/core/memalloc.c
@@ -27,6 +27,7 @@
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/mm.h>
+#include <linux/seq_file.h>
 #include <asm/uaccess.h>
 #include <linux/dma-mapping.h>
 #include <linux/moduleparam.h>
@@ -481,53 +482,54 @@ static void free_all_reserved_pages(void)
#define SND_MEM_PROC_FILE	"driver/snd-page-alloc"
static struct proc_dir_entry *snd_mem_proc;

-static int snd_mem_proc_read(char *page, char **start, off_t off,
-			     int count, int *eof, void *data)
+static int snd_mem_proc_read(struct seq_file *seq, void *offset)
{
-	int len = 0;
	long pages = snd_allocated_pages >> (PAGE_SHIFT-12);
	struct snd_mem_list *mem;
	int devno;
	static char *types[] = { "UNKNOWN", "CONT", "DEV", "DEV-SG", "SBUS" };

	mutex_lock(&list_mutex);
-	len += snprintf(page + len, count - len,
-			"pages : %li bytes (%li pages per %likB)\n",
-			pages * PAGE_SIZE, pages, PAGE_SIZE / 1024);
+	seq_printf(seq, "pages : %li bytes (%li pages per %likB)\n",
+		   pages * PAGE_SIZE, pages, PAGE_SIZE / 1024);
	devno = 0;
	list_for_each_entry(mem, &mem_list_head, list) {
		devno++;
-		len += snprintf(page + len, count - len,
-				"buffer %d : ID %08x : type %s\n",
-				devno, mem->id, types[mem->buffer.dev.type]);
-		len += snprintf(page + len, count - len,
-				"  addr = 0x%lx, size = %d bytes\n",
-				(unsigned long)mem->buffer.addr, (int)mem->buffer.bytes);
+		seq_printf(seq, "buffer %d : ID %08x : type %s\n",
+			   devno, mem->id, types[mem->buffer.dev.type]);
+		seq_printf(seq, "  addr = 0x%lx, size = %d bytes\n",
+			   (unsigned long)mem->buffer.addr,
+			   (int)mem->buffer.bytes);
	}
	mutex_unlock(&list_mutex);
-	return len;
+	return 0;
+}
+
+static int snd_mem_proc_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, snd_mem_proc_read, NULL);
}

/* FIXME: for pci only - other bus?
 */
#ifdef CONFIG_PCI
#define gettoken(bufp) strsep(bufp, " \t\n")

-static int snd_mem_proc_write(struct file *file, const char __user *buffer,
-			      unsigned long count, void *data)
+static ssize_t snd_mem_proc_write(struct file *file, const char __user * buffer,
+				  size_t count, loff_t * ppos)
{
	char buf[128];
	char *token, *p;

-	if (count > ARRAY_SIZE(buf) - 1)
-		count = ARRAY_SIZE(buf) - 1;
+	if (count > sizeof(buf) - 1)
+		return -EINVAL;
	if (copy_from_user(buf, buffer, count))
		return -EFAULT;
-	buf[ARRAY_SIZE(buf) - 1] = '\0';
+	buf[count] = '\0';

	p = buf;
	token = gettoken(&p);
	if (! token || *token == '#')
-		return (int)count;
+		return count;
	if (strcmp(token, "add") == 0) {
		char *endp;
		int vendor, device, size, buffers;
@@ -548,7 +550,7 @@ static int snd_mem_proc_write(struct file *file, const char __user *buffer,
		    (buffers = simple_strtol(token, NULL, 0)) <= 0 ||
		    buffers > 4) {
			printk(KERN_ERR "snd-page-alloc: invalid proc write format\n");
-			return (int)count;
+			return count;
		}
		vendor &= 0xffff;
		device &= 0xffff;
@@ -560,7 +562,7 @@
			if (pci_set_dma_mask(pci, mask) < 0 ||
			    pci_set_consistent_dma_mask(pci, mask) < 0) {
				printk(KERN_ERR "snd-page-alloc: cannot set DMA mask %lx for pci %04x:%04x\n", mask, vendor, device);
-				return (int)count;
+				return count;
			}
		}
		for (i = 0; i < buffers; i++) {
@@ -570,7 +572,7 @@
					size, &dmab) < 0) {
				printk(KERN_ERR "snd-page-alloc: cannot allocate buffer pages (size = %d)\n", size);
				pci_dev_put(pci);
-				return (int)count;
+				return count;
			}
			snd_dma_reserve_buf(&dmab, snd_dma_pci_buf_id(pci));
		}
@@ -596,9 +598,21 @@
		free_all_reserved_pages();
	else
		printk(KERN_ERR "snd-page-alloc: invalid proc cmd\n");
-	return (int)count;
+	return count;
}
#endif /* CONFIG_PCI */
+
+static const struct file_operations snd_mem_proc_fops = {
+	.owner		= THIS_MODULE,
+	.open		= snd_mem_proc_open,
+	.read		= seq_read,
+#ifdef CONFIG_PCI
+	.write		= snd_mem_proc_write,
+#endif
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
#endif /* CONFIG_PROC_FS */

/*
@@ -609,12 +623,8 @@ static int __init snd_mem_init(void)
{
#ifdef CONFIG_PROC_FS
	snd_mem_proc = create_proc_entry(SND_MEM_PROC_FILE, 0644, NULL);
-	if (snd_mem_proc) {
-		snd_mem_proc->read_proc = snd_mem_proc_read;
-#ifdef CONFIG_PCI
-		snd_mem_proc->write_proc = snd_mem_proc_write;
-#endif
-	}
+	if (snd_mem_proc)
+		snd_mem_proc->proc_fops = &snd_mem_proc_fops;
#endif
	return 0;
}
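The memalloc.c hunks above replace a hand-rolled read_proc handler with the seq_file interface via single_open(). Below is a minimal sketch of the same conversion pattern, written as a hypothetical out-of-tree module for a 2.6.23-era kernel; the demo_* names are invented, but the create_proc_entry() plus proc_fops assignment matches what snd_mem_init() does above.

/* Hedged sketch of the single_open()/seq_file pattern, assuming a
 * 2.6.23-era kernel. All demo_* identifiers are hypothetical.
 */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* The show callback only emits text; seq_file owns the buffering. */
static int demo_show(struct seq_file *seq, void *offset)
{
	seq_printf(seq, "hello from seq_file\n");
	return 0;
}

static int demo_open(struct inode *inode, struct file *file)
{
	return single_open(file, demo_show, NULL);
}

static const struct file_operations demo_fops = {
	.owner		= THIS_MODULE,
	.open		= demo_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static int __init demo_init(void)
{
	struct proc_dir_entry *p;

	p = create_proc_entry("driver/demo-seq", 0444, NULL);
	if (p)
		p->proc_fops = &demo_fops;
	return 0;
}

static void __exit demo_exit(void)
{
	remove_proc_entry("driver/demo-seq", NULL);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

single_open() suits files whose whole contents fit in one show callback; seq_read() then handles offsets and partial reads, which is exactly the bookkeeping the removed snprintf()-into-page code did by hand, and why the converted snd_mem_proc_read() can simply return 0.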