| Commit message | Author | Age | Files | Lines |
| |
Structure the code a bit more and move all PCIe code,
including the hardware configuration files, into a
PCIe-specific subdirectory.
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
| |
Mostly clean up indentation around parentheses
after if statements, function calls, etc., and also
remove a few unneeded line breaks and other minor issues.
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Reviewed-by: Wey-Yi W Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Reviewed-by: Wey-Yi W Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
The logic that allows us to have a short TFD queue was completely wrong.
We do maintain 256 Transmit Frame Descriptors, but they point to
recycled buffers. We used to attach and detach different TFDs for
the same buffer, and it worked only because they pointed to the same buffer.
Also zero the number of BDs after unmapping a TFD. This seems
unnecessary since we don't reclaim the same TFD twice, but I like
housekeeping.
This patch solves this warning:
[ 6427.079855] WARNING: at lib/dma-debug.c:866 check_unmap+0x727/0x7a0()
[ 6427.079859] Hardware name: Latitude E6410
[ 6427.079865] iwlwifi 0000:02:00.0: DMA-API: device driver tries to free DMA memory it has not allocated [device address=0x00000000296d393c] [size=8 bytes]
[ 6427.079870] Modules linked in: ...
[ 6427.079950] Pid: 6613, comm: ifconfig Tainted: G O 3.3.3 #5
[ 6427.079954] Call Trace:
[ 6427.079963] [<c10337a2>] warn_slowpath_common+0x72/0xa0
[ 6427.079982] [<c1033873>] warn_slowpath_fmt+0x33/0x40
[ 6427.079988] [<c12dcb77>] check_unmap+0x727/0x7a0
[ 6427.079995] [<c12dcdaa>] debug_dma_unmap_page+0x5a/0x80
[ 6427.080024] [<fe2312ac>] iwlagn_unmap_tfd+0x12c/0x180 [iwlwifi]
[ 6427.080048] [<fe231349>] iwlagn_txq_free_tfd+0x49/0xb0 [iwlwifi]
[ 6427.080071] [<fe228e37>] iwl_tx_queue_unmap+0x67/0x90 [iwlwifi]
[ 6427.080095] [<fe22d221>] iwl_trans_pcie_stop_device+0x341/0x7b0 [iwlwifi]
[ 6427.080113] [<fe204b0e>] iwl_down+0x17e/0x260 [iwlwifi]
[ 6427.080132] [<fe20efec>] iwlagn_mac_stop+0x6c/0xf0 [iwlwifi]
[ 6427.080168] [<fd8480ce>] ieee80211_stop_device+0x5e/0x190 [mac80211]
[ 6427.080198] [<fd833208>] ieee80211_do_stop+0x288/0x620 [mac80211]
[ 6427.080243] [<fd8335b7>] ieee80211_stop+0x17/0x20 [mac80211]
[ 6427.080250] [<c148dac1>] __dev_close_many+0x81/0xd0
[ 6427.080270] [<c148db3d>] __dev_close+0x2d/0x50
[ 6427.080276] [<c148d152>] __dev_change_flags+0x82/0x150
[ 6427.080282] [<c148e3e3>] dev_change_flags+0x23/0x60
[ 6427.080289] [<c14f6320>] devinet_ioctl+0x6a0/0x770
[ 6427.080296] [<c14f8705>] inet_ioctl+0x95/0xb0
[ 6427.080304] [<c147a0f0>] sock_ioctl+0x70/0x270
Cc: stable@vger.kernel.org
Reported-by: Antonio Quartulli <ordex@autistici.org>
Tested-by: Antonio Quartulli <ordex@autistici.org>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Reviewed-by: Wey-Yi W Guy <wey-yi.w.guy@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
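A minimal standalone sketch of the housekeeping step described above; the struct and helper names (tfd_sketch, unmap_tfd_sketch) are made up for illustration and do not match the driver's actual definitions.

    /* Why the BD count is zeroed after unmapping: a stale count must never
     * lead to the same buffers being unmapped twice (the DMA-API warning
     * above).  Illustrative types only.
     */
    struct tfd_sketch {
        unsigned int num_tbs;  /* number of buffer descriptors in use */
        /* per-buffer DMA addresses/lengths would live here */
    };

    static void unmap_tfd_sketch(struct tfd_sketch *tfd)
    {
        unsigned int i;

        for (i = 0; i < tfd->num_tbs; i++)
            ;  /* dma_unmap_single(...) for buffer i in the real driver */

        tfd->num_tbs = 0;  /* the housekeeping step */
    }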
| |
Since the transport allocates and frees itself in
the transport specific code, there's no need for
virtual functions for it. Remove the free method
and call the correct functions directly.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
Having cmd[], meta[] and skbs[] as separate arrays
in the TX queue structure is cache inefficient, as
we need the data for a given entry together.
To improve this, create an array of a struct holding
these three members (allocating meta as part of that
struct) so the data we need for a given entry is
located together, improving the cache footprint.
The downside is that we need to allocate a lot of
memory in one chunk, about 10 KiB (on 64-bit), which
isn't very efficient.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
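A rough standalone illustration of the layout change described above; the types are simplified stand-ins, not the driver's actual definitions.

    /* Before: three parallel arrays, so the data for entry i is scattered. */
    struct cmd_stub  { int id; };
    struct meta_stub { int flags; };
    struct skb_stub  { void *data; };

    struct txq_before {
        struct cmd_stub *cmd[256];
        struct meta_stub meta[256];
        struct skb_stub *skbs[256];
    };

    /* After: one array of entries; cmd, meta and skb for entry i are adjacent,
     * so touching one tends to bring the others into the same cache lines.
     * The cost is one large allocation for the whole array (~10 KiB).
     */
    struct txq_entry {
        struct cmd_stub *cmd;
        struct meta_stub meta;  /* allocated as part of the entry */
        struct skb_stub *skb;
    };

    struct txq_after {
        struct txq_entry entries[256];
    };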
| |
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
The driver layer now holds a pointer to the transport,
and shrd->drv is not needed any more, so kill it.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
The command strings are needed through the layers for
debug and error messages, but can differ with opmode.
As a result, we need to give the command names to the
transport layer as configuration.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
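A hedged sketch of the configuration idea (names such as trans_config_sketch are hypothetical): the op_mode hands the transport an array of command-name strings, and the transport uses it only for debug and error messages.

    struct trans_config_sketch {
        const char **command_names;  /* indexed by command ID */
        unsigned int n_commands;
    };

    static const char *
    get_cmd_string_sketch(const struct trans_config_sketch *cfg, unsigned int id)
    {
        if (id < cfg->n_commands && cfg->command_names[id])
            return cfg->command_names[id];
        return "UNKNOWN";
    }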
| |
Having a u32 before a potentially 64-bit value is
not very efficient; move it last.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
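A small standalone example of the padding this avoids, assuming a typical 64-bit ABI where 8-byte values are 8-byte aligned (field names hypothetical):

    #include <stdio.h>
    #include <stdint.h>

    /* u32 before the 64-bit value: 4 bytes of padding after 'len' and 4 more
     * at the end, 24 bytes total.
     */
    struct before { uint32_t len; uint64_t addr; uint32_t flags; };

    /* 64-bit value first, u32 members last: no holes, 16 bytes total. */
    struct after { uint64_t addr; uint32_t len; uint32_t flags; };

    int main(void)
    {
        printf("before: %zu bytes, after: %zu bytes\n",
               sizeof(struct before), sizeof(struct after));
        return 0;
    }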
| |
The code has been changed; move the definitions to the proper file
used by the code.
Signed-off-by: Don Fry <donald.h.fry@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
This removes one of the two sources of device
restarts in the upper layer -- those are a bit
inconvenient because normal restarts originate
in the transport. By moving the watchdog down,
it can be treated the same way.
Also rewrite the watchdog logic. Timers are
much more efficient when they never fire, so
instead of firing a timer every 500ms, set up a
timer for each TX queue and fire it only when
the queue is really stuck. This avoids the CPU
waking up when everything is working well.
While at it, remove the wd_disable config item
and replace it by simply setting wd_timeout to
IWL_WATCHDOG_DISABLED (0).
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
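A kernel-style sketch of the per-queue timer pattern described above, using the timer API of that era (setup_timer/mod_timer); names like txq_sketch are made up and the real driver code differs in detail.

    #include <linux/timer.h>
    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct txq_sketch {
        struct timer_list stuck_timer;
        unsigned long wd_timeout;  /* in jiffies; 0 means disabled */
    };

    /* Only runs if the queue made no progress for wd_timeout. */
    static void txq_stuck_timer_sketch(unsigned long data)
    {
        /* treat the queue as stuck: dump state, trigger recovery */
    }

    static void txq_init_sketch(struct txq_sketch *q, unsigned long wd_timeout)
    {
        q->wd_timeout = wd_timeout;
        setup_timer(&q->stuck_timer, txq_stuck_timer_sketch, (unsigned long)q);
    }

    /* On every frame queued: (re)arm the deadline. */
    static void txq_frame_queued_sketch(struct txq_sketch *q)
    {
        if (q->wd_timeout)
            mod_timer(&q->stuck_timer, jiffies + q->wd_timeout);
    }

    /* On reclaim: push the deadline out, or stop the timer when empty. */
    static void txq_frames_reclaimed_sketch(struct txq_sketch *q, bool empty)
    {
        if (!q->wd_timeout)
            return;
        if (empty)
            del_timer(&q->stuck_timer);
        else
            mod_timer(&q->stuck_timer, jiffies + q->wd_timeout);
    }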
| |
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
That way it isn't needed in hw_params, which
is shared data. It also isn't really what we
should configure in the transport; 4k vs. 8k
is a better fit there, so configure a bool and
derive the page order in the transport. This
also means the transport doesn't need access
to the module parameter any more.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
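A tiny standalone illustration of deriving the page order from a single 8k-vs-4k flag, assuming 4 KiB pages (the kernel side would simply use get_order()):

    #include <stdio.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                /* assume 4 KiB pages */
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Smallest order such that (PAGE_SIZE << order) >= size. */
    static unsigned int order_for_size(unsigned long size)
    {
        unsigned int order = 0;

        while ((PAGE_SIZE << order) < size)
            order++;
        return order;
    }

    int main(void)
    {
        bool rx_buf_size_8k = true;      /* the bool the transport is given */
        unsigned long size = rx_buf_size_8k ? 8 * 1024 : 4 * 1024;

        printf("rx page order = %u\n", order_for_size(size));
        return 0;
    }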
| |
Error log reporting does not belong to the
transport layer but to the op_mode loading
the ucode, as it is the entity which knows
which ucode is loaded and what the error
information means.
Move the device logging pointers from the
transport layer to the op_mode.
With this change, the transport layer only
reports an error to the op_mode, which then
figures out what to do with it. This also
causes the driver to dump out error logs
when the command queue is stuck.
Also, move the debugfs entry for event logs
out of the transport layer and into the op_mode.
Signed-off-by: Meenakshi Venkataraman <meenakshi.venkataraman@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
The queue mapping is not only dynamic; it
is also dependent on the uCode, as we can
already see today with dual-mode and
non-dual-mode being different.
Move the queue mapping out of the transport
layer and let the higher layer manage it.
Part of the transport configuration is how
to set up the queues.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
Mohammed Shafi ran into [1] the SEQ_RX_FRAME workaround
warning with a statistics notification; this means we
can't just remove it as we'd hoped.
Abstract it out so that the higher layer can configure
this as a kind of "filter" in the transport.
[1] http://mid.gmane.org/CAD2nsn1_DzbRHuSbS_1rFNzuux_9pW1-pABEasQ01_y7-ndO5w@mail.gmail.com
Reported-by: Mohammed Shafi <shafi.wireless@gmail.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
The command queue number is required by the transport
layer, but it can be determined only by the op mode.
Move this parameter to the dvm op mode, and configure
the transport layer using an API.
Signed-off-by: Meenakshi Venkataraman <meenakshi.venkataraman@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
The only reason we ever stop/wake queues at
the transport level is now that they become
full (or non-full), so the messages aren't
useful any more -- remove them.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
Continue splitting the status bits between transport and op_mode.
All but a few are separated.
Signed-off-by: Don Fry <donald.h.fry@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
The shared status bits are a mixture of transport and op mode bits.
Some are used just by one or the other, some are shared. Begin the
disentangling of these bits.
Signed-off-by: Don Fry <donald.h.fry@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
If the device is disabled by the rfkill switch, do not enable all
interrupts, but only CSR_INT_BIT_RF_KILL, so that rfkill state changes
are still received. Unblocking the other interrupts might cause
problems, since the driver may not be prepared to receive them.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
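The core of the change is a one-register write; the sketch below assumes the driver's own CSR definitions (iwl-csr.h) and I/O helper, whose exact signature has varied between driver versions.

    /* Fragment, not self-contained: CSR_INT_MASK, CSR_INT_BIT_RF_KILL and
     * iwl_write32() come from the iwlwifi headers.
     */
    static void enable_rfkill_int_sketch(struct iwl_trans *trans)
    {
        /* Unmask only the RF-kill state-change interrupt; everything else
         * stays blocked until the device is actually brought up.
         */
        iwl_write32(trans, CSR_INT_MASK, CSR_INT_BIT_RF_KILL);
    }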
| |
The PASSIVE_NO_RX workaround currently crosses
through the op_mode and transport layers, which
is a bit odd. This also isn't necessary: if the
transport simply reports when queues are full
(or no longer full), the op_mode can keep track
of this state and report to mac80211 only what
*it* thinks is appropriate. What is appropriate
can then be based on whether queues should be
stopped to wait for RX or not.
This significantly simplifies the transport API:
it no longer needs to expose anything to stop a
queue, nor to wake "any" queue; this can all be
handled in the upper layer completely.
Also simplify the handling to not be dependent
on the context; that makes little sense, as the
queues are shared and both contexts have to be
on the same channel anyway.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
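A simplified standalone sketch of the bookkeeping this enables in the op_mode (all names hypothetical): the transport only reports full / not-full per queue, and the op_mode combines that with its own PASSIVE_NO_RX state before telling mac80211 anything.

    #include <stdbool.h>
    #include <stdio.h>

    struct opmode_sketch {
        unsigned long transport_full;  /* bit i: transport says queue i is full */
        bool waiting_for_passive_rx;   /* op_mode-level PASSIVE_NO_RX state */
    };

    /* Only the op_mode decides whether mac80211 may transmit on a queue. */
    static bool queue_stopped_sketch(const struct opmode_sketch *om, int q)
    {
        return (om->transport_full & (1UL << q)) || om->waiting_for_passive_rx;
    }

    /* Called from the transport's "queue_full"/"queue_not_full" notifications. */
    static void transport_queue_full_sketch(struct opmode_sketch *om, int q, bool full)
    {
        if (full)
            om->transport_full |= 1UL << q;
        else
            om->transport_full &= ~(1UL << q);
        printf("queue %d -> %s mac80211 queue\n",
               q, queue_stopped_sketch(om, q) ? "stop" : "wake");
    }

    int main(void)
    {
        struct opmode_sketch om = { 0, true };

        transport_queue_full_sketch(&om, 3, true);   /* stop: transport full */
        transport_queue_full_sketch(&om, 3, false);  /* still stopped: PASSIVE_NO_RX */
        om.waiting_for_passive_rx = false;
        transport_queue_full_sketch(&om, 3, false);  /* now wake */
        return 0;
    }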
| |
Even if the variable might also be used by other
transports, there's no need for anything outside
of the transport itself to access it, so move it
into the private area.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
All variables related to uCode loading (the
waitqueue and done indication) should be in
the PCI-E transport's private data as this
is transport specific. Move them there.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
iwl_queue_inc_wrap/iwl_queue_dec_wrap aren't
shared functions, they are PCI-E specific,
so move them into the appropriate header.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
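A standalone sketch of what such wrap helpers typically look like when the ring size is a power of two; the real PCI-E header versions may differ in detail.

    /* Advance/rewind a ring index, wrapping at n_bd.  Assumes n_bd is a
     * power of two (e.g. 256 TFDs), so masking is enough.
     */
    static inline int queue_inc_wrap_sketch(int index, int n_bd)
    {
        return ++index & (n_bd - 1);
    }

    static inline int queue_dec_wrap_sketch(int index, int n_bd)
    {
        return --index & (n_bd - 1);
    }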
| |
struct iwl_rx_mem_buffer contains implementation details
(DMA address, list pointers) that the upper
layers don't need. Introduce iwl_rx_cmd_buffer
that is passed upstream and only contains the
needed data (the page). Additionally, access
this data only via accessor functions, allowing
us to change the implementation in the future.
These accessors are rxb_addr() (as before) and
rxb_steal_page() to take ownership of the data.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
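A hedged sketch of what such accessors can look like; only the names rxb_addr() and rxb_steal_page() come from the text above, and the field layout is an assumption.

    #include <linux/mm.h>

    struct iwl_rx_cmd_buffer_sketch {
        struct page *_page;  /* the only thing upper layers may see */
    };

    /* Address of the received data, as before. */
    static inline void *rxb_addr_sketch(struct iwl_rx_cmd_buffer_sketch *r)
    {
        return page_address(r->_page);
    }

    /* Take ownership of the page; the transport must not reuse or free it. */
    static inline struct page *rxb_steal_page_sketch(struct iwl_rx_cmd_buffer_sketch *r)
    {
        struct page *p = r->_page;

        r->_page = NULL;
        return p;
    }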
| |
Instead of (ab)using the sta_lock, make the
transport layer lock its own TX queue data
structures with a lock per queue. This also
unifies with the cmd queue lock.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
Export them as "queue_full" and "queue_not_full" notifications.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
Instead of using a global lock, the PCIe transport
can use its own lock for its IRQ. This makes it
possible to avoid disabling IRQs for the shared lock.
The lock is currently used throughout the code but
this can be improved even further by splitting up
the locking for the queues.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Reviewed-by: Wey-Yi W Guy <wey-yi.w.guy@intel.com>
| |
This handler will become thicker; reflect its real role now.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
From now on, the transport layer is in charge of providing access to
the device, so change the whole driver to give a pointer to the
transport to all the low-level functions that actually access the device.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
All the bus configuration is now done in the transport
allocation function.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
Define a new handler in the transport layer API: fw_alive.
Move iwl_reset_ict to this new handler, and move the content
of tx_start to this handler.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
Update Copyright to 2012
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
The ICT code erroneously uses PAGE_SIZE. The bug
is that PAGE_SIZE isn't necessarily 4096, so on
such platforms this code will not work correctly,
as we'll attempt to read an index in the table
that the device never wrote; the device always
uses 4096-byte pages.
Additionally, the manual alignment code here is
unnecessary -- Documentation/DMA-API-HOWTO.txt
states:
The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.
Just use appropriate new constants and get rid of
the alignment code.
Cc: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
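A hedged sketch of the shape of the fix (constant names illustrative): size the table by the device's fixed 4096-byte page rather than the host PAGE_SIZE, and rely on dma_alloc_coherent()'s alignment guarantee instead of over-allocating and aligning by hand.

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>
    #include <linux/types.h>

    /* The device always uses a 4096-byte ICT page, whatever PAGE_SIZE is. */
    #define ICT_SIZE_SKETCH   4096
    #define ICT_COUNT_SKETCH  (ICT_SIZE_SKETCH / sizeof(u32))

    static __le32 *alloc_ict_table_sketch(struct device *dev, dma_addr_t *dma)
    {
        /* Per Documentation/DMA-API-HOWTO.txt, the returned buffer is already
         * aligned to at least the smallest PAGE_SIZE order >= the requested
         * size, so no manual alignment of the 4 KiB table is needed.
         */
        return dma_alloc_coherent(dev, ICT_SIZE_SKETCH, dma, GFP_KERNEL);
    }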
| |
The tid_data is not related to the transport layer, so move
the logic that depends on it to the upper layer.
This patch deals with the mapping of RA / TID to HW queues in AGG.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
The tid_data is not related to the transport layer, so move
the logic that depends on it to the upper layer.
This patch deals with tx AGG setup.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
The tid_data is not related to the transport layer, so move
the logic that depends on it to the upper layer.
This patch deals with tx AGG alloc.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
The tid_data is not related to the transport layer, so move
the logic that depends on it to the upper layer.
This patch deals with tx AGG stop.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
| |
Some information was redundant, other information was missing.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
Users complain that the traffic gets stalled sometimes. This will
allow easier debugging.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
This has been removed but the declaration hasn't.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
| |
Before this patch, the upper layer could register a callback for each
host command. This mechanism allowed the upper layer to have
different callbacks for the same command ID. In fact, it wasn't used,
and the rx_handlers mechanism is enough: the same callback for all
commands with a specific command ID.
iwl_send_add_station needs to access the command that was sent
while handling the response (regardless of whether the command was
sent in SYNC or ASYNC mode). So now all the handlers receive the host
command that was sent. This implies a change in the handler signature.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
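Roughly, the shape of the resulting handler table; the parameter types below are assumptions for illustration, not the driver's actual signature.

    /* Every Rx handler now also receives the host command that triggered the
     * response, so e.g. the ADD_STA handler can inspect what was asked for.
     * Types and names here are illustrative.
     */
    struct device_cmd_sketch;    /* the command that was sent */
    struct rx_buffer_sketch;     /* the response received from the device */

    typedef int (*rx_handler_sketch)(void *op_mode,
                                     struct rx_buffer_sketch *rxb,
                                     struct device_cmd_sketch *cmd);

    /* One handler per command ID: the same callback for every instance of
     * that command, whether it was sent in SYNC or ASYNC mode.
     */
    #define REPLY_MAX_SKETCH 0xff
    static rx_handler_sketch rx_handlers_sketch[REPLY_MAX_SKETCH + 1];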
| |
Move all the PCI-E specific transport files to
be iwl-trans-pcie*; specifically iwl-trans.c,
which is really iwl-trans-pcie.c.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>