|
commit 8188a18ee2e48c9a7461139838048363bfce3fef upstream
We don't handle failures in the rb_allocator workqueue allocation
correctly. To fix that, move the code earlier so the cleanup is
easier and we don't have to undo all the interrupt allocations in
this case.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
[Ajay: Rewrote this patch for v4.9.y, as 4.9.y codebase is different from mainline]
Signed-off-by: Ajay Kaher <akaher@vmware.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
|
commit 87e7e25aee6b59fef740856f4e86d4b60496c9e1 upstream.
In order to remember how to unmap a buffer (as a single mapping or
as a page), we maintain a bit per Transmit Buffer (TB) in
the meta data (structure iwl_cmd_meta).
We maintain a bitmap: 1 bit per TB.
If the bit is set, we will free the memory as a page.
This bitmap was never cleared. Fix this.
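As a rough userspace sketch of the idea (the struct and names here are illustrative, not the driver's actual fields): the per-TB bits decide how each buffer is freed, and the fix amounts to resetting the bitmap once the TBs are released:

    #include <stdint.h>

    #define MAX_TBS 20                     /* illustrative TB count per TFD */

    struct cmd_meta_sketch {
            uint32_t tbs;                  /* one bit per TB: set => free as a page */
    };

    static void free_tbs_sketch(struct cmd_meta_sketch *meta, int num_tbs)
    {
            int tb;

            for (tb = 0; tb < num_tbs; tb++) {
                    if (meta->tbs & (1u << tb)) {
                            /* unmap and free this TB as a page */
                    } else {
                            /* unmap this TB as a single buffer */
                    }
            }
            meta->tbs = 0;                 /* the fix: clear the bitmap for reuse */
    }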
Cc: stable@vger.kernel.org
Fixes: 3cd1980b0cdf ("iwlwifi: pcie: introduce new tfd and tb formats")
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 3b57a10ca14c619707398dc58fe5ece18c95b20b upstream.
Sometimes the register status can include interrupts that
were masked. We can, for example, get the RF-Kill bit set
in the interrupt status register although this interrupt
was masked. Then if we get the ALIVE interrupt (for example)
that was not masked, we need to *not* service the RF-Kill
interrupt.
Fix this in the MSI-X interrupt handler.
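A simplified sketch of the idea (cause bits and names are illustrative): AND the raw status with the set of causes that were actually enabled, and only service what remains:

    #include <stdint.h>

    #define CAUSE_ALIVE    (1u << 0)       /* illustrative cause bits */
    #define CAUSE_RF_KILL  (1u << 7)

    static void msix_isr_sketch(uint32_t raw_status, uint32_t enabled_mask)
    {
            /* drop causes that are set in the register but were masked */
            uint32_t inta = raw_status & enabled_mask;

            if (inta & CAUSE_ALIVE) {
                    /* handle ALIVE */
            }
            if (inta & CAUSE_RF_KILL) {
                    /* handle RF-kill only if it was not masked */
            }
    }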
Cc: stable@vger.kernel.org
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 30f24eabab8cd801064c5c37589d803cb4341929 ]
If for some reason the device gives us an RX interrupt before we're
ready for it, perhaps during device power-on with a misconfigured
IRQ-causes mapping or the like, we can crash trying to access the queues.
Prevent that by checking that we actually have RXQs and that they
were properly allocated.
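A minimal sketch of the guard (field names are invented for the illustration): bail out of the RX handler if the queues were not allocated yet:

    #include <stddef.h>

    struct rxq_sketch {
            void *bd;                      /* descriptors; NULL until RX init ran */
    };

    struct trans_sketch {
            struct rxq_sketch *rxq;        /* NULL before RX setup */
            int num_rx_queues;
    };

    static void rx_irq_sketch(struct trans_sketch *trans, int queue)
    {
            if (!trans->rxq || queue >= trans->num_rx_queues ||
                !trans->rxq[queue].bd)
                    return;                /* spurious IRQ: RX is not ready yet */

            /* ... normal RX processing ... */
    }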
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
|
[ Upstream commit c6ac9f9fb98851f47b978a9476594fc3c477a34d ]
The allocator swaps the pending requests with 0 when it starts
working. This means that relying on that value in the RX path to
decide whether to move to emergency is not always a good idea, since
it may be zero while there are still a lot of unallocated RBs in the
system. Change the allocator to decrement the pending requests in
real time. It is more expensive since it accesses the atomic
variable more times, but it gives the RX path a better idea
of the system's status.
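A userspace model of the change, using C11 atomics (names are illustrative): instead of swapping the counter to zero up front, the allocator decrements it per completed request, so the RX path always reads a meaningful value:

    #include <stdatomic.h>

    static atomic_int req_pending;         /* allocation requests not yet served */

    static void allocator_work_sketch(void)
    {
            /* old scheme: pending = atomic_exchange(&req_pending, 0); */
            while (atomic_load(&req_pending) > 0) {
                    /* ... allocate one batch of RBs ... */
                    atomic_fetch_sub(&req_pending, 1);  /* decrement in real time */
            }
    }

    static int rx_path_pending_sketch(void)
    {
            /* the RX path can now trust this when deciding on emergency mode */
            return atomic_load(&req_pending);
    }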
Reported-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Fixes: 868a1e863f95 ("iwlwifi: pcie: avoid empty free RB queue")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
|
|
[ Upstream commit 868a1e863f95183f00809363fefba6d4f5bcd116 ]
If all free RB queues are empty, the driver will never restock the
free RB queue. That's because the restocking happens in the Rx flow,
and if the free queue is empty there will be no Rx.
Although there's a background worker (a.k.a. allocator) allocating
memory for RBs so that the Rx handler can restock them, the worker may
run only after the free queue has become empty (and then it is too
late for restocking as explained above).
There is a solution for that called 'emergency': if the number of used
RBs reaches half the amount of all RBs, the Rx handler will not wait
for the allocator but immediately allocate memory for the used RBs
and restock the free queue.
But since the used-RBs count is kept per queue, it may happen that the
used RBs are spread between the queues such that the emergency check
fails for each individual queue
(and we still run out of RBs, causing the above symptom).
To fix this, move to emergency mode if the sum of *all* used RBs (for
all Rx queues) reaches half the amount of all RBs.
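A simplified sketch of the new check (counts and names are illustrative): sum the used RBs over all RX queues and compare the total against half of all RBs, instead of looking at a single queue:

    #define RX_QUEUES   4                  /* illustrative */
    #define TOTAL_RBS   512                /* illustrative */

    struct rxq_used_sketch {
            int used_count;                /* RBs waiting to be re-allocated */
    };

    static int in_emergency_sketch(const struct rxq_used_sketch *rxqs)
    {
            int i, used = 0;

            for (i = 0; i < RX_QUEUES; i++)
                    used += rxqs[i].used_count;

            /* enter emergency when half of *all* RBs are used up */
            return used >= TOTAL_RBS / 2;
    }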
Signed-off-by: Shaul Triebitz <shaul.triebitz@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 0f22e40053bd5378ad1e3250e65c574fd61c0cd6 ]
Make sure the rx_allocator worker is canceled before running the
rx_init routine. rx_init frees and re-allocates all rxb's pages. The
rx_allocator worker also allocates pages for the used rxb's. Running
rx_init and rx_allocator simultaneously causes a kernel panic. Fix
that by canceling the work in rx_init.
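A kernel-style sketch of the fix, with a stand-in struct (the real driver's allocator structure differs; rx_alloc here just names the worker described above):

    #include <linux/workqueue.h>

    struct rb_allocator_sketch {
            struct work_struct rx_alloc;   /* the background allocator worker */
    };

    static void rx_init_sketch(struct rb_allocator_sketch *rba)
    {
            /* make sure the allocator worker is not touching rxb pages
             * while rx_init frees and re-allocates them */
            cancel_work_sync(&rba->rx_alloc);

            /* ... free all rxb pages and allocate fresh ones ... */
    }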
Signed-off-by: Shaul Triebitz <shaul.triebitz@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit ab1068d6866e28bf6427ceaea681a381e5870a4a ]
When there are 16 or more logical CPUs, we only request
`IWL_MAX_RX_HW_QUEUES` (16) IRQs, since we limit ourselves to that
number of IRQs, but later on we compare the number of IRQs returned
to nr_online_cpus+2 instead of max_irqs, the latter being what we
actually asked for. This ends up setting num_rx_queues to 17, which
causes lots of out-of-bounds array accesses later on.
Compare to max_irqs instead, and also add an assertion in case
num_rx_queues > IWL_MAX_RX_HW_QUEUES.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=199551
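A simplified sketch of the corrected bookkeeping (the split of vectors is illustrative; only the comparison targets follow the commit text):

    #define IWL_MAX_RX_HW_QUEUES   16

    /* nr_irqs is what the PCI core actually granted, max_irqs what we asked for */
    static int num_rx_queues_sketch(int nr_irqs, int max_irqs)
    {
            /* compare against max_irqs, not nr_online_cpus + 2 */
            int num_rx_queues = (nr_irqs >= max_irqs) ? max_irqs - 1
                                                      : nr_irqs - 1;

            if (num_rx_queues > IWL_MAX_RX_HW_QUEUES)  /* sanity bound */
                    num_rx_queues = IWL_MAX_RX_HW_QUEUES;

            return num_rx_queues;
    }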
Fixes: 2e5d4a8f61dc ("iwlwifi: pcie: Add new configuration to enable MSIX")
Signed-off-by: Hao Wei Tee <angelsl@in04.sg>
Tested-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit e4c49c4937951de1cdbe35572ade40c948dec1e1 ]
We only need to handle d0i3 entry and exit during suspend/resume if
system_pm is set to IWL_PLAT_PM_MODE_D0I3; otherwise, d0i3 entry
failures will cause suspend to fail.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=194791
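A minimal sketch of the condition (enum and function names are illustrative):

    enum plat_pm_mode_sketch {
            PLAT_PM_MODE_DISABLED,
            PLAT_PM_MODE_D0I3,
    };

    static int suspend_sketch(enum plat_pm_mode_sketch system_pm)
    {
            /* only enter d0i3 when the platform PM mode asks for it;
             * otherwise a d0i3 entry failure would fail the whole suspend */
            if (system_pm == PLAT_PM_MODE_D0I3) {
                    /* ... enter d0i3 ... */
            }
            return 0;
    }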
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 3f7a5e13e85026b6e460bbd6e87f87379421d272 upstream.
We have a new PCI subsystem ID for 7265D. Add it to the list.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 05e5a7e58d3f8f597ebe6f78aaa13a2656b78239 upstream.
Instead of setting the tx_cmd length in the mvm code, which is
complicated by the fact that DQA may want to temporarily store
the SKB on the side, adjust the length in the PCIe code, which
also knows about this since it's responsible for duplicating
all those headers that are accounted for in this code.
As the PCIe code already relies on the tx_cmd->len field, this
doesn't really introduce any new dependencies.
To make this possible we need to move the memcpy() of the TX
command so that it happens only after the command has been updated.
This even simplifies the code, since the PCIe code already does
a lot of manipulations to build A-MSDUs correctly, and changing
the length becomes a simple matter of seeing how much was
added/removed, rather than predicting it.
Fixes: 24afba7690e4 ("iwlwifi: mvm: support bss dynamic alloc/dealloc of queues")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 2c6262b754f3c3338cb40b23880a3ac1f4693b25 upstream.
Our 9000 device supports 64-bit DMA addresses for RX only, and
not for TX.
Setting the DMA mask to 64 bits for the whole device is erroneous -
we can do that only for a000 devices, where the device is capable
of both RX and TX DMA with a 64-bit address space.
Fixes: 96a6497bc3ed ("iwlwifi: pcie: add 9000 series multi queue rx DMA support")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 3ce4a03852d6dd3fd28c2fb2ee9f89bb9ccf9a9b upstream.
shift_param is defined and set in iwl_pcie_load_cpu_sections but not
used. Fix this to avoid a -Wunused-but-set-variable warning.
The code using it turned into dead code with commit dcab8ecd5617
("iwlwifi: mvm: support ucode load for family_8000 B0 only") which
added a separate function iwl_pcie_load_given_ucode_8000 (then 8000b)
for IWL_DEVICE_FAMILY_8000. Commit 76f8c0e17edc ("iwlwifi: pcie:
remove dead code") removed the dead code but left shift_param as is.
iwlwifi/pcie/trans.c: In function ‘iwl_pcie_load_cpu_sections’:
iwlwifi/pcie/trans.c:871:6: warning: variable ‘shift_param’ set but not used [-Wunused-but-set-variable]
Fixes: dcab8ecd5617 ("iwlwifi: mvm: support ucode load for family_8000 B0 only")
Fixes: 76f8c0e17edc ("iwlwifi: pcie: remove dead code")
Signed-off-by: Kirtika Ruchandani <kirtika@google.com>
Cc: Sara Sharon <sara.sharon@intel.com>
Cc: Luca Coelho <luciano.coelho@intel.com>
Cc: Liad Kaufman <liad.kaufman@intel.com>
Cc: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
[removed some unnecessary braces]
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 04fa3e680b4dd2fdd11d0152fb9b6067e7aac140 upstream.
David reported that the code I added uses the decrement
and increment operators on a boolean variable.
Fix that.
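Purely as an illustration of the issue (this is not the driver's actual field or fix):

    #include <stdbool.h>

    static bool queues_blocked;            /* illustrative flag */

    static void block_sketch(bool block)
    {
            /* before: queues_blocked++ / queues_blocked-- on a bool,
             * which only ever toggles between 0 and 1 and reads oddly */
            queues_blocked = block;        /* after: plain assignment */
    }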
Fixes: 0cd58eaab148 ("iwlwifi: pcie: allow the op_mode to block the tx queues")
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The SPLC data parsing is too restrictive and was not trying to find the
correct element for WiFi. This causes problems with some BIOSes where
the SPLC method exists, but doesn't have a WiFi entry on the first
element of the list. The domain type values are also incorrect
according to the specification.
Fix this by complying with the actual specification.
Additionally, replace all occurrences of SPLX with SPLC, since SPLX is
only a structure internal to the ACPI tables, and may not even exist.
Fixes: bcb079a14d75 ("iwlwifi: pcie: retrieve and parse ACPI power limitations")
Reported-by: Chris Rorvick <chris@rorvick.com>
Tested-by: Paul Bolle <pebolle@tiscali.nl>
Tested-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Emmanuel reports that when CMD_WANT_ASYNC_CALLBACK is used by mvm,
the callback will be called with the command queue lock held, and
mvm will try to stop all (other) TX queues, which acquires their
locks - this caused a false lockdep recursive locking report.
Suppress this report by marking the command queue lock with a new,
separate, lock class so lockdep can tell the difference between
the two types of queues.
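A kernel-style sketch of the technique (lock and key names are illustrative): give the command queue's lock its own lockdep class so nesting it with a data queue's lock is not reported as recursion:

    #include <linux/spinlock.h>

    /* a separate lockdep class for the command queue's lock */
    static struct lock_class_key cmd_queue_lock_key;

    static void init_cmd_queue_lock_sketch(spinlock_t *lock)
    {
            spin_lock_init(lock);
            /* distinguish it from the regular TX queue locks */
            lockdep_set_class(lock, &cmd_queue_lock_key);
    }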
Fixes: 156f92f2b471 ("iwlwifi: block the queues when we send ADD_STA for uAPSD")
Reported-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
The various TFD/TB helpers have two code paths depending on the
type of TFD supported, with variable shadowing due to the new if
branches. Move the fall-through code into else branches to avoid
variable shadowing. While doing so, rename some of the variables
and do some other cleanups (like removing void * casts of void *
pointers.)
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add two new PCI IDs for the 9560 series.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
There's no need to declare a list and then init it manually,
just use the LIST_HEAD() macro.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add a new PCI ID for the 8265 series.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Change PCIE and trans resource allocations to managed resources.
Signed-off-by: Sharon Dvir <sharon.dvir@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Due to firmware design considerations, move to wide ID for
all commands.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Log group as well. Remove 0x prefix to match TX logging.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Fixes: 3cd1980b0cdf ("iwlwifi: pcie: introduce new tfd and tb formats")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
In MSI-X mode the number of IRQs depends on the number of
possible CPUs on the host.
This causes a bug in case there are offline cores.
Take into account only the online CPUs instead,
and save the value in a temporary variable.
Fixes: 2e5d4a8f61dc ("iwlwifi: pcie: Add new configuration to enable MSIX")
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
The function is deeply indented. Jump to the MSI section when needed
to avoid this and thereby make the code more readable.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add a new config struct for the new 8275 series and add
the first PCI ID for it.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add a new config struct for the new 9560 series and add
the 4 new PCI IDs for it.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
In order to utilize the host's CPUs in the most efficient way,
we bind each RX interrupt vector to its own CPU on the host.
Each RX interrupt is prioritized to execute only on the designated
CPU rather than on any CPU.
Processor affinity takes advantage of the fact that some remnants of
a process that was run on a given processor may remain in that
processor's state (for example, data in the CPU cache) after another
process has run on that CPU. Scheduling that process to execute on
the same processor can make execution more efficient by reducing
performance-degrading situations such as cache misses.
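A kernel-style sketch of the binding (vector bookkeeping is illustrative; the real code also accounts for the shared/non-RX vectors):

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>

    static void bind_rx_irqs_sketch(const unsigned int *irq, int num_rx_vectors)
    {
            int i, cpu = cpumask_first(cpu_online_mask);

            for (i = 0; i < num_rx_vectors; i++) {
                    /* hint the kernel to run this RX vector on one CPU only */
                    irq_set_affinity_hint(irq[i], cpumask_of(cpu));
                    cpu = cpumask_next(cpu, cpu_online_mask);
                    if (cpu >= nr_cpu_ids)
                            cpu = cpumask_first(cpu_online_mask);
            }
    }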
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
In case the OS provides fewer interrupts than requested, different
causes will share the same interrupt vector as follows:
1. One interrupt less: non-RX causes shared with the FBQ.
2. Two interrupts less: non-RX causes shared with the FBQ and RSS.
3. More than two interrupts less: we will use fewer RSS queues.
Also make the request depend on the number of online CPUs
instead of possible CPUs.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
The original intent was to have the general iwl_queue shared
between RX and TX queues, but that is not the actual situation.
Since it is not shared with any struct but iwl_txq, it adds
unnecessary complexity. Merge those structs.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Since the TFD was enlarged to 256 bytes, fetching the TFD
itself is very expensive.
To make the DRAM-to-SRAM transfer more efficient, bits 12-13
will indicate the number of 64-byte chunks that should be
transferred to SRAM.
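A small sketch of the packing described above (assuming, as stated, that the chunk count sits in bits 12-13; the exact field layout is illustrative):

    #include <stdint.h>

    static uint16_t bc_entry_sketch(uint16_t byte_cnt, uint16_t tfd_used_len)
    {
            /* number of 64-byte chunks the DMA engine should fetch */
            uint16_t chunks = (tfd_used_len + 63) / 64;

            return (uint16_t)((byte_cnt & 0xfff) | ((chunks & 0x3) << 12));
    }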
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Previous patch introduced the new formats. This patch
allocates the new structures and adjusts code accordingly.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
In future HW the byte count table address will be configured
by the ucode per queue. Add an API to expose the byte count
table to the opmode.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
New hardware supports bigger TFDs and TBs.
Introduce the new formats and adjust the defines and code
relying on the old format.
Changing the actual TFD allocation is trickier and
deferred to the next patch.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
If the device family is 8000, iwl_pcie_load_cpu_sections()
won't be called at all (iwl_pcie_load_cpu_sections_8000() is
called in that case), so this piece of code never runs.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Turns out we should access TFH relative addresses.
Also, the FH_UCODE_LOAD_STATUS was replaced by
UREG_UCODE_LOAD_STATUS.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Up till now we accessed the SCD configuration only for the initial
configuration and for enabling the command queue.
For the a000 generation the command queue is open by default
and the firmware configures the rest. No driver SCD accesses
are expected. Make sure this is the case.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add a new config struct for the new 9170 series and add
the first PCI ID for it.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add a new config struct for the new 9270 series and add
the first PCI ID for it.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add 4 more new 9460 series PCI IDs.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Add a new series, called 9460, to the 9000 family.
In addition, add a new PCI ID for the new 9460 series.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Rename and reorder the 9000 series configuration structs:
- the struct containing the 5165 configuration was renamed to 9000.
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
All transports have this structure. By moving it to the
shared area, we can get rid of the cast to the specific
transport in probe and remove.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Centralize the logging of SCD status. The motivation is
that for a000 devices we will have new SCD HW, but this
code was duplicated anyway, so it is a proper cleanup.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Update the firmware load flow for TFH hardware.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
For a000 devices the FH was replaced by the TFH.
This is the first patch in a series introducing the
changes stemming from this change.
This patch initializes the TFQ queue table with the new
64 bit register and the relevant TFH configuration
registers.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Move the write_prph_64 of pcie to be transport agnostic.
Add direct write as well, as it is needed for a000 HW.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
Currently the scratch buffer is set to 16 bytes and indicates
the size of the bi-directional DMA.
However, next HW generation will perform additional offloading,
and will write the result in the key location of the TX command,
so the size of the bi-directional consistent memory should grow
accordingly - increase it to 40.
Generalize the code to get rid of the now-irrelevant scratch references.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|
|
In an MQ environment, and with the new architecture in its early
stages, we may encounter DMA issues. Track RXB status and bail
out in case we receive an index to an RXB that was not
mapped and handed over to the HW.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
|