| author | Jakub Kicinski <jakub.kicinski@netronome.com> | 2017-10-23 21:58:10 +0300 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2017-10-24 11:38:37 +0300 |
| commit | 9a90c83c09874a2fd03905ef0f73512c9de18799 | |
| tree | daa0ea3071024b29de93719e3357994588d99578 | |
| parent | a82b23fb38eaaaad89332b90029fc4cd7c3f2545 | |
| download | linux-9a90c83c09874a2fd03905ef0f73512c9de18799.tar.xz | |
nfp: bpf: optimize the RMW for stack accesses
When we are performing unaligned stack accesses in the 32-64B window,
we have to do a read-modify-write cycle. E.g. for reading 8 bytes
from address 17:
```
0: tmp = stack[16]
1: gprLo = tmp >> 8
2: tmp = stack[20]
3: gprLo |= tmp << 24
4: tmp = stack[20]
5: gprHi = tmp >> 8
6: tmp = stack[24]
7: gprHi |= tmp << 24
```
The load on line 4 is unnecessary, because tmp already contains data
from stack[20].
For writes, both the redundant loads and the redundant writebacks can be
optimized away.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
