This bug was introduced with I9493f28c30356a10eccb320e0a2d1a141388af9a.
Change-Id: Ib4e9866eb56f4370ca8d33761cf0cd4f795db5be
Signed-off-by: M1cha <sigmaepsilon92@gmail.com>
If we don't do that, then the poison value is left in the ->pprev
backlink.
This can cause crashes if we do a disconnect, followed by a connect().
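For illustration, a minimal sketch of the idea (assuming the entry is an
hlist node, as in the inet hash tables): hlist_del() leaves the poison
value behind, while hlist_del_init() leaves the node safe to relink.

#include <linux/list.h>

/* Illustrative only: after hlist_del(), ->pprev holds LIST_POISON2, so
 * re-hashing the node on a later connect() dereferences poison. Using
 * hlist_del_init() instead resets ->pprev to NULL, leaving the node
 * safe to link again.
 */
static void unhash_node_safely(struct hlist_node *node)
{
	hlist_del_init(node);
}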
Change-Id: I021261c6639339a0d027ee5cb086ba7301db334a
Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Wen Xu <hotdog3645@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A local route may have a lower hop_limit set than global routes do.
RFC 3756, Section 4.2.7, "Parameter Spoofing"
> 1. The attacker includes a Current Hop Limit of one or another small
> number which the attacker knows will cause legitimate packets to
> be dropped before they reach their destination.
> As an example, one possible approach to mitigate this threat is to
> ignore very small hop limits. The nodes could implement a
> configurable minimum hop limit, and ignore attempts to set it below
> said limit.
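Roughly, the fix boils down to never letting an advertisement lower the
interface's current hop limit; a simplified sketch (modeled on the RA
handling in net/ipv6/ndisc.c, warning text illustrative):

if (ra_msg->icmph.icmp6_hop_limit) {
	/* only raise, never lower, the interface hop limit */
	if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit)
		in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;
	else
		ND_PRINTK(2, warn,
			  "RA: got advertisement with lower hop_limit than current\n");
}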
Change-Id: Ie70d7d35ea94738209b525872e7e9022676e05b3
Signed-off-by: D.S. Ljungmark <ljungmark@modio.se>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 8740944a02)
The use_optimistic sysctl makes optimistic IPv6 addresses
equivalent to preferred addresses for source address selection
(e.g., when calling connect()), but it does not allow an
application to bind to optimistic addresses. This behaviour is
inconsistent - for example, it doesn't make sense for a bind() to
an optimistic address to fail with EADDRNOTAVAIL while connect()
chooses that same address as the outgoing address on the same socket.
Bug: 17769720
Bug: 18609055
Change-Id: I9de0d6c92ac45e29d28e318ac626c71806666f13
Signed-off-by: Erik Kline <ek@google.com>
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
Add a sysctl that causes an interface's optimistic addresses
to be considered equivalent to other non-deprecated addresses
for source address selection purposes. Preferred addresses
will still take precedence over optimistic addresses, subject
to other ranking in the source address selection algorithm.
This is useful where different interfaces are connected to
different networks from different ISPs (e.g., a cell network
and a home wifi network).
The current behaviour complies with RFC 3484/6724, and it
makes sense if the host has only one interface, or has
multiple interfaces on the same network (same or cooperating
administrative domain(s)), but not in the case of multiple
distinct networks.
For example, if a mobile device has an IPv6 address on an LTE
network and then connects to IPv6-enabled wifi, while the wifi
IPv6 address is undergoing DAD, IPv6 connections will try to use
the wifi default route with the LTE IPv6 address, and will get
stuck until they time out.
Also, because optimistic nodes can receive frames, issue
an RTM_NEWADDR as soon as DAD starts (with the IFA_F_OPTIMISTIC
flag appropriately set). A second RTM_NEWADDR is sent if DAD
completes (the address flags have changed), otherwise an
RTM_DELADDR is sent.
Also: add an entry in ip-sysctl.txt for optimistic_dad.
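Roughly, the knob bites in the RFC 6724 rule-3 scoring ("avoid
deprecated addresses"); a simplified sketch of the idea (helper name
hypothetical, the real logic lives in net/ipv6/addrconf.c):

static bool rule3_addr_usable(const struct inet6_ifaddr *ifa)
{
	u32 flags = ifa->flags;

	/* with use_optimistic set, score an optimistic address as if
	 * it were already preferred
	 */
	if ((flags & IFA_F_OPTIMISTIC) && ifa->idev->cnf.use_optimistic)
		flags &= ~IFA_F_OPTIMISTIC;

	return !(flags & (IFA_F_DEPRECATED | IFA_F_OPTIMISTIC));
}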
[backport of net-next 7fd2561e4ebdd070ebba6d3326c4c5b13942323f]
Signed-off-by: Erik Kline <ek@google.com>
Acked-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bug: 17769720
Change-Id: I440a9b8c788db6767d191bbebfd2dff481aa9e0d
Sasha Levin found a NULL pointer dereference that is due to a missing
page table lock, which in turn is due to the pmd entry in question being
a transparent huge-page entry.
The code - introduced in commit 1998cc048901 ("mm: make
madvise(MADV_WILLNEED) support swap file prefetch") - correctly checks
for this situation using pmd_none_or_trans_huge_or_clear_bad(), but it
turns out that that function doesn't work correctly.
pmd_none_or_trans_huge_or_clear_bad() expected that pmd_bad() would
trigger if the transparent hugepage bit was set, but it doesn't do that
if pmd_numa() is also set. Note that the NUMA bit only gets set on real
NUMA machines, so people trying to reproduce this on most normal
development systems would never actually trigger this.
Fix it by removing the very subtle (and subtly incorrect) expectation,
and instead just checking pmd_trans_huge() explicitly.
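For reference, a simplified sketch of the fixed check (modeled on
include/asm-generic/pgtable.h; the barrier comments are omitted):

static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
	pmd_t pmdval = pmd_read_atomic(pmd);

	/* the explicit check, instead of expecting pmd_bad() to fire */
	if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
		return 1;
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}
	return 0;
}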
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
[ Additionally remove the now stale test for pmd_trans_huge() inside the
pmd_bad() case - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: I3f3763f236ef102de735297cd175cf514d40d28f
We used to read file_handle twice: once to get the amount of extra bytes, and
once to fetch the entire structure.
This may be problematic since we do size verifications only after the first
read, so if the number of extra bytes changes in userspace between the first
and second calls, we'll have an incoherent view of file_handle.
Instead, read the constant-size part once, and copy that over to the final
structure without re-reading it.
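The resulting pattern looks roughly like this (simplified from
fs/fhandle.c, error paths trimmed; ufh is the user-supplied pointer):

struct file_handle f_handle;
struct file_handle *handle;

if (copy_from_user(&f_handle, ufh, sizeof(f_handle)))
	return -EFAULT;
if (f_handle.handle_bytes > MAX_HANDLE_SZ)
	return -EINVAL;

handle = kmalloc(sizeof(*handle) + f_handle.handle_bytes, GFP_KERNEL);
if (!handle)
	return -ENOMEM;

*handle = f_handle;	/* reuse the already-validated header */
if (copy_from_user(&handle->f_handle, &ufh->f_handle,
		   f_handle.handle_bytes)) {
	kfree(handle);
	return -EFAULT;
}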
Change-Id: Ib05e5129629e27d5a05953098c5bc470fae40d2a
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Add a userspace visible knob to tell the VM to keep an extra amount
of memory free, by increasing the gap between each zone's min and
low watermarks.
This is useful for realtime applications that call system
calls and have a bound on the number of allocations that happen
in any short time period. In this application, extra_free_kbytes
would be left at an amount equal to or larger than the maximum
amount of memory allocated by such a burst.
It may also be useful to reduce the memory use of virtual
machines (temporarily?), in a way that does not cause memory
fragmentation like ballooning does.
[ccross]
Revived for use on old kernels where no other solution exists.
The tunable will be removed on kernels that do better at avoiding
direct reclaim.
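Mechanically, the tunable just widens the gap when the per-zone
watermarks are recomputed; a simplified sketch (per-zone scaling and
highmem special cases omitted; zone_share() is a hypothetical helper
for a zone's proportional share):

pages_min = min_free_kbytes   >> (PAGE_SHIFT - 10);
pages_low = extra_free_kbytes >> (PAGE_SHIFT - 10);

for_each_zone(zone) {
	unsigned long min = zone_share(zone, pages_min);
	unsigned long low = zone_share(zone, pages_low);

	zone->watermark[WMARK_MIN]  = min;
	zone->watermark[WMARK_LOW]  = min + low + (min >> 2);
	zone->watermark[WMARK_HIGH] = min + low + (min >> 1);
}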
Change-Id: I765a42be8e964bfd3e2886d1ca85a29d60c3bb3e
Signed-off-by: Rik van Riel<riel@redhat.com>
Signed-off-by: Colin Cross <ccross@android.com>
Commit 722b3c7469 modified x86 ftrace to
avoid tracing all functions called from irqs when function graph was
used with a filter. Port the same fix to ARM.
Change-Id: I3facbb85447effe54ff92db206068087c892fc28
Signed-off-by: Colin Cross <ccross@android.com>
Creation of the procfs node cpu/vfp_bounce fails because we're initialized
too early. Fix this by creating it from a rootfs_initcall, as was the case
before the NEON patches.
<6>[ 0.130770] VFP support v0.3: implementor 51 architecture 64 part 6f variant 2 rev 0
<4>[ 0.130795] ------------[ cut here ]------------
<4>[ 0.130813] WARNING: at fs/proc/generic.c:323 __xlate_proc_name+0xac/0xcc()
<4>[ 0.130822] name 'cpu/vfp_bounce'
<4>[ 0.130855] [<c010e26c>] (unwind_backtrace+0x0/0x144) from [<c0a20f58>] (dump_stack+0x20/0x24)
<4>[ 0.130879] [<c0a20f58>] (dump_stack+0x20/0x24) from [<c019b670>] (warn_slowpath_common+0x58/0x70)
<4>[ 0.130899] [<c019b670>] (warn_slowpath_common+0x58/0x70) from [<c019b704>] (warn_slowpath_fmt+0x40/0x48)
<4>[ 0.130919] [<c019b704>] (warn_slowpath_fmt+0x40/0x48) from [<c02c2ad8>] (__xlate_proc_name+0xac/0xcc)
<4>[ 0.130938] [<c02c2ad8>] (__xlate_proc_name+0xac/0xcc) from [<c02c2b50>] (__proc_create+0x58/0x100)
<4>[ 0.130956] [<c02c2b50>] (__proc_create+0x58/0x100) from [<c02c2ed0>] (proc_create_data+0x5c/0xc0)
<4>[ 0.130979] [<c02c2ed0>] (proc_create_data+0x5c/0xc0) from [<c0f03484>] (vfp_init+0x19c/0x200)
<4>[ 0.131000] [<c0f03484>] (vfp_init+0x19c/0x200) from [<c0f00c98>] (do_one_initcall+0x98/0x168)
<4>[ 0.131020] [<c0f00c98>] (do_one_initcall+0x98/0x168) from [<c0f00e60>] (kernel_init+0xf8/0x1b4)
<4>[ 0.131043] [<c0f00e60>] (kernel_init+0xf8/0x1b4) from [<c01081a0>] (kernel_thread_exit+0x0/0x8)
<4>[ 0.131076] ---[ end trace ea6d9a9b5e947151 ]---
<3>[ 0.131086] Failed to create procfs node for VFP bounce reporting
Change-Id: Ic9bbe46ee026df0af76df70fd95275f284c8dd63
Signed-off-by: Paul Reioux <reioux@gmail.com>
In order to use the NEON unit in the kernel, we should
initialize it a bit earlier in the boot process so NEON users
that like to do a quick benchmark at load time (like the
xor_blocks or RAID-6 code) find the NEON/VFP unit already
enabled.
Replaced late_initcall() with core_initcall().
Change-Id: I89b7a5a79dddf6083766a87f9d082c7ee61a54e0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
imx_v6_v7_defconfig handles both multi-core and single-core SoCs, and it has CONFIG_SMP=y selected by default.
With such a config we cannot select ARM_ERRATA_364296, as it depends on !SMP.
Make ARM_ERRATA_364296 independent of CONFIG_SMP, so that we can select this erratum for the ARM1136 SoCs even if CONFIG_SMP=y is enabled.
Change-Id: Ia96ea3d8e9468884962793cc4a6b19f62e1adc59
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Macro __INIT is used to place various code in head-common.S into the init
section. This should be matched by a closing __FINIT. Also, add an
explicit ".text" to ensure subsequent code is placed into the correct
section; __FINIT is simply a closing marker to match __INIT and doesn't
guarantee to revert to .text.
This historically caused no problem, because macro __CPUINIT was used at
the exact location where __FINIT was missing, which then placed following
code into the cpuinit section. However, with commit 22f0a2736 "init.h:
remove __cpuinit sections from the kernel" applied, __CPUINIT becomes a
no-op, thus leaving all this code in the init section, rather than the
regular text section. This caused issues such as secondary CPU boot
failures or crashes.
Change-Id: I2c346b58c4c4d50a162b19e9967d44ec5e97c7d4
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add a source file xor-neon.c (which is really just the reference
C implementation passed through the GCC vectorizer) and hook it
up to the XOR framework.
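The hookup itself is a standard xor_block_template (sketch; the wrapper
names are hypothetical, and the wrappers must bracket the vectorized
routines with kernel_neon_begin()/kernel_neon_end()):

static struct xor_block_template xor_block_neon = {
	.name = "neon",
	.do_2 = xor_neon_2,
	.do_3 = xor_neon_3,
	.do_4 = xor_neon_4,
	.do_5 = xor_neon_5,
};

/* offered to calibrate_xor_blocks() from XOR_TRY_TEMPLATES,
 * guarded by cpu_has_neon()
 */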
Change-Id: If251e5a16d628ddfdd8d6ce40b78014cd2962d7e
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
In order to safely support the use of NEON instructions in
kernel mode, some precautions need to be taken:
- the userland context that may be present in the registers (even
if the NEON/VFP is currently disabled) must be stored under the
correct task (which may not be 'current' in the UP case),
- to avoid having to keep track of additional vfpstates for the
kernel side, disallow the use of NEON in interrupt context
and run with preemption disabled,
- after use, re-enable preemption and re-enable the lazy restore
machinery by disabling the NEON/VFP unit.
This patch adds the functions kernel_neon_begin() and
kernel_neon_end() which take care of the above. It also adds
the Kconfig symbol KERNEL_MODE_NEON to enable it.
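Typical usage looks like this (the inner worker is hypothetical); note
that the section must not be entered from interrupt context and runs
with preemption disabled:

#include <asm/neon.h>

static void do_neon_work(void *dst, const void *src, size_t len)
{
	kernel_neon_begin();	/* save userland NEON/VFP context */
	neon_memop(dst, src, len);	/* hypothetical NEON routine */
	kernel_neon_end();	/* disable NEON/VFP, re-enable preemption */
}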
Change-Id: Iddde22ad8684e92e5f0aa4999962ad743eff8f7c
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash
crypto modules that register multiple algorithms.
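The simplification this enables (the algorithm array is hypothetical):
one call replaces a register/unwind loop over the array.

static struct shash_alg my_algs[2];	/* filled in elsewhere */

static int __init my_module_init(void)
{
	return crypto_register_shashes(my_algs, ARRAY_SIZE(my_algs));
}

static void __exit my_module_exit(void)
{
	crypto_unregister_shashes(my_algs, ARRAY_SIZE(my_algs));
}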
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Other SHA512 routines may need to use the generic routine when
FPU is not available.
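The intended fallback pattern in an accelerated driver looks roughly
like this (sketch modeled on the x86 SSSE3 glue):

static int sha512_accel_update(struct shash_desc *desc, const u8 *data,
			       unsigned int len)
{
	if (!irq_fpu_usable())
		return crypto_sha512_update(desc, data, len); /* generic */

	kernel_fpu_begin();
	/* ... SIMD implementation of the update ... */
	kernel_fpu_end();
	return 0;
}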
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Other SHA256 routines may need to use the generic routine when
FPU is not available.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The support code in vfp_support_entry does not care whether the
exception that caused it to be invoked occurred in kernel mode or
in user mode. However, neither condition that could trigger this
exception (lazy restore and VFP bounce to support code) is
currently allowable in kernel mode.
In either case, print a message describing the condition before
letting the undefined instruction handler run its course and trigger
an oops.
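A simplified sketch of the diagnostic (message text illustrative):

void vfp_kmode_exception(void)
{
	if (fmrx(FPEXC) & FPEXC_EN)
		pr_crit("BUG: unsupported FP instruction in kernel mode\n");
	else
		pr_crit("BUG: FP instruction in kernel mode with FP unit disabled\n");
	/* the undefined instruction handler then triggers the oops */
}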
Change-Id: I1a9508f3adfb0264cac1984d6080d117bfcb39f6
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Common SHA-1 structures are defined in <crypto/sha.h> for code sharing.
This patch changes SHA-1/ARM glue code to use these structures.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Fix the same alignment bug as in arm64 - we need to pass the number of
residual unprocessed bytes as the last argument to blkcipher_walk_done.
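The fixed call pattern, as a sketch (excerpt from inside a crypt
routine): report the unprocessed tail back to the walk instead of
claiming everything was consumed.

while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
	/* ... process 'blocks' full blocks ... */
	err = blkcipher_walk_done(desc, &walk,
				  walk.nbytes % AES_BLOCK_SIZE);
}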
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org # 3.13+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Building a multi-arch kernel results in:
arch/arm/crypto/built-in.o: In function `aesbs_xts_decrypt':
sha1_glue.c:(.text+0x15c8): undefined reference to `bsaes_xts_decrypt'
arch/arm/crypto/built-in.o: In function `aesbs_xts_encrypt':
sha1_glue.c:(.text+0x1664): undefined reference to `bsaes_xts_encrypt'
arch/arm/crypto/built-in.o: In function `aesbs_ctr_encrypt':
sha1_glue.c:(.text+0x184c): undefined reference to `bsaes_ctr32_encrypt_blocks'
arch/arm/crypto/built-in.o: In function `aesbs_cbc_decrypt':
sha1_glue.c:(.text+0x19b4): undefined reference to `bsaes_cbc_encrypt'
This code is already runtime-conditional on NEON being supported, so
there's no point compiling it out depending on the minimum build
architecture.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Bit sliced AES gives around 45% speedup on Cortex-A15 for encryption
and around 25% for decryption. This implementation of the AES algorithm
does not rely on any lookup tables so it is believed to be invulnerable
to cache timing attacks.
This algorithm processes up to 8 blocks in parallel in constant time. This
means that it is not usable by chaining modes that are strictly sequential
in nature, such as CBC encryption. CBC decryption, however, can benefit from
this implementation and runs about 25% faster. The other chaining modes
implemented in this module, XTS and CTR, can execute fully in parallel in
both directions.
The core code has been adopted from the OpenSSL project (in collaboration
with the original author, on cc). For ease of maintenance, this version is
identical to the upstream OpenSSL code, i.e., all modifications that were
required to make it suitable for inclusion into the kernel have been made
upstream. The original can be found here:
http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130
Note to integrators:
While this implementation is significantly faster than the existing table
based ones (generic or ARM asm), especially in CTR mode, the effects on
power efficiency are unclear as of yet. This code does fundamentally more
work, by calculating values that the table based code obtains by a simple
lookup; only by doing all of that work in a SIMD fashion does it
manage to perform better.
Cc: Andy Polyakov <appro@openssl.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Conflicts:
arch/arm/crypto/Makefile
Change-Id: Iefcf746f1caefd7a5874d74ec78ebb781311e26d
Put the struct definitions for AES keys and the asm function prototypes in a
separate header and export the asm functions from the module.
This allows other drivers to use them directly.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Conflicts:
crypto/Kconfig
crypto/Makefile
Change-Id: I01a0d1192383c9a0287590a2cef1424e4675c531
When HAS_MACH_MEMUTILS is defined, two versions of memcpy.S would
be built at the same time; fix the issue here.
Signed-off-by: Hong-Mei Li <a21834@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
Preload farther to take advantage of the memory bus, and assume
64-byte cache lines. Unroll some pairs of ldm/stm as well, for
unexplainable reasons.
Future enhancements should include:
- #define for how far to preload, possibly defined separately for
memcpy, copy_*_user
- Tuning for misaligned buffers
- Tuning for memmove
- Tuning for small buffers
- Understanding mechanism behind ldm/stm unroll causing some gains
in copy_to_user
BASELINE (msm8960pro):
======================================================================
memcpy 1000MB at 5MB : took 808850 usec, bandwidth 1236.236 MB/s
copy_to_user 1000MB at 5MB : took 810071 usec, bandwidth 1234.234 MB/s
copy_from_user 1000MB at 5M: took 942926 usec, bandwidth 1060.060 MB/s
memmove 1000GB at 5MB : took 848588 usec, bandwidth 1178.178 MB/s
copy_to_user 1000GB at 4kB : took 847916 usec, bandwidth 1179.179 MB/s
copy_from_user 1000GB at 4k: took 935113 usec, bandwidth 1069.069 MB/s
copy_page 1000GB at 4kB : took 779459 usec, bandwidth 1282.282 MB/s
THIS PATCH:
======================================================================
memcpy 1000MB at 5MB : took 346223 usec, bandwidth 2888.888 MB/s
copy_to_user 1000MB at 5MB : took 348084 usec, bandwidth 2872.872 MB/s
copy_from_user 1000MB at 5M: took 348176 usec, bandwidth 2872.872 MB/s
memmove 1000GB at 5MB : took 348267 usec, bandwidth 2871.871 MB/s
copy_to_user 1000GB at 4kB : took 377018 usec, bandwidth 2652.652 MB/s
copy_from_user 1000GB at 4k: took 371829 usec, bandwidth 2689.689 MB/s
copy_page 1000GB at 4kB : took 383763 usec, bandwidth 2605.605 MB/s
Signed-off-by: Chris Fries <C.Fries@motorola.com>
Reviewed-by: Christopher Fries <cfries@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
Conflicts:
arch/arm/lib/Makefile
arch/arm/mach-msm/Kconfig
Change-Id: I8400f3f9f4b27920d56761923b836cd51793f741
The KSM thread that scans pages is scheduled on a fixed timeout.
That wakes the CPU up from idle state and hence may affect power
consumption. Provide optional support for a deferred timer, which
suits low-power use-cases.
To enable deferred timers,
$ echo 1 > /sys/kernel/mm/ksm/deferred_timer
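One way to implement this (a sketch under assumed names; the actual
patch may differ) is to arm the scan wakeup with a deferrable timer,
which fires only once the CPU is awake for some other reason:

static struct timer_list ksm_scan_timer;

static void ksm_arm_scan_timer(unsigned long delay, bool deferred)
{
	if (deferred)
		init_timer_deferrable(&ksm_scan_timer); /* idle CPUs stay idle */
	else
		init_timer(&ksm_scan_timer);
	ksm_scan_timer.function = ksm_scan_timeout_fn; /* hypothetical callback */
	ksm_scan_timer.expires = jiffies + delay;
	add_timer(&ksm_scan_timer);
}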
Change-Id: I07fe199f97fe1f72f9a9e1b0b757a3ac533719e8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Until today all gamepad input drivers report their data differently. It is
nearly impossible to write applications for more than one device in a
generic way. Therefore, this patch introduces a uniform gamepad API which
will be used for all new drivers.
Instead of mapping buttons by their labels, we now map them by position.
This allows applications to work with any gamepad regardless of the labels
on the buttons. Furthermore, we standardize the ABS_* codes for analog
triggers and sticks.
For D-Pads the long overdue BTN_DPAD_* codes are introduced; their use
should be fairly obvious. To avoid confusion, the action buttons now
have BTN_EAST/SOUTH/WEST/NORTH aliases.
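From a driver's perspective the uniform mapping looks like this (values
illustrative):

input_set_capability(input, EV_KEY, BTN_SOUTH);	/* lower action button */
input_set_capability(input, EV_KEY, BTN_EAST);
input_set_capability(input, EV_KEY, BTN_NORTH);
input_set_capability(input, EV_KEY, BTN_WEST);
input_set_capability(input, EV_KEY, BTN_DPAD_UP);	/* and DOWN/LEFT/RIGHT */
input_set_abs_params(input, ABS_X, -32768, 32767, 16, 128); /* left stick X */
input_set_abs_params(input, ABS_Z, 0, 255, 0, 0);           /* left trigger */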
Reported-by: Todd Showalter <todd@electronjump.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Conflicts:
include/uapi/linux/input.h
Change-Id: If65177b7e1abb6868a9042ad1d31c47cdf0ed882
Conflicts:
include/linux/input.h
Instead of passing each byte through stack let's use %*phC specifier to dump
buffer as a hex string.
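Sketch of the before/after (buffer and length illustrative):

/* before: one stack argument per byte */
hid_dbg(hdev, "%02hhx %02hhx %02hhx %02hhx",
	buf[0], buf[1], buf[2], buf[3]);

/* after: one specifier dumps the whole small buffer, e.g. aa:bb:cc:dd */
hid_dbg(hdev, "%*phC", 4, buf);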
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
The analog sticks of the pro-controller might report slightly off values.
To guarantee a uniform setup, we now calibrate analog-stick values during
pro-controller setup.
Unfortunately, the pro-controller fails during normal EEPROM reads and I
couldn't figure out whether there are any calibration values stored on the
device. Therefore, we now use the first values reported by the device (iff
they are not _way_ off, which would indicate movement) to initialize the
calibration values. To allow users to change this calibration data, we
provide a pro_calib sysfs attribute.
We also change the "flat" values so user-space correctly smoothes our
data. It makes slightly off zero-positions less visible while still
guaranteeing highly precise movement reports. Note that the pro controller
reports zero-positions in a quite huge range (at least: -100 to +100).
Reported-by: Rafael Brune <mail@rbrune.de>
Tested-by: Rafael Brune <mail@rbrune.de>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
The input core has an internal spinlock that is acquired during event
injection via input_event() and friends but also held during FF callbacks.
That means, there is no way to share a lock between event-injection and FF
handling. Unfortunately, this is what is required for wiimote state
tracking and what we do with state.lock and input->lock.
This deadlock can be triggered when using continuous data reporting and FF
on a wiimote device at the same time. It takes at least 30 minutes of
stress-testing to trigger it, but users reported considerably shorter
times (http://bpaste.net/show/132504/) when using some gaming-console
emulators.
The real problem is that we have two copies of internal state, one in the
wiimote objects and the other in the input device. As the input-lock is
not supposed to be accessed from outside of input-core, we have no other
chance than offloading FF handling into a worker. This actually works
pretty nicely and also allows us to implicitly merge fast rumble changes
into a single request.
Due to the 3-layered workers (rumble+queue+l2cap) this might reduce FF
responsiveness. Initial tests were fine, so let's fix the race first; if
it turns out to be too slow, we can always handle FF out-of-band and skip
the queue-worker.
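Sketch of the resulting split (field names hypothetical): the FF
callback, which runs under the input core's spinlock, only caches the
magnitude and kicks the worker; the worker then talks to the device
under the driver's own lock.

static int wiimote_ff_play(struct input_dev *dev, void *data,
			   struct ff_effect *eff)
{
	struct wiimote_data *wdata = input_get_drvdata(dev);

	/* must not take locks shared with the output path */
	wdata->state.cache_rumble = eff->u.rumble.strong_magnitude;
	schedule_work(&wdata->rumble_worker);
	return 0;
}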
Cc: <stable@vger.kernel.org> # 3.11+
Reported-by: Thomas Schneider
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
The analog-stick vertical axes are inverted. Fix that! Otherwise, games
and other gamepad applications need to carry their own fixups (which they
thankfully haven't done, yet).
Cc: <stable@vger.kernel.org> # 3.11+
Reported-by: Rafael Brune <mail@rbrune.de>
Tested-by: Rafael Brune <mail@rbrune.de>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>