Commit Graph

328917 Commits

Author SHA1 Message Date
M1cha 47fd65420a ptrace: fix ptrace defect caused by a merge failure
This bug was introduced with I9493f28c30356a10eccb320e0a2d1a141388af9a

Change-Id: Ib4e9866eb56f4370ca8d33761cf0cd4f795db5be
Signed-off-by: M1cha <sigmaepsilon92@gmail.com>
2015-08-26 07:23:20 -07:00
Dan Pasanen 4a740604d9 Revert "Revert "CHROMIUM: arch/arm: move secure_computing into trace; respect return code""
This reverts commit a25e94221e.

Change-Id: Ic0f0e2a217656f5500058760cbde35277f50c09c
2015-08-26 07:20:59 -07:00
Dan Pasanen 487905d4b3 Revert "CHROMIUM: arch/arm: move secure_computing into trace; respect return code"
This reverts commit a651024793.

Change-Id: Ic21913b9ca930244de4cc52f5bbc2939793bcae7
(cherry picked from commit a25e94221e)
2015-08-25 19:42:39 -07:00
David S. Miller d34f22b184 ipv4: Missing sk_nulls_node_init() in ping_unhash().
If we don't do that, then the poison value is left in the ->pprev
backlink.

This can cause crashes if we do a disconnect, followed by a connect().

Change-Id: I021261c6639339a0d027ee5cb086ba7301db334a
Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Wen Xu <hotdog3645@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-05-02 20:50:43 +00:00
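
A hedged kernel-side sketch of the shape of such a fix (illustrative, not the verbatim upstream diff): after the socket is unlinked from the ping hash, re-initialize its nulls node so ->pprev no longer carries the list-poison value.

/* Illustrative sketch only; the lock and table names follow net/ipv4/ping.c
 * conventions and are assumptions here.
 */
if (sk_hashed(sk)) {
	write_lock_bh(&ping_table.lock);
	hlist_nulls_del(&sk->sk_nulls_node);
	sk_nulls_node_init(&sk->sk_nulls_node);	/* the missing call */
	sock_put(sk);
	/* ... clear the bound port, then ... */
	write_unlock_bh(&ping_table.lock);
}
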
Dan Pasanen edff3cf62c jf: set proper compass calibration point for vzw model
Change-Id: I4634d9cdb436efca15a3698402a87a8d2995551e
2015-04-28 00:59:25 +00:00
D.S. Ljungmark 91f93dbc96 ipv6: Don't reduce hop limit for an interface
A local route may have a lower hop_limit set than global routes do.

RFC 3756, Section 4.2.7, "Parameter Spoofing"

>   1.  The attacker includes a Current Hop Limit of one or another small
>       number which the attacker knows will cause legitimate packets to
>       be dropped before they reach their destination.

>   As an example, one possible approach to mitigate this threat is to
>   ignore very small hop limits.  The nodes could implement a
>   configurable minimum hop limit, and ignore attempts to set it below
>   said limit.

Change-Id: Ie70d7d35ea94738209b525872e7e9022676e05b3
Signed-off-by: D.S. Ljungmark <ljungmark@modio.se>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 8740944a02)
2015-04-08 22:44:18 +00:00
Erik Kline 2b03d6bc81 net: ipv6: allow choosing optimistic addresses with use_optimistic
The use_optimistic sysctl makes optimistic IPv6 addresses
equivalent to preferred addresses for source address selection
(e.g., when calling connect()), but it does not allow an
application to bind to optimistic addresses. This behaviour is
inconsistent - for example, it doesn't make sense for bind() to
an optimistic address to fail with EADDRNOTAVAIL while connect()
chooses that address as the outgoing address on the same socket.

Bug: 17769720
Bug: 18609055
Change-Id: I9de0d6c92ac45e29d28e318ac626c71806666f13
Signed-off-by: Erik Kline <ek@google.com>
Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
2015-03-29 22:05:29 -05:00
Erik Kline 51f1831d20 net: ipv6: Add a sysctl to make optimistic addresses useful candidates
Add a sysctl that causes an interface's optimistic addresses
to be considered equivalent to other non-deprecated addresses
for source address selection purposes.  Preferred addresses
will still take precedence over optimistic addresses, subject
to other ranking in the source address selection algorithm.

This is useful where different interfaces are connected to
different networks from different ISPs (e.g., a cell network
and a home wifi network).

The current behaviour complies with RFC 3484/6724, and it
makes sense if the host has only one interface, or has
multiple interfaces on the same network (same or cooperating
administrative domain(s)), but not in the multiple distinct
networks case.

For example, if a mobile device has an IPv6 address on an LTE
network and then connects to IPv6-enabled wifi, while the wifi
IPv6 address is undergoing DAD, IPv6 connections will try to use
the wifi default route with the LTE IPv6 address, and will get
stuck until they time out.

Also, because optimistic nodes can receive frames, issue
an RTM_NEWADDR as soon as DAD starts (with the IFA_F_OPTIMISTIC
flag appropriately set).  A second RTM_NEWADDR is sent if DAD
completes (the address flags have changed), otherwise an
RTM_DELADDR is sent.

Also: add an entry in ip-sysctl.txt for optimistic_dad.

[backport of net-next 7fd2561e4ebdd070ebba6d3326c4c5b13942323f]

Signed-off-by: Erik Kline <ek@google.com>
Acked-by: Lorenzo Colitti <lorenzo@google.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bug: 17769720
Change-Id: I440a9b8c788db6767d191bbebfd2dff481aa9e0d
2015-03-29 22:05:29 -05:00
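
A hedged userspace sketch of how the new knob is exercised; the /proc path follows the usual per-interface sysctl layout and the address is a placeholder, both assumptions for illustration.

/* Sketch: enable use_optimistic, then bind an IPv6 socket. The /proc path
 * and the address are illustrative assumptions.
 */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sys/net/ipv6/conf/all/use_optimistic", O_WRONLY);
	if (fd >= 0) {
		write(fd, "1", 1);	/* treat optimistic addresses as source candidates */
		close(fd);
	}

	int s = socket(AF_INET6, SOCK_DGRAM, 0);
	struct sockaddr_in6 sa;
	memset(&sa, 0, sizeof(sa));
	sa.sin6_family = AF_INET6;
	inet_pton(AF_INET6, "2001:db8::1", &sa.sin6_addr);	/* example address, possibly still in DAD */

	/* With the two patches above, bind() to an optimistic address no longer
	 * fails with EADDRNOTAVAIL while connect() would have picked it anyway. */
	if (bind(s, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		perror("bind");

	close(s);
	return 0;
}
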
Dan Pasanen dd44326030 Revert "smart_mtp_se6e8fa: add gamma control"
This reverts commit 354aebb812.

Change-Id: I7ed9c1b3bae80ba711cedd9daacc4bb5ee6c1ce5
2015-03-20 21:31:41 +00:00
Kirill A. Shutemov 99d34150a9 mm: Fix NULL pointer dereference in madvise(MADV_WILLNEED) support
Sasha Levin found a NULL pointer dereference that is due to a missing
page table lock, which in turn is due to the pmd entry in question being
a transparent huge-table entry.

The code - introduced in commit 1998cc048901 ("mm: make
madvise(MADV_WILLNEED) support swap file prefetch") - correctly checks
for this situation using pmd_none_or_trans_huge_or_clear_bad(), but it
turns out that that function doesn't work correctly.

pmd_none_or_trans_huge_or_clear_bad() expected that pmd_bad() would
trigger if the transparent hugepage bit was set, but it doesn't do that
if pmd_numa() is also set. Note that the NUMA bit only gets set on real
NUMA machines, so people trying to reproduce this on most normal
development systems would never actually trigger this.

Fix it by removing the very subtle (and subtly incorrect) expectation,
and instead just checking pmd_trans_huge() explicitly.

Reported-by: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
[ Additionally remove the now stale test for pmd_trans_huge() inside the
  pmd_bad() case - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Change-Id: I3f3763f236ef102de735297cd175cf514d40d28f
2015-03-19 18:26:17 +00:00
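
For context, the user-visible call this path serves is madvise(MADV_WILLNEED) on a file-backed mapping; a minimal, self-contained example (the file path is illustrative):

/* Minimal illustration of madvise(MADV_WILLNEED) on a file-backed mapping.
 * The file path is an arbitrary example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/tmp/example.dat", O_RDONLY);
	if (fd < 0) { perror("open"); return 1; }

	struct stat st;
	if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

	void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }

	/* Ask the kernel to prefetch the pages; this is the call path the
	 * fix above hardens against transparent-huge/NUMA pmd entries. */
	if (madvise(p, st.st_size, MADV_WILLNEED) < 0)
		perror("madvise");

	munmap(p, st.st_size);
	close(fd);
	return 0;
}
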
Sasha Levin 17b7c3e40c vfs: read file_handle only once in handle_to_path
We used to read file_handle twice. Once to get the amount of extra bytes, and
once to fetch the entire structure.

This may be problematic since we do size verifications only after the first
read, so if the number of extra bytes changes in userspace between the first
and second calls, we'll have an incoherent view of file_handle.

Instead, read the constant size once, and copy that over to the final
structure without having to re-read it again.

Change-Id: Ib05e5129629e27d5a05953098c5bc470fae40d2a
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2015-03-19 18:25:33 +00:00
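
The structure in question is the one userspace hands to name_to_handle_at()/open_by_handle_at(); a hedged sketch of that round trip, with paths and the handle size chosen purely for illustration:

/* Sketch of the name_to_handle_at()/open_by_handle_at() round trip whose
 * copy-in path the fix above hardens. Paths and sizes are placeholders.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const unsigned int max_bytes = 128;	/* assumed upper bound for this demo */
	struct file_handle *fh = malloc(sizeof(*fh) + max_bytes);
	int mount_id, mount_fd, fd;

	if (!fh)
		return 1;
	fh->handle_bytes = max_bytes;		/* the size field the kernel must read only once */

	if (name_to_handle_at(AT_FDCWD, "/tmp/example.dat", fh, &mount_id, 0) < 0) {
		perror("name_to_handle_at");
		return 1;
	}

	mount_fd = open("/tmp", O_RDONLY | O_DIRECTORY);
	fd = open_by_handle_at(mount_fd, fh, O_RDONLY);	/* requires CAP_DAC_READ_SEARCH */
	if (fd < 0)
		perror("open_by_handle_at");
	else
		close(fd);

	close(mount_fd);
	free(fh);
	return 0;
}
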
Rik van Riel 04e6498839 add extra free kbytes tunable
Add a userspace visible knob to tell the VM to keep an extra amount
of memory free, by increasing the gap between each zone's min and
low watermarks.

This is useful for realtime applications that call system
calls and have a bound on the number of allocations that happen
in any short time period.  In this application, extra_free_kbytes
would be left at an amount equal to or larger than the
maximum number of allocations that happen in any burst.

It may also be useful to reduce the memory use of virtual
machines (temporarily?), in a way that does not cause memory
fragmentation like ballooning does.

[ccross]
Revived for use on old kernels where no other solution exists.
The tunable will be removed on kernels that do better at avoiding
direct reclaim.

Change-Id: I765a42be8e964bfd3e2886d1ca85a29d60c3bb3e
Signed-off-by: Rik van Riel<riel@redhat.com>
Signed-off-by: Colin Cross <ccross@android.com>
2015-03-18 14:01:23 -05:00
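
A hedged example of driving the tunable from userspace; the procfs path mirrors the usual vm sysctl layout and the value is arbitrary, both assumptions:

/* Sketch: reserve an extra 64 MiB of free memory via the new tunable.
 * The path and value are illustrative assumptions.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/extra_free_kbytes", "w");
	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "%d\n", 65536);	/* kbytes kept free above the normal min watermark */
	fclose(f);
	return 0;
}
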
Spegelius ece85e221b Jactive defconfig up to date... again
Change-Id: I70e19838100dc57c17b6fbd17c420054553d5669
2015-03-10 17:41:10 +02:00
Colin Cross cf358d5d5b ARM: ftrace: Trace function entry before updating index
Commit 722b3c7469 modified x86 ftrace to
avoid tracing all functions called from irqs when function graph was
used with a filter.  Port the same fix to ARM.

Change-Id: I3facbb85447effe54ff92db206068087c892fc28
Signed-off-by: Colin Cross <ccross@android.com>
2015-03-03 17:35:25 -06:00
myfluxi 40ff230a03 arm: vfpmodule: Fix warning procfs vfp_bounce reporting failed
Creation of the procfs cpu/vfp_bounce node fails because VFP init now runs too
early. Fix this by creating it at rootfs_initcall time, as it was before the NEON patches.

<6>[    0.130770] VFP support v0.3: implementor 51 architecture 64 part 6f variant 2 rev 0
<4>[    0.130795] ------------[ cut here ]------------
<4>[    0.130813] WARNING: at fs/proc/generic.c:323 __xlate_proc_name+0xac/0xcc()
<4>[    0.130822] name 'cpu/vfp_bounce'
<4>[    0.130855] [<c010e26c>] (unwind_backtrace+0x0/0x144) from [<c0a20f58>] (dump_stack+0x20/0x24)
<4>[    0.130879] [<c0a20f58>] (dump_stack+0x20/0x24) from [<c019b670>] (warn_slowpath_common+0x58/0x70)
<4>[    0.130899] [<c019b670>] (warn_slowpath_common+0x58/0x70) from [<c019b704>] (warn_slowpath_fmt+0x40/0x48)
<4>[    0.130919] [<c019b704>] (warn_slowpath_fmt+0x40/0x48) from [<c02c2ad8>] (__xlate_proc_name+0xac/0xcc)
<4>[    0.130938] [<c02c2ad8>] (__xlate_proc_name+0xac/0xcc) from [<c02c2b50>] (__proc_create+0x58/0x100)
<4>[    0.130956] [<c02c2b50>] (__proc_create+0x58/0x100) from [<c02c2ed0>] (proc_create_data+0x5c/0xc0)
<4>[    0.130979] [<c02c2ed0>] (proc_create_data+0x5c/0xc0) from [<c0f03484>] (vfp_init+0x19c/0x200)
<4>[    0.131000] [<c0f03484>] (vfp_init+0x19c/0x200) from [<c0f00c98>] (do_one_initcall+0x98/0x168)
<4>[    0.131020] [<c0f00c98>] (do_one_initcall+0x98/0x168) from [<c0f00e60>] (kernel_init+0xf8/0x1b4)
<4>[    0.131043] [<c0f00e60>] (kernel_init+0xf8/0x1b4) from [<c01081a0>] (kernel_thread_exit+0x0/0x8)
<4>[    0.131076] ---[ end trace ea6d9a9b5e947151 ]---
<3>[    0.131086] Failed to create procfs node for VFP bounce reporting

Change-Id: Ic9bbe46ee026df0af76df70fd95275f284c8dd63
Signed-off-by: Paul Reioux <reioux@gmail.com>
2015-03-03 17:35:25 -06:00
Ard Biesheuvel f239e05b8b ARM: move VFP init to an earlier boot stage
In order to use the NEON unit in the kernel, we should
initialize it a bit earlier in the boot process so NEON users
that like to do a quick benchmark at load time (like the
xor_blocks or RAID-6 code) find the NEON/VFP unit already
enabled.

Replaced late_initcall() with core_initcall().

Change-Id: I89b7a5a79dddf6083766a87f9d082c7ee61a54e0
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2015-03-03 17:35:25 -06:00
Fabio Estevam 5b2f3fb2d9 ARM: 7782/1: Kconfig: Let ARM_ERRATA_364296 not depend on CONFIG_SMP
imx_v6_v7_defconfig handles both multi-core and single-core SoCs, and it has CONFIG_SMP=y selected by default.

With such config we cannot select ARM_ERRATA_364296, as it depends on !SMP.

Let ARM_ERRATA_364296 not depend on CONFIG_SMP, so that we can select this erratum for ARM1136 SoCs even if CONFIG_SMP=y is enabled.

Change-Id: Ia96ea3d8e9468884962793cc4a6b19f62e1adc59
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-03 17:35:25 -06:00
Stephen Warren 610f458143 ARM: 7780/1: add missing linker section markup to head-common.S
Macro __INIT is used to place various code in head-common.S into the init
section. This should be matched by a closing __FINIT. Also, add an
explicit ".text" to ensure subsequent code is placed into the correct
section; __FINIT is simply a closing marker to match __INIT and doesn't
guarantee to revert to .text.

This historically caused no problem, because macro __CPUINIT was used at
the exact location where __FINIT was missing, which then placed following
code into the cpuinit section. However, with commit 22f0a2736 "init.h:
remove __cpuinit sections from the kernel" applied, __CPUINIT becomes a
no-op, thus leaving all this code in the init section, rather than the
regular text section. This caused issues such as secondary CPU boot
failures or crashes.

Change-Id: I2c346b58c4c4d50a162b19e9967d44ec5e97c7d4
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-03 17:35:25 -06:00
Ard Biesheuvel 8ea8881e1c ARM: crypto: add NEON accelerated XOR implementation
Add a source file xor-neon.c (which is really just the reference
C implementation passed through the GCC vectorizer) and hook it
up to the XOR framework.

Change-Id: If251e5a16d628ddfdd8d6ce40b78014cd2962d7e
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2015-03-03 17:35:24 -06:00
Dan Pasanen 682cbf797f defconfig: jf: enable arm neon optimized crypto
Change-Id: I63fa532efe278710829db0fbf2eb3440c6f3dad8
2015-03-03 17:35:24 -06:00
Ard Biesheuvel b962d86f6f ARM: add support for kernel mode NEON
In order to safely support the use of NEON instructions in
kernel mode, some precautions need to be taken:
- the userland context that may be present in the registers (even
  if the NEON/VFP is currently disabled) must be stored under the
  correct task (which may not be 'current' in the UP case),
- to avoid having to keep track of additional vfpstates for the
  kernel side, disallow the use of NEON in interrupt context
  and run with preemption disabled,
- after use, re-enable preemption and re-enable the lazy restore
  machinery by disabling the NEON/VFP unit.

This patch adds the functions kernel_neon_begin() and
kernel_neon_end() which take care of the above. It also adds
the Kconfig symbol KERNEL_MODE_NEON to enable it.

Change-Id: Iddde22ad8684e92e5f0aa4999962ad743eff8f7c
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2015-03-03 17:35:24 -06:00
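
A hedged sketch of the calling convention these helpers provide (the NEON and scalar routines are hypothetical placeholders):

/* Hedged sketch of the kernel_neon_begin()/kernel_neon_end() pattern the
 * commit above describes; the worker routines are hypothetical.
 */
#include <asm/neon.h>
#include <asm/simd.h>

void xor_neon_inner(unsigned long bytes, unsigned long *p1, unsigned long *p2);	/* hypothetical */
void xor_scalar(unsigned long bytes, unsigned long *p1, unsigned long *p2);	/* hypothetical */

static void example_xor(unsigned long bytes, unsigned long *p1, unsigned long *p2)
{
	if (may_use_simd()) {		/* NEON is not allowed in interrupt context */
		kernel_neon_begin();	/* saves userland NEON/VFP state, disables preemption */
		xor_neon_inner(bytes, p1, p2);
		kernel_neon_end();	/* re-enables preemption and lazy NEON restore */
	} else {
		xor_scalar(bytes, p1, p2);
	}
}
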
Jussi Kivilinna 4546e1b478 crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once
Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash
crypto modules that register multiple algorithms.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-03-03 17:35:24 -06:00
Tim Chen aef3356787 crypto: sha512 - Expose generic sha512 routine to be callable from other modules
Other SHA512 routines may need to use the generic routine when
FPU is not available.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-03-03 17:35:24 -06:00
Tim Chen 732dbdb935 crypto: sha256 - Expose SHA256 generic routine to be callable externally.
Other SHA256 routines may need to use the generic routine when
FPU is not available.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-03-03 17:35:24 -06:00
Ard Biesheuvel 513e686767 ARM: pull in <asm/simd.h> from asm-generic
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2015-03-03 17:35:23 -06:00
Ard Biesheuvel d74abc2b4f ARM: be strict about FP exceptions in kernel mode
The support code in vfp_support_entry does not care whether the
exception that caused it to be invoked occurred in kernel mode or
in user mode. However, neither condition that could trigger this
exception (lazy restore and VFP bounce to support code) is
currently allowable in kernel mode.

In either case, print a message describing the condition before
letting the undefined instruction handler run its course and trigger
an oops.

Change-Id: I1a9508f3adfb0264cac1984d6080d117bfcb39f6
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
2015-03-03 17:35:23 -06:00
Jussi Kivilinna 3dab984695 ARM: 8120/1: crypto: sha512: add ARM NEON implementation
This patch adds ARM NEON assembly implementation of SHA-512 and SHA-384
algorithms.

tcrypt benchmark results on Cortex-A8, sha512-generic vs sha512-neon-asm:

block-size      bytes/update    old-vs-new
16              16              2.99x
64              16              2.67x
64              64              3.00x
256             16              2.64x
256             64              3.06x
256             256             3.33x
1024            16              2.53x
1024            256             3.39x
1024            1024            3.52x
2048            16              2.50x
2048            256             3.41x
2048            1024            3.54x
2048            2048            3.57x
4096            16              2.49x
4096            256             3.42x
4096            1024            3.56x
4096            4096            3.59x
8192            16              2.48x
8192            256             3.42x
8192            1024            3.56x
8192            4096            3.60x
8192            8192            3.60x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Conflicts:
	crypto/Kconfig

Change-Id: I949f87031d2c49cb7a0474aae4cf83da2154749a
2015-03-03 17:35:23 -06:00
Jussi Kivilinna 9037d7066a ARM: 8119/1: crypto: sha1: add ARM NEON implementation
This patch adds ARM NEON assembly implementation of SHA-1 algorithm.

tcrypt benchmark results on Cortex-A8, sha1-arm-asm vs sha1-neon-asm:

block-size      bytes/update    old-vs-new
16              16              1.04x
64              16              1.02x
64              64              1.05x
256             16              1.03x
256             64              1.04x
256             256             1.30x
1024            16              1.03x
1024            256             1.36x
1024            1024            1.52x
2048            16              1.03x
2048            256             1.39x
2048            1024            1.55x
2048            2048            1.59x
4096            16              1.03x
4096            256             1.40x
4096            1024            1.57x
4096            4096            1.62x
8192            16              1.03x
8192            256             1.40x
8192            1024            1.58x
8192            4096            1.63x
8192            8192            1.63x

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Conflicts:
	crypto/Kconfig

Change-Id: I4e40ca3e67b85b15ef21c28d91946d6b64d820c8
2015-03-03 17:35:23 -06:00
Jussi Kivilinna 3e1d945f8e ARM: 8118/1: crypto: sha1/make use of common SHA-1 structures
Common SHA-1 structures are defined in <crypto/sha.h> for code sharing.

This patch changes SHA-1/ARM glue code to use these structures.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-03 17:35:23 -06:00
Mikulas Patocka aa811077a6 crypto: arm-aes - fix encryption of unaligned data
Fix the same alignment bug as in arm64 - we need to pass the residue of
unprocessed bytes as the last argument to blkcipher_walk_done.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org	# 3.13+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-03-03 17:35:23 -06:00
Russell King 5cc2eec90d CRYPTO: Fix more AES build errors
Building a multi-arch kernel results in:

arch/arm/crypto/built-in.o: In function `aesbs_xts_decrypt':
sha1_glue.c:(.text+0x15c8): undefined reference to `bsaes_xts_decrypt'
arch/arm/crypto/built-in.o: In function `aesbs_xts_encrypt':
sha1_glue.c:(.text+0x1664): undefined reference to `bsaes_xts_encrypt'
arch/arm/crypto/built-in.o: In function `aesbs_ctr_encrypt':
sha1_glue.c:(.text+0x184c): undefined reference to `bsaes_ctr32_encrypt_blocks'
arch/arm/crypto/built-in.o: In function `aesbs_cbc_decrypt':
sha1_glue.c:(.text+0x19b4): undefined reference to `bsaes_cbc_encrypt'

This code is already runtime-conditional on NEON being supported, so
there's no point compiling it out depending on the minimum build
architecture.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-03 17:35:23 -06:00
Russell King 8fba7e1a0e ARM: add .gitignore entry for aesbs-core.S
This avoids this file being incorrectly added to git.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-03 17:35:22 -06:00
Ard Biesheuvel c50b4e07e2 ARM: add support for bit sliced AES using NEON instructions
Bit sliced AES gives around 45% speedup on Cortex-A15 for encryption
and around 25% for decryption. This implementation of the AES algorithm
does not rely on any lookup tables so it is believed to be invulnerable
to cache timing attacks.

This algorithm processes up to 8 blocks in parallel in constant time. This
means that it is not usable by chaining modes that are strictly sequential
in nature, such as CBC encryption. CBC decryption, however, can benefit from
this implementation and runs about 25% faster. The other chaining modes
implemented in this module, XTS and CTR, can execute fully in parallel in
both directions.

The core code has been adopted from the OpenSSL project (in collaboration
with the original author, on cc). For ease of maintenance, this version is
identical to the upstream OpenSSL code, i.e., all modifications that were
required to make it suitable for inclusion into the kernel have been made
upstream. The original can be found here:

    http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=6f6a6130

Note to integrators:
While this implementation is significantly faster than the existing table
based ones (generic or ARM asm), especially in CTR mode, the effects on
power efficiency are unclear as of yet. This code does fundamentally more
work, by calculating values that the table based code obtains by a simple
lookup; only by doing all of that work in a SIMD fashion, it manages to
perform better.

Cc: Andy Polyakov <appro@openssl.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Conflicts:
	arch/arm/crypto/Makefile

Change-Id: Iefcf746f1caefd7a5874d74ec78ebb781311e26d
2015-03-03 17:35:22 -06:00
Ard Biesheuvel cc0c195442 ARM: move AES typedefs and function prototypes to separate header
Put the struct definitions for AES keys and the asm function prototypes in a
separate header and export the asm functions from the module.
This allows other drivers to use them directly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2015-03-03 17:35:22 -06:00
kbuild test robot fcb4621f6a crypto: ablk_helper - Replace memcpy with struct assignment
tree:   git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head:   48e6dc1b2a1ad8186d48968d5018912bdacac744
commit: a62b01cd6cc1feb5e80d64d6937c291473ed82cb [20/24] crypto: create generic version of ablk_helper

coccinelle warnings: (new ones prefixed by >>)

>> crypto/ablk_helper.c:97:2-8: Replace memcpy with struct assignment
>> crypto/ablk_helper.c:78:2-8: Replace memcpy with struct assignment

Please consider folding the attached diff :-)

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-03-03 17:35:22 -06:00
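
The transformation the warning asks for is plain C and easy to show in isolation; the struct below is a hypothetical stand-in, not the ablk_helper type:

/* Illustrative only: replacing a sized memcpy of a whole struct with a
 * direct struct assignment, as the coccinelle warning suggests.
 */
struct crypt_request_ctx {	/* hypothetical type for the example */
	unsigned int flags;
	unsigned char iv[16];
};

void copy_ctx(struct crypt_request_ctx *dst, const struct crypt_request_ctx *src)
{
	/* Before: memcpy(dst, src, sizeof(*dst)); */
	*dst = *src;		/* After: type-checked struct assignment */
}
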
Ard Biesheuvel 57c0041670 crypto: create generic version of ablk_helper
Create a generic version of ablk_helper so it can be reused
by other architectures.

Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Conflicts:
	crypto/Kconfig
	crypto/Makefile

Change-Id: I01a0d1192383c9a0287590a2cef1424e4675c531
2015-03-03 17:35:22 -06:00
Dan Pasanen 37722558b4 defconfig: jf: enable Motorola's MSM memcpy enhancements
Change-Id: I56fd31c1ab0b9f558b854cb5ad948bd618b3ff9b
2015-03-03 17:35:22 -06:00
Hong-Mei Li 13217d1939 arm: lib: Fix makefile bug
When HAS_MACH_MEMUTILS is defined, two versions of memcpy.S would
be built at the same time; fix that here.

Signed-off-by: Hong-Mei Li <a21834@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
2015-03-03 17:35:21 -06:00
Jason Hrycay a0b1532c9f msm: memutils: memcpy, memmove, copy_page optimization
Preload farther to take advantage of the memory bus, and assume
64-byte cache lines.  Unroll some pairs of ldm/stm as well, for
unexplainable reasons.

Future enhancements should include,

- #define for how far to preload, possibly defined separately for
  memcpy, copy_*_user
- Tuning for misaligned buffers
- Tuning for memmove
- Tuning for small buffers
- Understanding mechanism behind ldm/stm unroll causing some gains
  in copy_to_user

BASELINE (msm8960pro):
======================================================================
memcpy 1000MB at 5MB       : took 808850 usec, bandwidth 1236.236 MB/s
copy_to_user 1000MB at 5MB : took 810071 usec, bandwidth 1234.234 MB/s
copy_from_user 1000MB at 5M: took 942926 usec, bandwidth 1060.060 MB/s
memmove 1000GB at 5MB      : took 848588 usec, bandwidth 1178.178 MB/s
copy_to_user 1000GB at 4kB : took 847916 usec, bandwidth 1179.179 MB/s
copy_from_user 1000GB at 4k: took 935113 usec, bandwidth 1069.069 MB/s
copy_page 1000GB at 4kB    : took 779459 usec, bandwidth 1282.282 MB/s

THIS PATCH:
======================================================================
memcpy 1000MB at 5MB       : took 346223 usec, bandwidth 2888.888 MB/s
copy_to_user 1000MB at 5MB : took 348084 usec, bandwidth 2872.872 MB/s
copy_from_user 1000MB at 5M: took 348176 usec, bandwidth 2872.872 MB/s
memmove 1000GB at 5MB      : took 348267 usec, bandwidth 2871.871 MB/s
copy_to_user 1000GB at 4kB : took 377018 usec, bandwidth 2652.652 MB/s
copy_from_user 1000GB at 4k: took 371829 usec, bandwidth 2689.689 MB/s
copy_page 1000GB at 4kB    : took 383763 usec, bandwidth 2605.605 MB/s

Signed-off-by: Chris Fries <C.Fries@motorola.com>
Reviewed-by: Christopher Fries <cfries@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>

Conflicts:
	arch/arm/lib/Makefile
	arch/arm/mach-msm/Kconfig

Change-Id: I8400f3f9f4b27920d56761923b836cd51793f741
2015-03-03 17:35:21 -06:00
Chintan Pandya ab0ce816e0 ksm: Provide support to use deferred timers for scanner thread
The KSM thread that scans pages is scheduled on a fixed timeout.
That wakes the CPU from its idle state and hence may affect power
consumption. Provide optional support for using a deferred timer,
which suits low-power use cases.

To enable deferred timers,
$ echo 1 > /sys/kernel/mm/ksm/deferred_timer

Change-Id: I07fe199f97fe1f72f9a9e1b0b757a3ac533719e8
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
2015-03-03 17:34:32 -06:00
Dan Pasanen e3154fc46d defconfig: jf: enable ksm
Change-Id: Ifdf117556b320110e33c8ea823faf067855a7934
2015-03-03 20:10:19 +00:00
Dan Pasanen 6f49548522 Merge remote-tracking branch 'caf/LA.AF.1.1_rb1.13' into cm-12.0
Conflicts:
	drivers/misc/qseecom.c
	kernel/cgroup.c

Change-Id: I7049c69ae3279b4165b74a7283263358cdd787fc
2015-03-03 08:57:49 -06:00
Dan Pasanen 7c87167efa HID: wiimote: fix warnings
Change-Id: I4da92db6a86c50562864520a2721b7411c661be9
2015-03-02 16:08:59 -06:00
David Herrmann 3f91b05a20 input: document gamepad API and add extra keycodes
Until today all gamepad input drivers report their data differently. It is
nearly impossible to write applications for more than one device in a
generic way. Therefore, this patch introduces a uniform gamepad API which
will be used for all new drivers.

Instead of mapping buttons by their labels, we now map them by position.
This allows applications to work with any gamepad regardless of the labels
on the buttons. Furthermore, we standardize the ABS_* codes for analog
triggers and sticks.

For D-Pads the long overdue BTN_DPAD_* codes are introduced. They should
be fairly obvious how to use. To avoid confusion, the action buttons now
have BTN_EAST/SOUTH/WEST/NORTH aliases.

Reported-by: Todd Showalter <todd@electronjump.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

Conflicts:
	include/uapi/linux/input.h

Change-Id: If65177b7e1abb6868a9042ad1d31c47cdf0ed882

Conflicts:
	include/linux/input.h
2015-03-02 15:55:56 -06:00
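
A hedged userspace sketch of consuming the standardized codes via evdev, assuming headers that already carry them; the device node is an example:

/* Sketch: read gamepad events and match the standardized button codes
 * introduced above. The event node is an example.
 */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	struct input_event ev;
	int fd = open("/dev/input/event3", O_RDONLY);	/* example device node */
	if (fd < 0) { perror("open"); return 1; }

	while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
		if (ev.type != EV_KEY || ev.value != 1)
			continue;
		switch (ev.code) {
		case BTN_SOUTH:     printf("action button (south)\n"); break;
		case BTN_DPAD_UP:   printf("d-pad up\n");              break;
		case BTN_DPAD_DOWN: printf("d-pad down\n");            break;
		}
	}
	close(fd);
	return 0;
}
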
Dan Pasanen 9e9ba1fed6 jf: enable wiimote support
Change-Id: I10d151b7cfc81fffbf365764c44a754da8d0eab7
2015-03-02 15:43:40 -06:00
Andy Shevchenko d3b59ccc59 HID: hid-wiimote: print small buffers via %*phC
Instead of passing each byte through stack let's use %*phC specifier to dump
buffer as a hex string.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-03-02 15:40:41 -06:00
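
For reference, this printk extension takes the byte count as the field width; a hedged kernel-side sketch with hypothetical names:

/* Sketch: dump a small buffer as colon-separated hex via the %*phC
 * printk extension. Function and variable names are hypothetical.
 */
#include <linux/printk.h>
#include <linux/types.h>

static void dump_report(const u8 *buf, int len)
{
	pr_debug("wiimote report: %*phC\n", len, buf);
}
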
David Herrmann 69c6ee4edd input: document gamepad API and add extra keycodes
Until today all gamepad input drivers report their data differently. It is
nearly impossible to write applications for more than one device in a
generic way. Therefore, this patch introduces a uniform gamepad API which
will be used for all new drivers.

Instead of mapping buttons by their labels, we now map them by position.
This allows applications to work with any gamepad regardless of the labels
on the buttons. Furthermore, we standardize the ABS_* codes for analog
triggers and sticks.

For D-Pads the long overdue BTN_DPAD_* codes are introduced. They should
be fairly obvious how to use. To avoid confusion, the action buttons now
have BTN_EAST/SOUTH/WEST/NORTH aliases.

Reported-by: Todd Showalter <todd@electronjump.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-03-02 15:39:24 -06:00
David Herrmann 9fa35c750a HID: wiimote: add pro-controller analog stick calibration
The analog sticks of the pro-controller might report slightly off values.
To guarantee a uniform setup, we now calibrate analog-stick values during
pro-controller setup.

Unfortunately, the pro-controller fails during normal EEPROM reads and I
couldn't figure out whether there are any calibration values stored on the
device. Therefore, we now use the first values reported by the device (iff
they are not _way_ off, which would indicate movement) to initialize the
calibration values. To allow users to change this calibration data, we
provide a pro_calib sysfs attribute.

We also change the "flat" values so user-space correctly smoothes our
data. It makes slightly off zero-positions less visible while still
guaranteeing highly precise movement reports. Note that the pro controller
reports zero-positions in a quite huge range (at least: -100 to +100).

Reported-by: Rafael Brune <mail@rbrune.de>
Tested-by: Rafael Brune <mail@rbrune.de>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-03-02 15:38:52 -06:00
David Herrmann 1e576648e5 HID: wiimote: fix FF deadlock
The input core has an internal spinlock that is acquired during event
injection via input_event() and friends but also held during FF callbacks.
That means, there is no way to share a lock between event-injection and FF
handling. Unfortunately, this is what is required for wiimote state
tracking and what we do with state.lock and input->lock.

This deadlock can be triggered when using continuous data reporting and FF
on a wiimote device at the same time. It takes me at least 30m of
stress-testing to trigger it, but users reported considerably shorter
times (http://bpaste.net/show/132504/) when using some gaming-console
emulators.

The real problem is that we have two copies of internal state, one in the
wiimote objects and the other in the input device. As the input-lock is
not supposed to be accessed from outside of input-core, we have no other
chance than offloading FF handling into a worker. This actually works
pretty nice and also allows to implictly merge fast rumble changes into a
single request.

Due to the 3-layered workers (rumble+queue+l2cap) this might reduce FF
responsiveness. Initial tests were fine, so let's fix the race first, and if
it turns out to be too slow we can always handle FF out-of-band and skip
the queue-worker.

Cc: <stable@vger.kernel.org> # 3.11+
Reported-by: Thomas Schneider
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-03-02 15:38:11 -06:00
David Herrmann 2bbab85429 HID: wiimote: fix inverted pro-controller axes
The analog-stick vertical axes are inverted. Fix that! Otherwise, games
and other gamepad applications need to carry their own fixups (which they
thankfully haven't done, yet).

Cc: <stable@vger.kernel.org> # 3.11+
Reported-by: Rafael Brune <mail@rbrune.de>
Tested-by: Rafael Brune <mail@rbrune.de>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-03-02 15:36:07 -06:00