Merge android-4.9.181 (6ccab84) into msm-4.9
* refs/heads/tmp-6ccab84:
  Linux 4.9.181
  ethtool: check the return value of get_regs_len
  ipv4: Define __ipv4_neigh_lookup_noref when CONFIG_INET is disabled
  fuse: Add FOPEN_STREAM to use stream_open()
  fs: stream_open - opener for stream-like files so that read and write can run simultaneously without deadlock
  TTY: serial_core, add ->install
  drm/i915: Fix I915_EXEC_RING_MASK
  drm/radeon: prefer lower reference dividers
  drm/gma500/cdv: Check vbt config bits when detecting lvds panels
  genwqe: Prevent an integer overflow in the ioctl
  Revert "MIPS: perf: ath79: Fix perfcount IRQ assignment"
  MIPS: pistachio: Build uImage.gz by default
  x86/power: Fix 'nosmt' vs hibernation triple fault during resume
  fuse: fallocate: fix return with locked inode
  parisc: Use implicit space register selection for loading the coherence index of I/O pdirs
  rcu: locking and unlocking need to always be at least barriers
  Revert "fib_rules: return 0 directly if an exactly same rule exists when NLM_F_EXCL not supplied"
  Revert "fib_rules: fix error in backport of e9919a24d302 ("fib_rules: return 0...")"
  ipv6: use READ_ONCE() for inet->hdrincl as in ipv4
  ipv6: fix EFAULT on sendto with icmpv6 and hdrincl
  pktgen: do not sleep with the thread lock held.
  net: rds: fix memory leak in rds_ib_flush_mr_pool
  net/mlx4_en: ethtool, Remove unsupported SFP EEPROM high pages query
  neighbor: Call __ipv4_neigh_lookup_noref in neigh_xmit
  ethtool: fix potential userspace buffer overflow
  media: uvcvideo: Fix uvc_alloc_entity() allocation alignment
  efi/libstub: Unify command line param parsing
  Revert "x86/build: Move _etext to actual end of .text"
  mm: make page ref count overflow check tighter and more explicit
  mm: prevent get_user_pages() from overflowing page refcount
  mm, gup: ensure real head page is ref-counted when using hugepages
  mm, gup: remove broken VM_BUG_ON_PAGE compound check for hugepages
  fs: prevent page refcount overflow in pipe_buf_get
  binder: replace "%p" with "%pK"
  binder: Replace "%p" with "%pK" for stable
  brcmfmac: add subtype check for event handling in data path
  brcmfmac: assure SSID length from firmware is limited
  brcmfmac: add length checks in scheduled scan result handler
  drm/vmwgfx: Don't send drm sysfs hotplug events on initial master set
  gcc-plugins: Fix build failures under Darwin host
  CIFS: cifs_read_allocate_pages: don't iterate through whole page array on ENOMEM
  staging: vc04_services: prevent integer overflow in create_pagelist()
  docs: Fix conf.py for Sphinx 2.0
  kernel/signal.c: trace_signal_deliver when signal_group_exit
  memcg: make it work on sparse non-0-node systems
  tty: max310x: Fix external crystal register setup
  tty: serial: msm_serial: Fix XON/XOFF
  drm/nouveau/i2c: Disable i2c bus access after ->fini()
  ALSA: hda/realtek - Set default power save node to 0
  powerpc/perf: Fix MMCRA corruption by bhrb_filter
  Btrfs: fix race updating log root item during fsync
  scsi: zfcp: fix to prevent port_remove with pure auto scan LUNs (only sdevs)
  scsi: zfcp: fix missing zfcp_port reference put on -EBUSY from port_remove
  media: smsusb: better handle optional alignment
  media: usb: siano: Fix false-positive "uninitialized variable" warning
  media: usb: siano: Fix general protection fault in smsusb
  USB: rio500: fix memory leak in close after disconnect
  USB: rio500: refuse more than one device at a time
  USB: Add LPM quirk for Surface Dock GigE adapter
  USB: sisusbvga: fix oops in error path of sisusb_probe
  USB: Fix slab-out-of-bounds write in usb_get_bos_descriptor
  usbip: usbip_host: fix stub_dev lock context imbalance regression
  usbip: usbip_host: fix BUG: sleeping function called from invalid context
  usb: xhci: avoid null pointer deref when bos field is NULL
  xhci: Convert xhci_handshake() to use readl_poll_timeout_atomic()
  xhci: Use %zu for printing size_t type
  xhci: update bounce buffer with correct sg num
  include/linux/bitops.h: sanitize rotate primitives
  sparc64: Fix regression in non-hypervisor TLB flush xcall
  tipc: fix modprobe tipc failed after switch order of device registration
  Revert "tipc: fix modprobe tipc failed after switch order of device registration"
  xen/pciback: Don't disable PCI_COMMAND on PCI device reset.
  crypto: vmx - ghash: do nosimd fallback manually
  net: mvpp2: fix bad MVPP2_TXQ_SCHED_TOKEN_CNTR_REG queue value
  net: mvneta: Fix err code path of probe
  net: dsa: mv88e6xxx: fix handling of upper half of STATS_TYPE_PORT
  ipv4/igmp: fix build error if !CONFIG_IP_MULTICAST
  ipv4/igmp: fix another memory leak in igmpv3_del_delrec()
  bnxt_en: Fix aggregation buffer leak under OOM condition.
  tipc: Avoid copying bytes beyond the supplied data
  usbnet: fix kernel crash after disconnect
  net: stmmac: fix reset gpio free missing
  net-gro: fix use-after-free read in napi_gro_frags()
  net: fec: fix the clk mismatch in failed_reset path
  llc: fix skb leak in llc_build_and_send_ui_pkt()
  ipv6: Consider sk_bound_dev_if when binding a raw socket to an address
  Revert "fib_rules: return 0 directly if an exactly same rule exists when NLM_F_EXCL not supplied"
  Revert "fib_rules: fix error in backport of e9919a24d302 ("fib_rules: return 0...")"
  Revert "x86/build: Move _etext to actual end of .text"

Change-Id: I0a273429558465e31453b6f46f4acc74c97d576a
Signed-off-by: jianzhou <jianzhou@codeaurora.org>
This commit is contained in: commit 555bd7ee98
@@ -37,7 +37,7 @@ from load_config import loadConfig
 extensions = ['kernel-doc', 'rstFlatTable', 'kernel_include', 'cdomain']

 # The name of the math extension changed on Sphinx 1.4
-if major == 1 and minor > 3:
+if (major == 1 and minor > 3) or (major > 1):
     extensions.append("sphinx.ext.imgmath")
 else:
     extensions.append("sphinx.ext.pngmath")

Makefile
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 180
+SUBLEVEL = 181
 EXTRAVERSION =
 NAME = Roaring Lionus

@@ -183,6 +183,12 @@ const char *get_system_type(void)
 	return ath79_sys_type;
 }

+int get_c0_perfcount_int(void)
+{
+	return ATH79_MISC_IRQ(5);
+}
+EXPORT_SYMBOL_GPL(get_c0_perfcount_int);
+
 unsigned int get_c0_compare_int(void)
 {
 	return CP0_LEGACY_COMPARE_IRQ;

@@ -6,3 +6,4 @@ cflags-$(CONFIG_MACH_PISTACHIO) += \
 		-I$(srctree)/arch/mips/include/asm/mach-pistachio
 load-$(CONFIG_MACH_PISTACHIO) += 0xffffffff80400000
 zload-$(CONFIG_MACH_PISTACHIO) += 0xffffffff81000000
+all-$(CONFIG_MACH_PISTACHIO) := uImage.gz

@@ -1800,6 +1800,7 @@ static int power_pmu_event_init(struct perf_event *event)
 	int n;
 	int err;
 	struct cpu_hw_events *cpuhw;
+	u64 bhrb_filter;

 	if (!ppmu)
 		return -ENOENT;

@@ -1896,13 +1897,14 @@ static int power_pmu_event_init(struct perf_event *event)
 	err = power_check_constraints(cpuhw, events, cflags, n + 1);

 	if (has_branch_stack(event)) {
-		cpuhw->bhrb_filter = ppmu->bhrb_filter_map(
+		bhrb_filter = ppmu->bhrb_filter_map(
 					event->attr.branch_sample_type);

-		if (cpuhw->bhrb_filter == -1) {
+		if (bhrb_filter == -1) {
 			put_cpu_var(cpu_hw_events);
 			return -EOPNOTSUPP;
 		}
+		cpuhw->bhrb_filter = bhrb_filter;
 	}

 	put_cpu_var(cpu_hw_events);

@@ -29,6 +29,7 @@ enum {
 #define	POWER8_MMCRA_IFM1		0x0000000040000000UL
 #define	POWER8_MMCRA_IFM2		0x0000000080000000UL
 #define	POWER8_MMCRA_IFM3		0x00000000C0000000UL
+#define	POWER8_MMCRA_BHRB_MASK		0x00000000C0000000UL

 /* Table of alternatives, sorted by column 0 */
 static const unsigned int event_alternatives[][MAX_ALT] = {

@@ -262,6 +263,8 @@ static u64 power8_bhrb_filter_map(u64 branch_sample_type)

 static void power8_config_bhrb(u64 pmu_bhrb_filter)
 {
+	pmu_bhrb_filter &= POWER8_MMCRA_BHRB_MASK;
+
 	/* Enable BHRB filter in PMU */
 	mtspr(SPRN_MMCRA, (mfspr(SPRN_MMCRA) | pmu_bhrb_filter));
 }

@@ -30,6 +30,7 @@ enum {
 #define	POWER9_MMCRA_IFM1		0x0000000040000000UL
 #define	POWER9_MMCRA_IFM2		0x0000000080000000UL
 #define	POWER9_MMCRA_IFM3		0x00000000C0000000UL
+#define	POWER9_MMCRA_BHRB_MASK		0x00000000C0000000UL

 GENERIC_EVENT_ATTR(cpu-cycles,			PM_CYC);
 GENERIC_EVENT_ATTR(stalled-cycles-frontend,	PM_ICT_NOSLOT_CYC);

@@ -177,6 +178,8 @@ static u64 power9_bhrb_filter_map(u64 branch_sample_type)

 static void power9_config_bhrb(u64 pmu_bhrb_filter)
 {
+	pmu_bhrb_filter &= POWER9_MMCRA_BHRB_MASK;
+
 	/* Enable BHRB filter in PMU */
 	mtspr(SPRN_MMCRA, (mfspr(SPRN_MMCRA) | pmu_bhrb_filter));
 }

@@ -586,7 +586,7 @@ xcall_flush_tlb_kernel_range:	/* 44 insns */
 	sub		%g7, %g1, %g3
 	srlx		%g3, 18, %g2
 	brnz,pn		%g2, 2f
-	 add		%g2, 1, %g2
+	 sethi		%hi(PAGE_SIZE), %g2
 	sub		%g3, %g2, %g3
 	or		%g1, 0x20, %g1		! Nucleus
 1:	stxa		%g0, [%g1 + %g3] ASI_DMMU_DEMAP

@@ -750,7 +750,7 @@ __cheetah_xcall_flush_tlb_kernel_range:	/* 44 insns */
 	sub		%g7, %g1, %g3
 	srlx		%g3, 18, %g2
 	brnz,pn		%g2, 2f
-	 add		%g2, 1, %g2
+	 sethi		%hi(PAGE_SIZE), %g2
 	sub		%g3, %g2, %g3
 	or		%g1, 0x20, %g1		! Nucleus
 1:	stxa		%g0, [%g1 + %g3] ASI_DMMU_DEMAP

@@ -111,10 +111,10 @@ SECTIONS
 		*(.text.__x86.indirect_thunk)
 		__indirect_thunk_end = .;
 #endif
-
-		/* End of text section */
-		_etext = .;
-	} :text = 0x9090
+	} :text = 0x9090
+
+	/* End of text section */
+	_etext = .;

 	NOTES :text :note

@@ -292,7 +292,17 @@ int hibernate_resume_nonboot_cpu_disable(void)
 	 * address in its instruction pointer may not be possible to resolve
 	 * any more at that point (the page tables used by it previously may
 	 * have been overwritten by hibernate image data).
+	 *
+	 * First, make sure that we wake up all the potentially disabled SMT
+	 * threads which have been initially brought up and then put into
+	 * mwait/cpuidle sleep.
+	 * Those will be put to proper (not interfering with hibernation
+	 * resume) sleep afterwards, and the resumed kernel will decide itself
+	 * what to do with them.
 	 */
+	ret = cpuhp_smt_enable();
+	if (ret)
+		return ret;
 	smp_ops.play_dead = resume_play_dead;
 	ret = disable_nonboot_cpus();
 	smp_ops.play_dead = play_dead;

@@ -11,6 +11,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/suspend.h>
+#include <linux/cpu.h>

 #include <asm/init.h>
 #include <asm/proto.h>

@@ -218,3 +219,35 @@ int arch_hibernation_header_restore(void *addr)
 	restore_cr3 = rdr->cr3;
 	return (rdr->magic == RESTORE_MAGIC) ? 0 : -EINVAL;
 }
+
+int arch_resume_nosmt(void)
+{
+	int ret = 0;
+	/*
+	 * We reached this while coming out of hibernation. This means
+	 * that SMT siblings are sleeping in hlt, as mwait is not safe
+	 * against control transition during resume (see comment in
+	 * hibernate_resume_nonboot_cpu_disable()).
+	 *
+	 * If the resumed kernel has SMT disabled, we have to take all the
+	 * SMT siblings out of hlt, and offline them again so that they
+	 * end up in mwait proper.
+	 *
+	 * Called with hotplug disabled.
+	 */
+	cpu_hotplug_enable();
+	if (cpu_smt_control == CPU_SMT_DISABLED ||
+			cpu_smt_control == CPU_SMT_FORCE_DISABLED) {
+		enum cpuhp_smt_control old = cpu_smt_control;
+
+		ret = cpuhp_smt_enable();
+		if (ret)
+			goto out;
+		ret = cpuhp_smt_disable(old);
+		if (ret)
+			goto out;
+	}
+out:
+	cpu_hotplug_disable();
+	return ret;
+}

@@ -1,22 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
 /**
  * GHASH routines supporting VMX instructions on the Power 8
  *
- * Copyright (C) 2015 International Business Machines Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 only.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ * Copyright (C) 2015, 2019 International Business Machines Inc.
  *
  * Author: Marcelo Henrique Cerri <mhcerri@br.ibm.com>
+ *
+ * Extended by Daniel Axtens <dja@axtens.net> to replace the fallback
+ * mechanism. The new approach is based on arm64 code, which is:
+ * Copyright (C) 2014 - 2018 Linaro Ltd. <ard.biesheuvel@linaro.org>
  */

 #include <linux/types.h>

@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16],
 		  const u8 *in, size_t len);

 struct p8_ghash_ctx {
+	/* key used by vector asm */
 	u128 htable[16];
-	struct crypto_shash *fallback;
+	/* key used by software fallback */
+	be128 key;
 };

 struct p8_ghash_desc_ctx {
 	u64 shash[2];
 	u8 buffer[GHASH_DIGEST_SIZE];
 	int bytes;
-	struct shash_desc fallback_desc;
 };

-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
-{
-	const char *alg = "ghash-generic";
-	struct crypto_shash *fallback;
-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(fallback)) {
-		printk(KERN_ERR
-		       "Failed to allocate transformation for '%s': %ld\n",
-		       alg, PTR_ERR(fallback));
-		return PTR_ERR(fallback);
-	}
-
-	crypto_shash_set_flags(fallback,
-			       crypto_shash_get_flags((struct crypto_shash
-						       *) tfm));
-
-	/* Check if the descsize defined in the algorithm is still enough. */
-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
-	    + crypto_shash_descsize(fallback)) {
-		printk(KERN_ERR
-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
-		       alg,
-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
-		       crypto_shash_descsize(fallback));
-		return -EINVAL;
-	}
-	ctx->fallback = fallback;
-
-	return 0;
-}
-
-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	if (ctx->fallback) {
-		crypto_free_shash(ctx->fallback);
-		ctx->fallback = NULL;
-	}
-}
-
 static int p8_ghash_init(struct shash_desc *desc)
 {
-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);

 	dctx->bytes = 0;
 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
-	dctx->fallback_desc.tfm = ctx->fallback;
-	dctx->fallback_desc.flags = desc->flags;
-	return crypto_shash_init(&dctx->fallback_desc);
+	return 0;
 }

 static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,

@@ -121,7 +67,51 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
 	disable_kernel_vsx();
 	pagefault_enable();
 	preempt_enable();
-	return crypto_shash_setkey(ctx->fallback, key, keylen);
+
+	memcpy(&ctx->key, key, GHASH_BLOCK_SIZE);
+
+	return 0;
+}
+
+static inline void __ghash_block(struct p8_ghash_ctx *ctx,
+				 struct p8_ghash_desc_ctx *dctx)
+{
+	if (!IN_INTERRUPT) {
+		preempt_disable();
+		pagefault_disable();
+		enable_kernel_vsx();
+		gcm_ghash_p8(dctx->shash, ctx->htable,
+				dctx->buffer, GHASH_DIGEST_SIZE);
+		disable_kernel_vsx();
+		pagefault_enable();
+		preempt_enable();
+	} else {
+		crypto_xor((u8 *)dctx->shash, dctx->buffer, GHASH_BLOCK_SIZE);
+		gf128mul_lle((be128 *)dctx->shash, &ctx->key);
+	}
+}
+
+static inline void __ghash_blocks(struct p8_ghash_ctx *ctx,
+				  struct p8_ghash_desc_ctx *dctx,
+				  const u8 *src, unsigned int srclen)
+{
+	if (!IN_INTERRUPT) {
+		preempt_disable();
+		pagefault_disable();
+		enable_kernel_vsx();
+		gcm_ghash_p8(dctx->shash, ctx->htable,
+				src, srclen);
+		disable_kernel_vsx();
+		pagefault_enable();
+		preempt_enable();
+	} else {
+		while (srclen >= GHASH_BLOCK_SIZE) {
+			crypto_xor((u8 *)dctx->shash, src, GHASH_BLOCK_SIZE);
+			gf128mul_lle((be128 *)dctx->shash, &ctx->key);
+			srclen -= GHASH_BLOCK_SIZE;
+			src += GHASH_BLOCK_SIZE;
+		}
+	}
 }

 static int p8_ghash_update(struct shash_desc *desc,

@@ -131,49 +121,33 @@ static int p8_ghash_update(struct shash_desc *desc,
 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);

-	if (IN_INTERRUPT) {
-		return crypto_shash_update(&dctx->fallback_desc, src,
-					   srclen);
-	} else {
-		if (dctx->bytes) {
-			if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
-				memcpy(dctx->buffer + dctx->bytes, src,
-					srclen);
-				dctx->bytes += srclen;
-				return 0;
-			}
+	if (dctx->bytes) {
+		if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
 			memcpy(dctx->buffer + dctx->bytes, src,
-				GHASH_DIGEST_SIZE - dctx->bytes);
-			preempt_disable();
-			pagefault_disable();
-			enable_kernel_vsx();
-			gcm_ghash_p8(dctx->shash, ctx->htable,
-					dctx->buffer, GHASH_DIGEST_SIZE);
-			disable_kernel_vsx();
-			pagefault_enable();
-			preempt_enable();
-			src += GHASH_DIGEST_SIZE - dctx->bytes;
-			srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
-			dctx->bytes = 0;
-		}
-		len = srclen & ~(GHASH_DIGEST_SIZE - 1);
-		if (len) {
-			preempt_disable();
-			pagefault_disable();
-			enable_kernel_vsx();
-			gcm_ghash_p8(dctx->shash, ctx->htable, src, len);
-			disable_kernel_vsx();
-			pagefault_enable();
-			preempt_enable();
-			src += len;
-			srclen -= len;
-		}
-		if (srclen) {
-			memcpy(dctx->buffer, src, srclen);
-			dctx->bytes = srclen;
-		}
-		return 0;
+				srclen);
+			dctx->bytes += srclen;
+			return 0;
+		}
+		memcpy(dctx->buffer + dctx->bytes, src,
+			GHASH_DIGEST_SIZE - dctx->bytes);
+
+		__ghash_block(ctx, dctx);
+
+		src += GHASH_DIGEST_SIZE - dctx->bytes;
+		srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
+		dctx->bytes = 0;
+	}
+	len = srclen & ~(GHASH_DIGEST_SIZE - 1);
+	if (len) {
+		__ghash_blocks(ctx, dctx, src, len);
+		src += len;
+		srclen -= len;
+	}
+	if (srclen) {
+		memcpy(dctx->buffer, src, srclen);
+		dctx->bytes = srclen;
 	}
+	return 0;
 }

 static int p8_ghash_final(struct shash_desc *desc, u8 *out)

@@ -182,25 +156,14 @@ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);

-	if (IN_INTERRUPT) {
-		return crypto_shash_final(&dctx->fallback_desc, out);
-	} else {
-		if (dctx->bytes) {
-			for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
-				dctx->buffer[i] = 0;
-			preempt_disable();
-			pagefault_disable();
-			enable_kernel_vsx();
-			gcm_ghash_p8(dctx->shash, ctx->htable,
-					dctx->buffer, GHASH_DIGEST_SIZE);
-			disable_kernel_vsx();
-			pagefault_enable();
-			preempt_enable();
-			dctx->bytes = 0;
-		}
-		memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
-		return 0;
+	if (dctx->bytes) {
+		for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
+			dctx->buffer[i] = 0;
+
+		__ghash_block(ctx, dctx);
+		dctx->bytes = 0;
 	}
+	memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
+	return 0;
 }

 struct shash_alg p8_ghash_alg = {

@@ -215,11 +178,9 @@ struct shash_alg p8_ghash_alg = {
 		 .cra_name = "ghash",
 		 .cra_driver_name = "p8_ghash",
 		 .cra_priority = 1000,
-		 .cra_flags = CRYPTO_ALG_TYPE_SHASH | CRYPTO_ALG_NEED_FALLBACK,
+		 .cra_flags = CRYPTO_ALG_TYPE_SHASH,
 		 .cra_blocksize = GHASH_BLOCK_SIZE,
 		 .cra_ctxsize = sizeof(struct p8_ghash_ctx),
 		 .cra_module = THIS_MODULE,
-		 .cra_init = p8_ghash_init_tfm,
-		 .cra_exit = p8_ghash_exit_tfm,
 	 },
 };

@@ -18,6 +18,7 @@

 #include "efistub.h"

+
 static int efi_get_secureboot(efi_system_table_t *sys_table_arg)
 {
 	static efi_char16_t const sb_var_name[] = {

@@ -32,6 +32,13 @@

 static unsigned long __chunk_size = EFI_READ_CHUNK_SIZE;

+static int __section(.data) __nokaslr;
+
+int __pure nokaslr(void)
+{
+	return __nokaslr;
+}
+
 /*
  * Allow the platform to override the allocation granularity: this allows
  * systems that have the capability to run with a larger page size to deal

@@ -41,13 +48,6 @@ static unsigned long __chunk_size = EFI_READ_CHUNK_SIZE;
 #define EFI_ALLOC_ALIGN		EFI_PAGE_SIZE
 #endif

-static int __section(.data) __nokaslr;
-
-int __pure nokaslr(void)
-{
-	return __nokaslr;
-}
-
 #define EFI_MMAP_NR_SLACK_SLOTS	8

 struct file_info {

@@ -366,14 +366,6 @@ efi_status_t efi_parse_options(char const *cmdline)
 	if (str == cmdline || (str && str > cmdline && *(str - 1) == ' '))
 		__nokaslr = 1;

-	/*
-	 * Currently, the only efi= option we look for is 'nochunk', which
-	 * is intended to work around known issues on certain x86 UEFI
-	 * versions. So ignore for now on other architectures.
-	 */
-	if (!IS_ENABLED(CONFIG_X86))
-		return EFI_SUCCESS;
-
 	/*
 	 * If no EFI parameters were specified on the cmdline we've got
 	 * nothing to do.

@@ -609,6 +609,9 @@ void cdv_intel_lvds_init(struct drm_device *dev,
 	int pipe;
 	u8 pin;

+	if (!dev_priv->lvds_enabled_in_vbt)
+		return;
+
 	pin = GMBUS_PORT_PANEL;
 	if (!lvds_is_present_in_vbt(dev, &pin)) {
 		DRM_DEBUG_KMS("LVDS is not present in VBT\n");

@@ -436,6 +436,9 @@ parse_driver_features(struct drm_psb_private *dev_priv,
 	if (driver->lvds_config == BDB_DRIVER_FEATURE_EDP)
 		dev_priv->edp.support = 1;

+	dev_priv->lvds_enabled_in_vbt = driver->lvds_config != 0;
+	DRM_DEBUG_KMS("LVDS VBT config bits: 0x%x\n", driver->lvds_config);
+
 	/* This bit means to use 96Mhz for DPLL_A or not */
 	if (driver->primary_lfp_id)
 		dev_priv->dplla_96mhz = true;

@@ -538,6 +538,7 @@ struct drm_psb_private {
 	int lvds_ssc_freq;
 	bool is_lvds_on;
 	bool is_mipi_on;
+	bool lvds_enabled_in_vbt;
 	u32 mipi_ctrl_display;

 	unsigned int core_freq;

@@ -37,6 +37,7 @@ struct nvkm_i2c_bus {
 	struct mutex mutex;
 	struct list_head head;
 	struct i2c_adapter i2c;
+	u8 enabled;
 };

 int nvkm_i2c_bus_acquire(struct nvkm_i2c_bus *);

@@ -56,6 +57,7 @@ struct nvkm_i2c_aux {
 	struct mutex mutex;
 	struct list_head head;
 	struct i2c_adapter i2c;
+	u8 enabled;

 	u32 intr;
 };

@@ -105,9 +105,15 @@ nvkm_i2c_aux_acquire(struct nvkm_i2c_aux *aux)
 {
 	struct nvkm_i2c_pad *pad = aux->pad;
 	int ret;
+
 	AUX_TRACE(aux, "acquire");
 	mutex_lock(&aux->mutex);
-	ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_AUX);
+
+	if (aux->enabled)
+		ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_AUX);
+	else
+		ret = -EIO;
+
 	if (ret)
 		mutex_unlock(&aux->mutex);
 	return ret;

@@ -141,6 +147,24 @@ nvkm_i2c_aux_del(struct nvkm_i2c_aux **paux)
 	}
 }

+void
+nvkm_i2c_aux_init(struct nvkm_i2c_aux *aux)
+{
+	AUX_TRACE(aux, "init");
+	mutex_lock(&aux->mutex);
+	aux->enabled = true;
+	mutex_unlock(&aux->mutex);
+}
+
+void
+nvkm_i2c_aux_fini(struct nvkm_i2c_aux *aux)
+{
+	AUX_TRACE(aux, "fini");
+	mutex_lock(&aux->mutex);
+	aux->enabled = false;
+	mutex_unlock(&aux->mutex);
+}
+
 int
 nvkm_i2c_aux_ctor(const struct nvkm_i2c_aux_func *func,
 		  struct nvkm_i2c_pad *pad, int id,

@@ -14,6 +14,8 @@ int nvkm_i2c_aux_ctor(const struct nvkm_i2c_aux_func *, struct nvkm_i2c_pad *,
 int nvkm_i2c_aux_new_(const struct nvkm_i2c_aux_func *, struct nvkm_i2c_pad *,
 		      int id, struct nvkm_i2c_aux **);
 void nvkm_i2c_aux_del(struct nvkm_i2c_aux **);
+void nvkm_i2c_aux_init(struct nvkm_i2c_aux *);
+void nvkm_i2c_aux_fini(struct nvkm_i2c_aux *);
 int nvkm_i2c_aux_xfer(struct nvkm_i2c_aux *, bool retry, u8 type,
 		      u32 addr, u8 *data, u8 size);

@@ -160,8 +160,18 @@ nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend)
 {
 	struct nvkm_i2c *i2c = nvkm_i2c(subdev);
 	struct nvkm_i2c_pad *pad;
+	struct nvkm_i2c_bus *bus;
+	struct nvkm_i2c_aux *aux;
 	u32 mask;

+	list_for_each_entry(aux, &i2c->aux, head) {
+		nvkm_i2c_aux_fini(aux);
+	}
+
+	list_for_each_entry(bus, &i2c->bus, head) {
+		nvkm_i2c_bus_fini(bus);
+	}
+
 	if ((mask = (1 << i2c->func->aux) - 1), i2c->func->aux_stat) {
 		i2c->func->aux_mask(i2c, NVKM_I2C_ANY, mask, 0);
 		i2c->func->aux_stat(i2c, &mask, &mask, &mask, &mask);

@@ -180,6 +190,7 @@ nvkm_i2c_init(struct nvkm_subdev *subdev)
 	struct nvkm_i2c *i2c = nvkm_i2c(subdev);
 	struct nvkm_i2c_bus *bus;
 	struct nvkm_i2c_pad *pad;
+	struct nvkm_i2c_aux *aux;

 	list_for_each_entry(pad, &i2c->pad, head) {
 		nvkm_i2c_pad_init(pad);

@@ -189,6 +200,10 @@ nvkm_i2c_init(struct nvkm_subdev *subdev)
 		nvkm_i2c_bus_init(bus);
 	}

+	list_for_each_entry(aux, &i2c->aux, head) {
+		nvkm_i2c_aux_init(aux);
+	}
+
 	return 0;
 }

@@ -110,6 +110,19 @@ nvkm_i2c_bus_init(struct nvkm_i2c_bus *bus)
 	BUS_TRACE(bus, "init");
 	if (bus->func->init)
 		bus->func->init(bus);
+
+	mutex_lock(&bus->mutex);
+	bus->enabled = true;
+	mutex_unlock(&bus->mutex);
+}
+
+void
+nvkm_i2c_bus_fini(struct nvkm_i2c_bus *bus)
+{
+	BUS_TRACE(bus, "fini");
+	mutex_lock(&bus->mutex);
+	bus->enabled = false;
+	mutex_unlock(&bus->mutex);
 }

 void

@@ -126,9 +139,15 @@ nvkm_i2c_bus_acquire(struct nvkm_i2c_bus *bus)
 {
 	struct nvkm_i2c_pad *pad = bus->pad;
 	int ret;
+
 	BUS_TRACE(bus, "acquire");
 	mutex_lock(&bus->mutex);
-	ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_I2C);
+
+	if (bus->enabled)
+		ret = nvkm_i2c_pad_acquire(pad, NVKM_I2C_PAD_I2C);
+	else
+		ret = -EIO;
+
 	if (ret)
 		mutex_unlock(&bus->mutex);
 	return ret;

@@ -17,6 +17,7 @@ int nvkm_i2c_bus_new_(const struct nvkm_i2c_bus_func *, struct nvkm_i2c_pad *,
 		      int id, struct nvkm_i2c_bus **);
 void nvkm_i2c_bus_del(struct nvkm_i2c_bus **);
 void nvkm_i2c_bus_init(struct nvkm_i2c_bus *);
+void nvkm_i2c_bus_fini(struct nvkm_i2c_bus *);

 int nvkm_i2c_bit_xfer(struct nvkm_i2c_bus *, struct i2c_msg *, int);

@@ -935,12 +935,12 @@ static void avivo_get_fb_ref_div(unsigned nom, unsigned den, unsigned post_div,
 	ref_div_max = max(min(100 / post_div, ref_div_max), 1u);

 	/* get matching reference and feedback divider */
-	*ref_div = min(max(DIV_ROUND_CLOSEST(den, post_div), 1u), ref_div_max);
+	*ref_div = min(max(den/post_div, 1u), ref_div_max);
 	*fb_div = DIV_ROUND_CLOSEST(nom * *ref_div * post_div, den);

 	/* limit fb divider to its maximum */
 	if (*fb_div > fb_div_max) {
-		*ref_div = DIV_ROUND_CLOSEST(*ref_div * fb_div_max, *fb_div);
+		*ref_div = (*ref_div * fb_div_max)/(*fb_div);
 		*fb_div = fb_div_max;
 	}
 }

@@ -1245,7 +1245,13 @@ static int vmw_master_set(struct drm_device *dev,
 	}

 	dev_priv->active_master = vmaster;
-	drm_sysfs_hotplug_event(dev);
+
+	/*
+	 * Inform a new master that the layout may have changed while
+	 * it was gone.
+	 */
+	if (!from_open)
+		drm_sysfs_hotplug_event(dev);

 	return 0;
 }

@@ -22,15 +22,6 @@
 #define AR71XX_RESET_REG_MISC_INT_ENABLE	4

 #define ATH79_MISC_IRQ_COUNT			32
-#define ATH79_MISC_PERF_IRQ			5
-
-static int ath79_perfcount_irq;
-
-int get_c0_perfcount_int(void)
-{
-	return ath79_perfcount_irq;
-}
-EXPORT_SYMBOL_GPL(get_c0_perfcount_int);

 static void ath79_misc_irq_handler(struct irq_desc *desc)
 {

@@ -122,8 +113,6 @@ static void __init ath79_misc_intc_domain_init(
 {
 	void __iomem *base = domain->host_data;

-	ath79_perfcount_irq = irq_create_mapping(domain, ATH79_MISC_PERF_IRQ);
-
 	/* Disable and clear all interrupts */
 	__raw_writel(0, base + AR71XX_RESET_REG_MISC_INT_ENABLE);
 	__raw_writel(0, base + AR71XX_RESET_REG_MISC_INT_STATUS);

@@ -402,6 +402,7 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
 	struct smsusb_device_t *dev;
 	void *mdev;
 	int i, rc;
+	int align = 0;

 	/* create device object */
 	dev = kzalloc(sizeof(struct smsusb_device_t), GFP_KERNEL);

@@ -413,6 +414,24 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
 	dev->udev = interface_to_usbdev(intf);
 	dev->state = SMSUSB_DISCONNECTED;

+	for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) {
+		struct usb_endpoint_descriptor *desc =
+				&intf->cur_altsetting->endpoint[i].desc;
+
+		if (desc->bEndpointAddress & USB_DIR_IN) {
+			dev->in_ep = desc->bEndpointAddress;
+			align = usb_endpoint_maxp(desc) - sizeof(struct sms_msg_hdr);
+		} else {
+			dev->out_ep = desc->bEndpointAddress;
+		}
+	}
+
+	pr_debug("in_ep = %02x, out_ep = %02x\n", dev->in_ep, dev->out_ep);
+	if (!dev->in_ep || !dev->out_ep || align < 0) {  /* Missing endpoints? */
+		smsusb_term_device(intf);
+		return -ENODEV;
+	}
+
 	params.device_type = sms_get_board(board_id)->type;

 	switch (params.device_type) {

@@ -427,24 +446,12 @@ static int smsusb_init_device(struct usb_interface *intf, int board_id)
 		/* fall-thru */
 	default:
 		dev->buffer_size = USB2_BUFFER_SIZE;
-		dev->response_alignment =
-		    le16_to_cpu(dev->udev->ep_in[1]->desc.wMaxPacketSize) -
-		    sizeof(struct sms_msg_hdr);
+		dev->response_alignment = align;

 		params.flags |= SMS_DEVICE_FAMILY2;
 		break;
 	}

-	for (i = 0; i < intf->cur_altsetting->desc.bNumEndpoints; i++) {
-		if (intf->cur_altsetting->endpoint[i].desc. bEndpointAddress & USB_DIR_IN)
-			dev->in_ep = intf->cur_altsetting->endpoint[i].desc.bEndpointAddress;
-		else
-			dev->out_ep = intf->cur_altsetting->endpoint[i].desc.bEndpointAddress;
-	}
-
-	pr_debug("in_ep = %02x, out_ep = %02x\n",
-		 dev->in_ep, dev->out_ep);
-
 	params.device = &dev->udev->dev;
 	params.buffer_size = dev->buffer_size;
 	params.num_buffers = MAX_BUFFERS;

@@ -868,7 +868,7 @@ static struct uvc_entity *uvc_alloc_entity(u16 type, u8 id,
	unsigned int size;
	unsigned int i;
 
-	extra_size = ALIGN(extra_size, sizeof(*entity->pads));
+	extra_size = roundup(extra_size, sizeof(*entity->pads));
	num_inputs = (type & UVC_TERM_OUTPUT) ? num_pads : num_pads - 1;
	size = sizeof(*entity) + extra_size + sizeof(*entity->pads) * num_pads
	     + num_inputs;

@@ -782,6 +782,8 @@ static int genwqe_pin_mem(struct genwqe_file *cfile, struct genwqe_mem *m)
 
	if ((m->addr == 0x0) || (m->size == 0))
		return -EINVAL;
+	if (m->size > ULONG_MAX - PAGE_SIZE - (m->addr & ~PAGE_MASK))
+		return -EINVAL;
 
	map_addr = (m->addr & PAGE_MASK);
	map_size = round_up(m->size + (m->addr & ~PAGE_MASK), PAGE_SIZE);
 
@@ -582,6 +582,10 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
	/* determine space needed for page_list. */
	data = (unsigned long)uaddr;
	offs = offset_in_page(data);
+	if (size > ULONG_MAX - PAGE_SIZE - offs) {
+		m->size = 0; /* mark unused and not added */
+		return -EINVAL;
+	}
	m->nr_pages = DIV_ROUND_UP(offs + size, PAGE_SIZE);
 
	m->page_list = kcalloc(m->nr_pages,

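Both genwqe hunks above apply the same guard: `offs + size` is later rounded up to whole pages, so a huge `size` can wrap an `unsigned long` and yield a far-too-small page count. A standalone userspace sketch of the pattern (hypothetical helper name, 4 KiB pages assumed; the kernel code returns -EINVAL where this returns -1):

```c
#include <limits.h>

#define PAGE_SIZE 4096UL

/* Illustrative model of the overflow guard added in genwqe_user_vmap():
 * reject any size where offs + size + (PAGE_SIZE - 1) could wrap an
 * unsigned long before the page count is computed. */
static int page_count_checked(unsigned long offs, unsigned long size,
			      unsigned long *nr_pages)
{
	if (size > ULONG_MAX - PAGE_SIZE - offs)
		return -1;	/* kernel code returns -EINVAL here */
	*nr_pages = (offs + size + PAGE_SIZE - 1) / PAGE_SIZE;
	return 0;
}
```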
@@ -789,7 +789,7 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct mv88e6xxx_chip *chip,
			err = mv88e6xxx_port_read(chip, port, s->reg + 1, &reg);
			if (err)
				return UINT64_MAX;
-			high = reg;
+			low |= ((u32)reg) << 16;
		}
		break;
	case BANK0:

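The one-line mv88e6xxx change above matters because, for 32-bit port counters, the second 16-bit register read carries bits 16..31 of the same value; storing it in the separate `high` variable left those bits out of the reported statistic. A minimal model of the corrected assembly (illustrative function, not the driver API):

```c
#include <stdint.h>

/* Combine two 16-bit counter register reads into one statistic, as the
 * fixed driver code does with low |= ((u32)reg) << 16. */
static uint64_t stat_from_halves(uint16_t lo_reg, uint16_t hi_reg)
{
	uint32_t low = lo_reg;

	low |= ((uint32_t)hi_reg) << 16;
	return low;
}
```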
@@ -1425,6 +1425,8 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_napi *bnapi, u32 *raw_cons,
		skb = bnxt_copy_skb(bnapi, data, len, dma_addr);
		bnxt_reuse_rx_data(rxr, cons, data);
		if (!skb) {
+			if (agg_bufs)
+				bnxt_reuse_rx_agg_bufs(bnapi, cp_cons, agg_bufs);
			rc = -ENOMEM;
			goto next_rx;
		}

@@ -3508,7 +3508,7 @@ failed_init:
	if (fep->reg_phy)
		regulator_disable(fep->reg_phy);
 failed_reset:
-	pm_runtime_put(&pdev->dev);
+	pm_runtime_put_noidle(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
 failed_regulator:
 failed_clk_ipg:

@@ -4162,7 +4162,7 @@ static int mvneta_probe(struct platform_device *pdev)
	err = register_netdev(dev);
	if (err < 0) {
		dev_err(&pdev->dev, "failed to register\n");
-		goto err_free_stats;
+		goto err_netdev;
	}
 
	netdev_info(dev, "Using %s mac address %pM\n", mac_from,
@@ -4181,13 +4181,11 @@ static int mvneta_probe(struct platform_device *pdev)
	return 0;
 
 err_netdev:
-	unregister_netdev(dev);
-
	if (pp->bm_priv) {
		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
				       1 << pp->id);
	}
 err_free_stats:
	free_percpu(pp->stats);
 err_free_ports:
	free_percpu(pp->ports);

@@ -3938,7 +3938,7 @@ static inline void mvpp2_gmac_max_rx_size_set(struct mvpp2_port *port)
 /* Set defaults to the MVPP2 port */
 static void mvpp2_defaults_set(struct mvpp2_port *port)
 {
-	int tx_port_num, val, queue, ptxq, lrxq;
+	int tx_port_num, val, queue, lrxq;
 
	/* Configure port to loopback if needed */
	if (port->flags & MVPP2_F_LOOPBACK)
@@ -3958,11 +3958,9 @@ static void mvpp2_defaults_set(struct mvpp2_port *port)
	mvpp2_write(port->priv, MVPP2_TXP_SCHED_CMD_1_REG, 0);
 
	/* Close bandwidth for all queues */
-	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++) {
-		ptxq = mvpp2_txq_phys(port->id, queue);
+	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++)
		mvpp2_write(port->priv,
-			    MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(ptxq), 0);
-	}
+			    MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(queue), 0);
 
	/* Set refill period to 1 usec, refill tokens
	 * and bucket size to maximum
@@ -4709,7 +4707,7 @@ static void mvpp2_txq_deinit(struct mvpp2_port *port,
	txq->descs_phys = 0;
 
	/* Set minimum bandwidth for disabled TXQs */
-	mvpp2_write(port->priv, MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(txq->id), 0);
+	mvpp2_write(port->priv, MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(txq->log_id), 0);
 
	/* Set Tx descriptors queue starting address and size */
	mvpp2_write(port->priv, MVPP2_TXQ_NUM_REG, txq->id);

@@ -1930,6 +1930,8 @@ static int mlx4_en_set_tunable(struct net_device *dev,
	return ret;
 }
 
+#define MLX4_EEPROM_PAGE_LEN 256
+
 static int mlx4_en_get_module_info(struct net_device *dev,
				   struct ethtool_modinfo *modinfo)
 {
@@ -1964,7 +1966,7 @@ static int mlx4_en_get_module_info(struct net_device *dev,
		break;
	case MLX4_MODULE_ID_SFP:
		modinfo->type = ETH_MODULE_SFF_8472;
-		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+		modinfo->eeprom_len = MLX4_EEPROM_PAGE_LEN;
		break;
	default:
		return -ENOSYS;

@@ -1960,11 +1960,6 @@ int mlx4_get_module_info(struct mlx4_dev *dev, u8 port,
		size -= offset + size - I2C_PAGE_SIZE;
 
	i2c_addr = I2C_ADDR_LOW;
-	if (offset >= I2C_PAGE_SIZE) {
-		/* Reset offset to high page */
-		i2c_addr = I2C_ADDR_HIGH;
-		offset -= I2C_PAGE_SIZE;
-	}
 
	cable_info = (struct mlx4_cable_info *)inmad->data;
	cable_info->dev_mem_address = cpu_to_be16(offset);

@@ -240,7 +240,8 @@ int stmmac_mdio_reset(struct mii_bus *bus)
		of_property_read_u32_array(np,
			"snps,reset-delays-us", data->delays, 3);
 
-		if (gpio_request(data->reset_gpio, "mdio-reset"))
+		if (devm_gpio_request(priv->device, data->reset_gpio,
+				      "mdio-reset"))
			return 0;
	}
 

@@ -508,6 +508,7 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
 
	if (netif_running (dev->net) &&
	    netif_device_present (dev->net) &&
+	    test_bit(EVENT_DEV_OPEN, &dev->flags) &&
	    !test_bit (EVENT_RX_HALT, &dev->flags) &&
	    !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) {
		switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) {
@@ -1394,6 +1395,11 @@ netdev_tx_t usbnet_start_xmit (struct sk_buff *skb,
		spin_unlock_irqrestore(&dev->txq.lock, flags);
		goto drop;
	}
+	if (netif_queue_stopped(net)) {
+		usb_autopm_put_interface_async(dev->intf);
+		spin_unlock_irqrestore(&dev->txq.lock, flags);
+		goto drop;
+	}
 
 #ifdef CONFIG_PM
	/* if this triggers the device is still a sleep */

@@ -3222,6 +3222,7 @@ brcmf_notify_sched_scan_results(struct brcmf_if *ifp,
	struct brcmf_pno_scanresults_le *pfn_result;
	u32 result_count;
	u32 status;
+	u32 datalen;
 
	brcmf_dbg(SCAN, "Enter\n");
 
@@ -3247,6 +3248,14 @@ brcmf_notify_sched_scan_results(struct brcmf_if *ifp,
	if (result_count > 0) {
		int i;
 
+		data += sizeof(struct brcmf_pno_scanresults_le);
+		netinfo_start = (struct brcmf_pno_net_info_le *)data;
+		datalen = e->datalen - ((void *)netinfo_start - (void *)pfn_result);
+		if (datalen < result_count * sizeof(*netinfo)) {
+			brcmf_err("insufficient event data\n");
+			goto out_err;
+		}
+
		request = kzalloc(sizeof(*request), GFP_KERNEL);
		ssid = kcalloc(result_count, sizeof(*ssid), GFP_KERNEL);
		channel = kcalloc(result_count, sizeof(*channel), GFP_KERNEL);
@@ -3256,9 +3265,6 @@ brcmf_notify_sched_scan_results(struct brcmf_if *ifp,
		}
 
		request->wiphy = wiphy;
-		data += sizeof(struct brcmf_pno_scanresults_le);
-		netinfo_start = (struct brcmf_pno_net_info_le *)data;
 
		for (i = 0; i < result_count; i++) {
			netinfo = &netinfo_start[i];
			if (!netinfo) {
@@ -3268,6 +3274,8 @@ brcmf_notify_sched_scan_results(struct brcmf_if *ifp,
				goto out_err;
			}
 
+			if (netinfo->SSID_len > IEEE80211_MAX_SSID_LEN)
+				netinfo->SSID_len = IEEE80211_MAX_SSID_LEN;
			brcmf_dbg(SCAN, "SSID:%s Channel:%d\n",
				  netinfo->SSID, netinfo->channel);
			memcpy(ssid[i].ssid, netinfo->SSID, netinfo->SSID_len);
@@ -3573,6 +3581,8 @@ brcmf_wowl_nd_results(struct brcmf_if *ifp, const struct brcmf_event_msg *e,
 
	data += sizeof(struct brcmf_pno_scanresults_le);
	netinfo = (struct brcmf_pno_net_info_le *)data;
+	if (netinfo->SSID_len > IEEE80211_MAX_SSID_LEN)
+		netinfo->SSID_len = IEEE80211_MAX_SSID_LEN;
	memcpy(cfg->wowl.nd->ssid.ssid, netinfo->SSID, netinfo->SSID_len);
	cfg->wowl.nd->ssid.ssid_len = netinfo->SSID_len;
	cfg->wowl.nd->n_channels = 1;

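The clamps added above bound a firmware-supplied `SSID_len` before it is used as a `memcpy()` length into a fixed 32-byte buffer. A minimal model of the pattern (hypothetical helper; 32 stands in for IEEE80211_MAX_SSID_LEN):

```c
#define MAX_SSID_LEN 32	/* stands in for IEEE80211_MAX_SSID_LEN */

/* Clamp an untrusted length from firmware before it drives a memcpy(). */
static unsigned int clamp_ssid_len(unsigned int fw_len)
{
	return fw_len > MAX_SSID_LEN ? MAX_SSID_LEN : fw_len;
}
```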
@@ -339,7 +339,8 @@ void brcmf_rx_frame(struct device *dev, struct sk_buff *skb, bool handle_event)
	} else {
		/* Process special event packets */
		if (handle_event)
-			brcmf_fweh_process_skb(ifp->drvr, skb);
+			brcmf_fweh_process_skb(ifp->drvr, skb,
+					       BCMILCP_SUBTYPE_VENDOR_LONG);
 
		brcmf_netif_rx(ifp, skb);
	}
@@ -356,7 +357,7 @@ void brcmf_rx_event(struct device *dev, struct sk_buff *skb)
	if (brcmf_rx_hdrpull(drvr, skb, &ifp))
		return;
 
-	brcmf_fweh_process_skb(ifp->drvr, skb);
+	brcmf_fweh_process_skb(ifp->drvr, skb, 0);
	brcmu_pkt_buf_free_skb(skb);
 }
 

@@ -181,7 +181,7 @@ enum brcmf_fweh_event_code {
  */
 #define BRCM_OUI			"\x00\x10\x18"
 #define BCMILCP_BCM_SUBTYPE_EVENT	1
-
+#define BCMILCP_SUBTYPE_VENDOR_LONG	32769
 
 /**
  * struct brcm_ethhdr - broadcom specific ether header.
@@ -302,10 +302,10 @@ void brcmf_fweh_process_event(struct brcmf_pub *drvr,
 void brcmf_fweh_p2pdev_setup(struct brcmf_if *ifp, bool ongoing);
 
 static inline void brcmf_fweh_process_skb(struct brcmf_pub *drvr,
-					  struct sk_buff *skb)
+					  struct sk_buff *skb, u16 stype)
 {
	struct brcmf_event *event_packet;
-	u16 usr_stype;
+	u16 subtype, usr_stype;
 
	/* only process events when protocol matches */
	if (skb->protocol != cpu_to_be16(ETH_P_LINK_CTL))
@@ -314,8 +314,16 @@ static inline void brcmf_fweh_process_skb(struct brcmf_pub *drvr,
	if ((skb->len + ETH_HLEN) < sizeof(*event_packet))
		return;
 
-	/* check for BRCM oui match */
	event_packet = (struct brcmf_event *)skb_mac_header(skb);
+
+	/* check subtype if needed */
+	if (unlikely(stype)) {
+		subtype = get_unaligned_be16(&event_packet->hdr.subtype);
+		if (subtype != stype)
+			return;
+	}
+
+	/* check for BRCM oui match */
	if (memcmp(BRCM_OUI, &event_packet->hdr.oui[0],
		   sizeof(event_packet->hdr.oui)))
		return;

@@ -1114,7 +1114,7 @@ static void brcmf_msgbuf_process_event(struct brcmf_msgbuf *msgbuf, void *buf)
 
	skb->protocol = eth_type_trans(skb, ifp->ndev);
 
-	brcmf_fweh_process_skb(ifp->drvr, skb);
+	brcmf_fweh_process_skb(ifp->drvr, skb, 0);
 
 exit:
	brcmu_pkt_buf_free_skb(skb);

@@ -563,8 +563,6 @@ ccio_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
	/* We currently only support kernel addresses */
	BUG_ON(sid != KERNEL_SPACE);
 
-	mtsp(sid,1);
-
	/*
	** WORD 1 - low order word
	** "hints" parm includes the VALID bit!
@@ -595,7 +593,7 @@ ccio_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
	** Grab virtual index [0:11]
	** Deposit virt_idx bits into I/O PDIR word
	*/
-	asm volatile ("lci %%r0(%%sr1, %1), %0" : "=r" (ci) : "r" (vba));
+	asm volatile ("lci %%r0(%1), %0" : "=r" (ci) : "r" (vba));
	asm volatile ("extru %1,19,12,%0" : "+r" (ci) : "r" (ci));
	asm volatile ("depw  %1,15,12,%0" : "+r" (pa) : "r" (ci));
 

@@ -573,8 +573,7 @@ sba_io_pdir_entry(u64 *pdir_ptr, space_t sid, unsigned long vba,
	pa = virt_to_phys(vba);
	pa &= IOVP_MASK;
 
-	mtsp(sid,1);
-	asm("lci 0(%%sr1, %1), %0" : "=r" (ci) : "r" (vba));
+	asm("lci 0(%1), %0" : "=r" (ci) : "r" (vba));
	pa |= (ci >> PAGE_SHIFT) & 0xff;  /* move CI (8 bits) into lowest byte */
 
	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */

@@ -161,6 +161,7 @@ extern const struct attribute_group *zfcp_port_attr_groups[];
 extern struct mutex zfcp_sysfs_port_units_mutex;
 extern struct device_attribute *zfcp_sysfs_sdev_attrs[];
 extern struct device_attribute *zfcp_sysfs_shost_attrs[];
+bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port);
 
 /* zfcp_unit.c */
 extern int zfcp_unit_add(struct zfcp_port *, u64);

@@ -124,6 +124,15 @@ static int zfcp_scsi_slave_alloc(struct scsi_device *sdev)
 
	zfcp_sdev->erp_action.port = port;
 
+	mutex_lock(&zfcp_sysfs_port_units_mutex);
+	if (zfcp_sysfs_port_is_removing(port)) {
+		/* port is already gone */
+		mutex_unlock(&zfcp_sysfs_port_units_mutex);
+		put_device(&port->dev); /* undo zfcp_get_port_by_wwpn() */
+		return -ENXIO;
+	}
+	mutex_unlock(&zfcp_sysfs_port_units_mutex);
+
	unit = zfcp_unit_find(port, zfcp_scsi_dev_lun(sdev));
	if (unit)
		put_device(&unit->dev);

@@ -237,6 +237,53 @@ static ZFCP_DEV_ATTR(adapter, port_rescan, S_IWUSR, NULL,
 
 DEFINE_MUTEX(zfcp_sysfs_port_units_mutex);
 
+static void zfcp_sysfs_port_set_removing(struct zfcp_port *const port)
+{
+	lockdep_assert_held(&zfcp_sysfs_port_units_mutex);
+	atomic_set(&port->units, -1);
+}
+
+bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port)
+{
+	lockdep_assert_held(&zfcp_sysfs_port_units_mutex);
+	return atomic_read(&port->units) == -1;
+}
+
+static bool zfcp_sysfs_port_in_use(struct zfcp_port *const port)
+{
+	struct zfcp_adapter *const adapter = port->adapter;
+	unsigned long flags;
+	struct scsi_device *sdev;
+	bool in_use = true;
+
+	mutex_lock(&zfcp_sysfs_port_units_mutex);
+	if (atomic_read(&port->units) > 0)
+		goto unlock_port_units_mutex; /* zfcp_unit(s) under port */
+
+	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
+	__shost_for_each_device(sdev, adapter->scsi_host) {
+		const struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
+
+		if (sdev->sdev_state == SDEV_DEL ||
+		    sdev->sdev_state == SDEV_CANCEL)
+			continue;
+		if (zsdev->port != port)
+			continue;
+		/* alive scsi_device under port of interest */
+		goto unlock_host_lock;
+	}
+
+	/* port is about to be removed, so no more unit_add or slave_alloc */
+	zfcp_sysfs_port_set_removing(port);
+	in_use = false;
+
+unlock_host_lock:
+	spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags);
+unlock_port_units_mutex:
+	mutex_unlock(&zfcp_sysfs_port_units_mutex);
+	return in_use;
+}
+
 static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
					    struct device_attribute *attr,
					    const char *buf, size_t count)
@@ -259,15 +306,11 @@ static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
	else
		retval = 0;
 
-	mutex_lock(&zfcp_sysfs_port_units_mutex);
-	if (atomic_read(&port->units) > 0) {
+	if (zfcp_sysfs_port_in_use(port)) {
		retval = -EBUSY;
-		mutex_unlock(&zfcp_sysfs_port_units_mutex);
+		put_device(&port->dev); /* undo zfcp_get_port_by_wwpn() */
		goto out;
	}
-	/* port is about to be removed, so no more unit_add */
-	atomic_set(&port->units, -1);
-	mutex_unlock(&zfcp_sysfs_port_units_mutex);
 
	write_lock_irq(&adapter->port_list_lock);
	list_del(&port->list);

@@ -123,7 +123,7 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
	int retval = 0;
 
	mutex_lock(&zfcp_sysfs_port_units_mutex);
-	if (atomic_read(&port->units) == -1) {
+	if (zfcp_sysfs_port_is_removing(port)) {
		/* port is already gone */
		retval = -ENODEV;
		goto out;
@@ -167,8 +167,14 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
	write_lock_irq(&port->unit_list_lock);
	list_add_tail(&unit->list, &port->unit_list);
	write_unlock_irq(&port->unit_list_lock);
+	/*
+	 * lock order: shost->scan_mutex before zfcp_sysfs_port_units_mutex
+	 * due to zfcp_unit_scsi_scan() => zfcp_scsi_slave_alloc()
+	 */
+	mutex_unlock(&zfcp_sysfs_port_units_mutex);
 
	zfcp_unit_scsi_scan(unit);
+	return retval;
 
 out:
	mutex_unlock(&zfcp_sysfs_port_units_mutex);

@@ -381,9 +381,18 @@ create_pagelist(char __user *buf, size_t count, unsigned short type,
	int run, addridx, actual_pages;
	unsigned long *need_release;
 
+	if (count >= INT_MAX - PAGE_SIZE)
+		return NULL;
+
	offset = (unsigned int)buf & (PAGE_SIZE - 1);
	num_pages = (count + offset + PAGE_SIZE - 1) / PAGE_SIZE;
 
+	if (num_pages > (SIZE_MAX - sizeof(PAGELIST_T) -
+			 sizeof(struct vchiq_pagelist_info)) /
+			(sizeof(u32) + sizeof(pages[0]) +
+			 sizeof(struct scatterlist)))
+		return NULL;
+
	*ppagelist = NULL;
 
	/* Allocate enough storage to hold the page pointers and the page

@@ -579,7 +579,7 @@ static int max310x_set_ref_clk(struct max310x_port *s, unsigned long freq,
	}
 
	/* Configure clock source */
-	clksrc = xtal ? MAX310X_CLKSRC_CRYST_BIT : MAX310X_CLKSRC_EXTCLK_BIT;
+	clksrc = MAX310X_CLKSRC_EXTCLK_BIT | (xtal ? MAX310X_CLKSRC_CRYST_BIT : 0);
 
	/* Configure PLL */
	if (pllcfg) {

@@ -890,6 +890,7 @@ static void msm_handle_tx(struct uart_port *port)
	struct circ_buf *xmit = &msm_port->uart.state->xmit;
	struct msm_dma *dma = &msm_port->tx_dma;
	unsigned int pio_count, dma_count, dma_min;
+	char buf[4] = { 0 };
	void __iomem *tf;
	int err = 0;
 
@@ -899,10 +900,12 @@ static void msm_handle_tx(struct uart_port *port)
		else
			tf = port->membase + UART_TF;
 
+		buf[0] = port->x_char;
+
		if (msm_port->is_uartdm)
			msm_reset_dm_count(port, 1);
 
-		iowrite8_rep(tf, &port->x_char, 1);
+		iowrite32_rep(tf, buf, 1);
		port->icount.tx++;
		port->x_char = 0;
		return;

@@ -144,9 +144,6 @@ static void uart_start(struct tty_struct *tty)
	struct uart_port *port;
	unsigned long flags;
 
-	if (!state)
-		return;
-
	port = uart_port_lock(state, flags);
	__uart_start(tty);
	uart_port_unlock(port, flags);
@@ -1717,11 +1714,8 @@ static void uart_dtr_rts(struct tty_port *port, int onoff)
  */
 static int uart_open(struct tty_struct *tty, struct file *filp)
 {
-	struct uart_driver *drv = tty->driver->driver_state;
-	int retval, line = tty->index;
-	struct uart_state *state = drv->state + line;
-
-	tty->driver_data = state;
+	struct uart_state *state = tty->driver_data;
+	int retval;
 
	retval = tty_port_open(&state->port, tty, filp);
	if (retval > 0)
@@ -2412,9 +2406,6 @@ static void uart_poll_put_char(struct tty_driver *driver, int line, char ch)
	struct uart_state *state = drv->state + line;
	struct uart_port *port;
 
-	if (!state)
-		return;
-
	port = uart_port_ref(state);
	if (!port)
		return;
@@ -2426,7 +2417,18 @@ static void uart_poll_put_char(struct tty_driver *driver, int line, char ch)
 }
 #endif
 
+static int uart_install(struct tty_driver *driver, struct tty_struct *tty)
+{
+	struct uart_driver *drv = driver->driver_state;
+	struct uart_state *state = drv->state + tty->index;
+
+	tty->driver_data = state;
+
+	return tty_standard_install(driver, tty);
+}
+
 static const struct tty_operations uart_ops = {
+	.install	= uart_install,
	.open		= uart_open,
	.close		= uart_close,
	.write		= uart_write,

@@ -931,8 +931,8 @@ int usb_get_bos_descriptor(struct usb_device *dev)
 
	/* Get BOS descriptor */
	ret = usb_get_descriptor(dev, USB_DT_BOS, 0, bos, USB_DT_BOS_SIZE);
-	if (ret < USB_DT_BOS_SIZE) {
-		dev_err(ddev, "unable to get BOS descriptor\n");
+	if (ret < USB_DT_BOS_SIZE || bos->bLength < USB_DT_BOS_SIZE) {
+		dev_err(ddev, "unable to get BOS descriptor or descriptor too short\n");
		if (ret >= 0)
			ret = -ENOMSG;
		kfree(bos);

@@ -64,6 +64,9 @@ static const struct usb_device_id usb_quirk_list[] = {
	/* Microsoft LifeCam-VX700 v2.0 */
	{ USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME },
 
+	/* Microsoft Surface Dock Ethernet (RTL8153 GigE) */
+	{ USB_DEVICE(0x045e, 0x07c6), .driver_info = USB_QUIRK_NO_LPM },
+
	/* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */
	{ USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
 

@@ -668,6 +668,7 @@ void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci, struct xhci_ring *ring,
	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
	struct xhci_segment *seg = td->bounce_seg;
	struct urb *urb = td->urb;
+	size_t len;
 
	if (!seg || !urb)
		return;
@@ -678,11 +679,14 @@ void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci, struct xhci_ring *ring,
		return;
	}
 
-	/* for in tranfers we need to copy the data from bounce to sg */
-	sg_pcopy_from_buffer(urb->sg, urb->num_mapped_sgs, seg->bounce_buf,
-			     seg->bounce_len, seg->bounce_offs);
	dma_unmap_single(dev, seg->bounce_dma, ring->bounce_buf_len,
			 DMA_FROM_DEVICE);
+	/* for in tranfers we need to copy the data from bounce to sg */
+	len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
+				   seg->bounce_len, seg->bounce_offs);
+	if (len != seg->bounce_len)
+		xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
+			  len, seg->bounce_len);
	seg->bounce_len = 0;
	seg->bounce_offs = 0;
 }
@@ -3153,6 +3157,7 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
	unsigned int unalign;
	unsigned int max_pkt;
	u32 new_buff_len;
+	size_t len;
 
	max_pkt = GET_MAX_PACKET(usb_endpoint_maxp(&urb->ep->desc));
	unalign = (enqd_len + *trb_buff_len) % max_pkt;
@@ -3183,8 +3188,12 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
 
	/* create a max max_pkt sized bounce buffer pointed to by last trb */
	if (usb_urb_dir_out(urb)) {
-		sg_pcopy_to_buffer(urb->sg, urb->num_mapped_sgs,
+		len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
				   seg->bounce_buf, new_buff_len, enqd_len);
+		if (len != seg->bounce_len)
+			xhci_warn(xhci,
+				  "WARN Wrong bounce buffer write length: %zu != %d\n",
+				  len, seg->bounce_len);
		seg->bounce_dma = dma_map_single(dev, seg->bounce_buf,
						 max_pkt, DMA_TO_DEVICE);
	} else {

@@ -21,6 +21,7 @@
  */
 
 #include <linux/pci.h>
+#include <linux/iopoll.h>
 #include <linux/irq.h>
 #include <linux/log2.h>
 #include <linux/module.h>
@@ -47,7 +48,6 @@ static unsigned int quirks;
 module_param(quirks, uint, S_IRUGO);
 MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
 
-/* TODO: copied from ehci-hcd.c - can this be refactored? */
 /*
  * xhci_handshake - spin reading hc until handshake completes or fails
  * @ptr: address of hc register to be read
@@ -64,18 +64,16 @@ MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
 int xhci_handshake(void __iomem *ptr, u32 mask, u32 done, int usec)
 {
	u32	result;
+	int	ret;
 
-	do {
-		result = readl(ptr);
-		if (result == ~(u32)0)		/* card removed */
-			return -ENODEV;
-		result &= mask;
-		if (result == done)
-			return 0;
-		udelay(1);
-		usec--;
-	} while (usec > 0);
-	return -ETIMEDOUT;
+	ret = readl_poll_timeout_atomic(ptr, result,
+					(result & mask) == done ||
+					result == U32_MAX,
+					1, usec);
+	if (result == U32_MAX)		/* card removed */
+		return -ENODEV;
+
+	return ret;
 }
 
 int xhci_handshake_check_state(struct xhci_hcd *xhci,
@@ -4220,7 +4218,6 @@ int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
	pm_addr = port_array[port_num] + PORTPMSC;
	pm_val = readl(pm_addr);
	hlpm_addr = port_array[port_num] + PORTHLPMC;
-	field = le32_to_cpu(udev->bos->ext_cap->bmAttributes);
 
	xhci_dbg(xhci, "%s port %d USB2 hardware LPM\n",
			enable ? "enable" : "disable", port_num + 1);
@@ -4232,6 +4229,7 @@ int xhci_set_usb2_hardware_lpm(struct usb_hcd *hcd,
			 * default one which works with mixed HIRD and BESL
			 * systems. See XHCI_DEFAULT_BESL definition in xhci.h
			 */
+			field = le32_to_cpu(udev->bos->ext_cap->bmAttributes);
			if ((field & USB_BESL_SUPPORT) &&
			    (field & USB_BESL_BASELINE_VALID))
				hird = USB_GET_BESL_BASELINE(field);

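The open-coded poll loop replaced above has simple semantics: keep sampling the register until the masked value matches `done`, an all-ones read signals a removed card, or the time budget runs out. A userspace model over a fixed sequence of samples (illustrative names and error constants; the kernel's readl_poll_timeout_atomic() does the real MMIO and timing):

```c
#include <stdint.h>

#define MODEL_ENODEV	-19
#define MODEL_ETIMEDOUT	-110

/* Model of the xhci_handshake() condition: scan successive "register
 * reads" until (value & mask) == done, UINT32_MAX indicates card
 * removal, or the samples (standing in for the microsecond budget)
 * are exhausted. */
static int handshake_model(const uint32_t *samples, int nsamples,
			   uint32_t mask, uint32_t done)
{
	int i;

	for (i = 0; i < nsamples; i++) {
		if (samples[i] == UINT32_MAX)	/* card removed */
			return MODEL_ENODEV;
		if ((samples[i] & mask) == done)
			return 0;
	}
	return MODEL_ETIMEDOUT;
}
```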
@@ -103,9 +103,22 @@ static int close_rio(struct inode *inode, struct file *file)
 {
	struct rio_usb_data *rio = &rio_instance;
 
-	rio->isopen = 0;
+	/* against disconnect() */
+	mutex_lock(&rio500_mutex);
+	mutex_lock(&(rio->lock));
 
-	dev_info(&rio->rio_dev->dev, "Rio closed.\n");
+	rio->isopen = 0;
+	if (!rio->present) {
+		/* cleanup has been delayed */
+		kfree(rio->ibuf);
+		kfree(rio->obuf);
+		rio->ibuf = NULL;
+		rio->obuf = NULL;
+	} else {
+		dev_info(&rio->rio_dev->dev, "Rio closed.\n");
+	}
+	mutex_unlock(&(rio->lock));
+	mutex_unlock(&rio500_mutex);
	return 0;
 }
 
@@ -464,15 +477,23 @@ static int probe_rio(struct usb_interface *intf,
 {
	struct usb_device *dev = interface_to_usbdev(intf);
	struct rio_usb_data *rio = &rio_instance;
-	int retval;
+	int retval = 0;
 
-	dev_info(&intf->dev, "USB Rio found at address %d\n", dev->devnum);
+	mutex_lock(&rio500_mutex);
+	if (rio->present) {
+		dev_info(&intf->dev, "Second USB Rio at address %d refused\n", dev->devnum);
+		retval = -EBUSY;
+		goto bail_out;
+	} else {
+		dev_info(&intf->dev, "USB Rio found at address %d\n", dev->devnum);
+	}
 
	retval = usb_register_dev(intf, &usb_rio_class);
	if (retval) {
		dev_err(&dev->dev,
			"Not able to get a minor for this device.\n");
-		return -ENOMEM;
+		retval = -ENOMEM;
+		goto bail_out;
	}
 
	rio->rio_dev = dev;
@@ -481,7 +502,8 @@ static int probe_rio(struct usb_interface *intf,
		dev_err(&dev->dev,
			"probe_rio: Not enough memory for the output buffer\n");
		usb_deregister_dev(intf, &usb_rio_class);
-		return -ENOMEM;
+		retval = -ENOMEM;
+		goto bail_out;
	}
	dev_dbg(&intf->dev, "obuf address:%p\n", rio->obuf);
 
@@ -490,7 +512,8 @@ static int probe_rio(struct usb_interface *intf,
			"probe_rio: Not enough memory for the input buffer\n");
		usb_deregister_dev(intf, &usb_rio_class);
		kfree(rio->obuf);
-		return -ENOMEM;
+		retval = -ENOMEM;
+		goto bail_out;
	}
	dev_dbg(&intf->dev, "ibuf address:%p\n", rio->ibuf);
 
@@ -498,8 +521,10 @@ static int probe_rio(struct usb_interface *intf,
 
	usb_set_intfdata (intf, rio);
	rio->present = 1;
+bail_out:
+	mutex_unlock(&rio500_mutex);
 
-	return 0;
+	return retval;
 }
 
 static void disconnect_rio(struct usb_interface *intf)

@@ -3041,6 +3041,13 @@ static int sisusb_probe(struct usb_interface *intf,
 
	mutex_init(&(sisusb->lock));
 
+	sisusb->sisusb_dev = dev;
+	sisusb->vrambase = SISUSB_PCI_MEMBASE;
+	sisusb->mmiobase = SISUSB_PCI_MMIOBASE;
+	sisusb->mmiosize = SISUSB_PCI_MMIOSIZE;
+	sisusb->ioportbase = SISUSB_PCI_IOPORTBASE;
+	/* Everything else is zero */
+
	/* Register device */
	retval = usb_register_dev(intf, &usb_sisusb_class);
	if (retval) {
@@ -3051,13 +3058,7 @@ static int sisusb_probe(struct usb_interface *intf,
		goto error_1;
	}
 
-	sisusb->sisusb_dev = dev;
-	sisusb->minor = intf->minor;
-	sisusb->vrambase = SISUSB_PCI_MEMBASE;
-	sisusb->mmiobase = SISUSB_PCI_MMIOBASE;
-	sisusb->mmiosize = SISUSB_PCI_MMIOSIZE;
-	sisusb->ioportbase = SISUSB_PCI_IOPORTBASE;
-	/* Everything else is zero */
+	sisusb->minor = intf->minor;
 
	/* Allocate buffers */
	sisusb->ibufsize = SISUSB_IBUF_SIZE;

@@ -315,9 +315,17 @@ static int stub_probe(struct usb_device *udev)
 	const char *udev_busid = dev_name(&udev->dev);
 	struct bus_id_priv *busid_priv;
 	int rc = 0;
+	char save_status;

 	dev_dbg(&udev->dev, "Enter probe\n");

+	/* Not sure if this is our device. Allocate here to avoid
+	 * calling alloc while holding busid_table lock.
+	 */
+	sdev = stub_device_alloc(udev);
+	if (!sdev)
+		return -ENOMEM;
+
 	/* check we should claim or not by busid_table */
 	busid_priv = get_busid_priv(udev_busid);
 	if (!busid_priv || (busid_priv->status == STUB_BUSID_REMOV) ||
@@ -332,6 +340,9 @@ static int stub_probe(struct usb_device *udev)
 		 * See driver_probe_device() in driver/base/dd.c
 		 */
 		rc = -ENODEV;
+		if (!busid_priv)
+			goto sdev_free;
+
 		goto call_put_busid_priv;
 	}

@@ -351,12 +362,6 @@ static int stub_probe(struct usb_device *udev)
 		goto call_put_busid_priv;
 	}

-	/* ok, this is my device */
-	sdev = stub_device_alloc(udev);
-	if (!sdev) {
-		rc = -ENOMEM;
-		goto call_put_busid_priv;
-	}

 	dev_info(&udev->dev,
 		"usbip-host: register new device (bus %u dev %u)\n",
@@ -366,9 +371,16 @@ static int stub_probe(struct usb_device *udev)

 	/* set private data to usb_device */
 	dev_set_drvdata(&udev->dev, sdev);
+
 	busid_priv->sdev = sdev;
+	busid_priv->udev = udev;
+
+	save_status = busid_priv->status;
 	busid_priv->status = STUB_BUSID_ALLOC;

+	/* release the busid_lock */
+	put_busid_priv(busid_priv);
+
 	/*
 	 * Claim this hub port.
 	 * It doesn't matter what value we pass as owner
@@ -386,10 +398,8 @@ static int stub_probe(struct usb_device *udev)
 		dev_err(&udev->dev, "stub_add_files for %s\n", udev_busid);
 		goto err_files;
 	}
-	busid_priv->status = STUB_BUSID_ALLOC;
-
-	return 0;
+	rc = 0;
+	goto call_put_busid_priv;

 err_files:
 	usb_hub_release_port(udev->parent, udev->portnum,
@@ -398,23 +408,30 @@ err_port:
 	dev_set_drvdata(&udev->dev, NULL);
 	usb_put_dev(udev);

+	/* we already have busid_priv, just lock busid_lock */
+	spin_lock(&busid_priv->busid_lock);
 	busid_priv->sdev = NULL;
-	stub_device_free(sdev);
+	busid_priv->status = save_status;
+	spin_unlock(&busid_priv->busid_lock);
+	/* lock is released - go to free */
+	goto sdev_free;

 call_put_busid_priv:
+	/* release the busid_lock */
 	put_busid_priv(busid_priv);

+sdev_free:
+	stub_device_free(sdev);
+
 	return rc;
 }

 static void shutdown_busid(struct bus_id_priv *busid_priv)
 {
-	if (busid_priv->sdev && !busid_priv->shutdown_busid) {
-		busid_priv->shutdown_busid = 1;
-		usbip_event_add(&busid_priv->sdev->ud, SDEV_EVENT_REMOVED);
+	usbip_event_add(&busid_priv->sdev->ud, SDEV_EVENT_REMOVED);

-		/* wait for the stop of the event handler */
-		usbip_stop_eh(&busid_priv->sdev->ud);
-	}
+	/* wait for the stop of the event handler */
+	usbip_stop_eh(&busid_priv->sdev->ud);
 }

 /*
@@ -441,11 +458,16 @@ static void stub_disconnect(struct usb_device *udev)
 	/* get stub_device */
 	if (!sdev) {
 		dev_err(&udev->dev, "could not get device");
-		goto call_put_busid_priv;
+		/* release busid_lock */
+		put_busid_priv(busid_priv);
+		return;
 	}

 	dev_set_drvdata(&udev->dev, NULL);

+	/* release busid_lock before call to remove device files */
+	put_busid_priv(busid_priv);
+
 	/*
 	 * NOTE: rx/tx threads are invoked for each usb_device.
 	 */
@@ -456,27 +478,36 @@ static void stub_disconnect(struct usb_device *udev)
 				  (struct usb_dev_state *) udev);
 	if (rc) {
 		dev_dbg(&udev->dev, "unable to release port\n");
-		goto call_put_busid_priv;
+		return;
 	}

 	/* If usb reset is called from event handler */
 	if (usbip_in_eh(current))
-		goto call_put_busid_priv;
+		return;

+	/* we already have busid_priv, just lock busid_lock */
+	spin_lock(&busid_priv->busid_lock);
+	if (!busid_priv->shutdown_busid)
+		busid_priv->shutdown_busid = 1;
+	/* release busid_lock */
+	spin_unlock(&busid_priv->busid_lock);

 	/* shutdown the current connection */
 	shutdown_busid(busid_priv);

 	usb_put_dev(sdev->udev);

+	/* we already have busid_priv, just lock busid_lock */
+	spin_lock(&busid_priv->busid_lock);
 	/* free sdev */
 	busid_priv->sdev = NULL;
 	stub_device_free(sdev);

 	if (busid_priv->status == STUB_BUSID_ALLOC)
 		busid_priv->status = STUB_BUSID_ADDED;
-
-call_put_busid_priv:
-	put_busid_priv(busid_priv);
+	/* release busid_lock */
+	spin_unlock(&busid_priv->busid_lock);
+	return;
 }

 #ifdef CONFIG_PM
@@ -126,8 +126,6 @@ void xen_pcibk_reset_device(struct pci_dev *dev)
 		if (pci_is_enabled(dev))
 			pci_disable_device(dev);

-		pci_write_config_word(dev, PCI_COMMAND, 0);
-
 		dev->is_busmaster = 0;
 	} else {
 		pci_read_config_word(dev, PCI_COMMAND, &cmd);
@@ -536,7 +536,7 @@ static int xenbus_file_open(struct inode *inode, struct file *filp)
 	if (xen_store_evtchn == 0)
 		return -ENOENT;

-	nonseekable_open(inode, filp);
+	stream_open(inode, filp);

 	u = kzalloc(sizeof(*u), GFP_KERNEL);
 	if (u == NULL)
@@ -2826,6 +2826,12 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
 	root->log_transid++;
 	log->log_transid = root->log_transid;
 	root->log_start_pid = 0;
+	/*
+	 * Update or create log root item under the root's log_mutex to prevent
+	 * races with concurrent log syncs that can lead to failure to update
+	 * log root item because it was not created yet.
+	 */
+	ret = update_log_root(trans, log);
 	/*
 	 * IO has been started, blocks of the log tree have WRITTEN flag set
 	 * in their headers. new modifications of the log will be written to
@@ -2845,8 +2851,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,

 	mutex_unlock(&log_root_tree->log_mutex);

-	ret = update_log_root(trans, log);
-
 	mutex_lock(&log_root_tree->log_mutex);
 	if (atomic_dec_and_test(&log_root_tree->log_writers)) {
 		/*
@@ -2892,7 +2892,9 @@ cifs_read_allocate_pages(struct cifs_readdata *rdata, unsigned int nr_pages)
 	}

 	if (rc) {
-		for (i = 0; i < nr_pages; i++) {
+		unsigned int nr_page_failed = i;
+
+		for (i = 0; i < nr_page_failed; i++) {
 			put_page(rdata->pages[i]);
 			rdata->pages[i] = NULL;
 		}
@@ -1994,10 +1994,8 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
 		rem += pipe->bufs[(pipe->curbuf + idx) & (pipe->buffers - 1)].len;

 	ret = -EINVAL;
-	if (rem < len) {
-		pipe_unlock(pipe);
-		goto out;
-	}
+	if (rem < len)
+		goto out_free;

 	rem = len;
 	while (rem) {
@@ -2015,7 +2013,9 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
 			pipe->curbuf = (pipe->curbuf + 1) & (pipe->buffers - 1);
 			pipe->nrbufs--;
 		} else {
-			pipe_buf_get(pipe, ibuf);
+			if (!pipe_buf_get(pipe, ibuf))
+				goto out_free;
+
 			*obuf = *ibuf;
 			obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
 			obuf->len = rem;
@@ -2038,11 +2038,11 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
 	ret = fuse_dev_do_write(fud, &cs, len);

 	pipe_lock(pipe);
+out_free:
 	for (idx = 0; idx < nbuf; idx++)
 		pipe_buf_release(pipe, &bufs[idx]);
 	pipe_unlock(pipe);
-
 out:
 	kfree(bufs);
 	return ret;
 }
@@ -194,7 +194,9 @@ void fuse_finish_open(struct inode *inode, struct file *file)
 		file->f_op = &fuse_direct_io_file_operations;
 	if (!(ff->open_flags & FOPEN_KEEP_CACHE))
 		invalidate_inode_pages2(inode->i_mapping);
-	if (ff->open_flags & FOPEN_NONSEEKABLE)
+	if (ff->open_flags & FOPEN_STREAM)
+		stream_open(inode, file);
+	else if (ff->open_flags & FOPEN_NONSEEKABLE)
 		nonseekable_open(inode, file);
 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
 		struct fuse_inode *fi = get_fuse_inode(inode);
@@ -3002,7 +3004,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
 	    offset + length > i_size_read(inode)) {
 		err = inode_newsize_ok(inode, offset + length);
 		if (err)
-			return err;
+			goto out;
 	}

 	if (!(mode & FALLOC_FL_KEEP_SIZE))
fs/open.c (18 additions):

@@ -1205,3 +1205,21 @@ int nonseekable_open(struct inode *inode, struct file *filp)
 }

 EXPORT_SYMBOL(nonseekable_open);
+
+/*
+ * stream_open is used by subsystems that want stream-like file descriptors.
+ * Such file descriptors are not seekable and don't have notion of position
+ * (file.f_pos is always 0). Contrary to file descriptors of other regular
+ * files, .read() and .write() can run simultaneously.
+ *
+ * stream_open never fails and is marked to return int so that it could be
+ * directly used as file_operations.open .
+ */
+int stream_open(struct inode *inode, struct file *filp)
+{
+	filp->f_mode &= ~(FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE | FMODE_ATOMIC_POS);
+	filp->f_mode |= FMODE_STREAM;
+	return 0;
+}
+
+EXPORT_SYMBOL(stream_open);
@@ -193,9 +193,9 @@ EXPORT_SYMBOL(generic_pipe_buf_steal);
 *	in the tee() system call, when we duplicate the buffers in one
 *	pipe into another.
 */
-void generic_pipe_buf_get(struct pipe_inode_info *pipe, struct pipe_buffer *buf)
+bool generic_pipe_buf_get(struct pipe_inode_info *pipe, struct pipe_buffer *buf)
 {
-	get_page(buf->page);
+	return try_get_page(buf->page);
 }
 EXPORT_SYMBOL(generic_pipe_buf_get);
@@ -572,12 +572,13 @@ EXPORT_SYMBOL(vfs_write);

 static inline loff_t file_pos_read(struct file *file)
 {
-	return file->f_pos;
+	return file->f_mode & FMODE_STREAM ? 0 : file->f_pos;
 }

 static inline void file_pos_write(struct file *file, loff_t pos)
 {
-	file->f_pos = pos;
+	if ((file->f_mode & FMODE_STREAM) == 0)
+		file->f_pos = pos;
 }

 SYSCALL_DEFINE3(read, unsigned int, fd, char __user *, buf, size_t, count)
fs/splice.c (12 changes):

@@ -1585,7 +1585,11 @@ retry:
 			 * Get a reference to this pipe buffer,
 			 * so we can copy the contents over.
 			 */
-			pipe_buf_get(ipipe, ibuf);
+			if (!pipe_buf_get(ipipe, ibuf)) {
+				if (ret == 0)
+					ret = -EFAULT;
+				break;
+			}
 			*obuf = *ibuf;

 			/*
@@ -1659,7 +1663,11 @@ static int link_pipe(struct pipe_inode_info *ipipe,
 		 * Get a reference to this pipe buffer,
 		 * so we can copy the contents over.
 		 */
-		pipe_buf_get(ipipe, ibuf);
+		if (!pipe_buf_get(ipipe, ibuf)) {
+			if (ret == 0)
+				ret = -EFAULT;
+			break;
+		}

 		obuf = opipe->bufs + nbuf;
 		*obuf = *ibuf;
@@ -58,7 +58,7 @@ static __always_inline unsigned long hweight_long(unsigned long w)
 */
 static inline __u64 rol64(__u64 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (64 - shift));
+	return (word << (shift & 63)) | (word >> ((-shift) & 63));
 }

 /**
@@ -68,7 +68,7 @@ static inline __u64 rol64(__u64 word, unsigned int shift)
 */
 static inline __u64 ror64(__u64 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (64 - shift));
+	return (word >> (shift & 63)) | (word << ((-shift) & 63));
 }

 /**
@@ -78,7 +78,7 @@ static inline __u64 ror64(__u64 word, unsigned int shift)
 */
 static inline __u32 rol32(__u32 word, unsigned int shift)
 {
-	return (word << shift) | (word >> ((-shift) & 31));
+	return (word << (shift & 31)) | (word >> ((-shift) & 31));
 }

 /**
@@ -88,7 +88,7 @@ static inline __u32 rol32(__u32 word, unsigned int shift)
 */
 static inline __u32 ror32(__u32 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (32 - shift));
+	return (word >> (shift & 31)) | (word << ((-shift) & 31));
 }

 /**
@@ -98,7 +98,7 @@ static inline __u32 ror32(__u32 word, unsigned int shift)
 */
 static inline __u16 rol16(__u16 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (16 - shift));
+	return (word << (shift & 15)) | (word >> ((-shift) & 15));
 }

 /**
@@ -108,7 +108,7 @@ static inline __u16 rol16(__u16 word, unsigned int shift)
 */
 static inline __u16 ror16(__u16 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (16 - shift));
+	return (word >> (shift & 15)) | (word << ((-shift) & 15));
 }

 /**
@@ -118,7 +118,7 @@ static inline __u16 ror16(__u16 word, unsigned int shift)
 */
 static inline __u8 rol8(__u8 word, unsigned int shift)
 {
-	return (word << shift) | (word >> (8 - shift));
+	return (word << (shift & 7)) | (word >> ((-shift) & 7));
 }

 /**
@@ -128,7 +128,7 @@ static inline __u8 rol8(__u8 word, unsigned int shift)
 */
 static inline __u8 ror8(__u8 word, unsigned int shift)
 {
-	return (word >> shift) | (word << (8 - shift));
+	return (word >> (shift & 7)) | (word << ((-shift) & 7));
 }

 /**
@@ -273,11 +273,15 @@ extern enum cpuhp_smt_control cpu_smt_control;
 extern void cpu_smt_disable(bool force);
 extern void cpu_smt_check_topology_early(void);
 extern void cpu_smt_check_topology(void);
+extern int cpuhp_smt_enable(void);
+extern int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval);
 #else
 # define cpu_smt_control		(CPU_SMT_ENABLED)
 static inline void cpu_smt_disable(bool force) { }
 static inline void cpu_smt_check_topology_early(void) { }
 static inline void cpu_smt_check_topology(void) { }
+static inline int cpuhp_smt_enable(void) { return 0; }
+static inline int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) { return 0; }
 #endif

 /*
@@ -144,6 +144,9 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
 /* Has write method(s) */
 #define FMODE_CAN_WRITE	((__force fmode_t)0x40000)

+/* File is stream-like */
+#define FMODE_STREAM		((__force fmode_t)0x200000)
+
 /* File was opened by fanotify and shouldn't generate fanotify events */
 #define FMODE_NONOTIFY		((__force fmode_t)0x4000000)

@@ -2887,6 +2890,7 @@ extern loff_t no_seek_end_llseek_size(struct file *, loff_t, int, loff_t);
 extern loff_t no_seek_end_llseek(struct file *, loff_t, int);
 extern int generic_file_open(struct inode * inode, struct file * filp);
 extern int nonseekable_open(struct inode * inode, struct file * filp);
+extern int stream_open(struct inode * inode, struct file * filp);

 #ifdef CONFIG_BLOCK
 typedef void (dio_submit_t)(struct bio *bio, struct inode *inode,
@@ -51,6 +51,7 @@ struct list_lru {
 	struct list_lru_node	*node;
 #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
 	struct list_head	list;
+	bool			memcg_aware;
 #endif
 };
@@ -782,6 +782,10 @@ static inline bool is_zone_device_page(const struct page *page)
 }
 #endif

+/* 127: arbitrary random number, small enough to assemble well */
+#define page_ref_zero_or_close_to_overflow(page) \
+	((unsigned int) page_ref_count(page) + 127u <= 127u)
+
 static inline void get_page(struct page *page)
 {
 	page = compound_head(page);
@@ -789,7 +793,7 @@ static inline void get_page(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_refcount.
 	 */
-	VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	page_ref_inc(page);

 	if (unlikely(is_zone_device_page(page)))
@@ -107,18 +107,20 @@ struct pipe_buf_operations {
 	/*
 	 * Get a reference to the pipe buffer.
 	 */
-	void (*get)(struct pipe_inode_info *, struct pipe_buffer *);
+	bool (*get)(struct pipe_inode_info *, struct pipe_buffer *);
 };

 /**
 * pipe_buf_get - get a reference to a pipe_buffer
 * @pipe:	the pipe that the buffer belongs to
 * @buf:	the buffer to get a reference to
+ *
+ * Return: %true if the reference was successfully obtained.
 */
-static inline void pipe_buf_get(struct pipe_inode_info *pipe,
+static inline __must_check bool pipe_buf_get(struct pipe_inode_info *pipe,
				struct pipe_buffer *buf)
 {
-	buf->ops->get(pipe, buf);
+	return buf->ops->get(pipe, buf);
 }

 /**
@@ -178,7 +180,7 @@ struct pipe_inode_info *alloc_pipe_info(void);
 void free_pipe_info(struct pipe_inode_info *);

 /* Generic pipe buffer ops functions */
-void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
+bool generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
 int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
 int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
 void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
@@ -306,14 +306,12 @@ void synchronize_rcu(void);

 static inline void __rcu_read_lock(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		preempt_disable();
+	preempt_disable();
 }

 static inline void __rcu_read_unlock(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		preempt_enable();
+	preempt_enable();
 }

 static inline void synchronize_rcu(void)
@@ -17,6 +17,7 @@ static inline u32 arp_hashfn(const void *pkey, const struct net_device *dev, u32
 	return val * hash_rnd[0];
 }

+#ifdef CONFIG_INET
 static inline struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev, u32 key)
 {
 	if (dev->flags & (IFF_LOOPBACK | IFF_POINTOPOINT))
@@ -24,6 +25,13 @@ static inline struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev

 	return ___neigh_lookup_noref(&arp_tbl, neigh_key_eq32, arp_hashfn, &key, dev);
 }
+#else
+static inline
+struct neighbour *__ipv4_neigh_lookup_noref(struct net_device *dev, u32 key)
+{
+	return NULL;
+}
+#endif

 static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32 key)
 {
@@ -756,7 +756,7 @@ struct drm_i915_gem_execbuffer2 {
 	__u32 num_cliprects;
 	/** This is a struct drm_clip_rect *cliprects */
 	__u64 cliprects_ptr;
-#define I915_EXEC_RING_MASK		(7<<0)
+#define I915_EXEC_RING_MASK		(0x3f)
 #define I915_EXEC_DEFAULT		(0<<0)
 #define I915_EXEC_RENDER		(1<<0)
 #define I915_EXEC_BSD			(2<<0)
@@ -215,10 +215,12 @@ struct fuse_file_lock {
 * FOPEN_DIRECT_IO: bypass page cache for this open file
 * FOPEN_KEEP_CACHE: don't invalidate the data cache on open
 * FOPEN_NONSEEKABLE: the file is not seekable
+ * FOPEN_STREAM: the file is stream-like (no file position at all)
 */
 #define FOPEN_DIRECT_IO		(1 << 0)
 #define FOPEN_KEEP_CACHE	(1 << 1)
 #define FOPEN_NONSEEKABLE	(1 << 2)
+#define FOPEN_STREAM		(1 << 4)

 /**
 * INIT request/reply flags
@@ -301,8 +301,10 @@ static inline int TLV_SET(void *tlv, __u16 type, void *data, __u16 len)
 	tlv_ptr = (struct tlv_desc *)tlv;
 	tlv_ptr->tlv_type = htons(type);
 	tlv_ptr->tlv_len = htons(tlv_len);
-	if (len && data)
-		memcpy(TLV_DATA(tlv_ptr), data, tlv_len);
+	if (len && data) {
+		memcpy(TLV_DATA(tlv_ptr), data, len);
+		memset(TLV_DATA(tlv_ptr) + len, 0, TLV_SPACE(len) - tlv_len);
+	}
 	return TLV_SPACE(len);
 }

@@ -399,8 +401,10 @@ static inline int TCM_SET(void *msg, __u16 cmd, __u16 flags,
 	tcm_hdr->tcm_len   = htonl(msg_len);
 	tcm_hdr->tcm_type  = htons(cmd);
 	tcm_hdr->tcm_flags = htons(flags);
-	if (data_len && data)
+	if (data_len && data) {
 		memcpy(TCM_DATA(msg), data, data_len);
+		memset(TCM_DATA(msg) + data_len, 0, TCM_SPACE(data_len) - msg_len);
+	}
 	return TCM_SPACE(data_len);
 }
@@ -2064,7 +2064,7 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 }

-static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
 	int cpu, ret = 0;

@@ -2098,7 +2098,7 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 	return ret;
 }

-static int cpuhp_smt_enable(void)
+int cpuhp_smt_enable(void)
 {
 	int cpu, ret = 0;
@@ -256,6 +256,11 @@ void swsusp_show_speed(ktime_t start, ktime_t stop,
		kps / 1000, (kps % 1000) / 10);
 }

+__weak int arch_resume_nosmt(void)
+{
+	return 0;
+}
+
 /**
 * create_image - Create a hibernation image.
 * @platform_mode: Whether or not to use the platform driver.
@@ -322,6 +327,10 @@ static int create_image(int platform_mode)
 Enable_cpus:
	enable_nonboot_cpus();

+	/* Allow architectures to do nosmt-specific post-resume dances */
+	if (!in_suspend)
+		error = arch_resume_nosmt();
+
 Platform_finish:
	platform_finish(platform_mode);
@@ -2249,6 +2249,8 @@ relock:
 		if (signal_group_exit(signal)) {
 			ksig->info.si_signo = signr = SIGKILL;
 			sigdelset(&current->pending.signal, SIGKILL);
+			trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO,
+				&sighand->action[SIGKILL - 1]);
 			recalc_sigpending();
 			goto fatal;
 		}
@@ -6301,12 +6301,16 @@ static void buffer_pipe_buf_release(struct pipe_inode_info *pipe,
 	buf->private = 0;
 }

-static void buffer_pipe_buf_get(struct pipe_inode_info *pipe,
+static bool buffer_pipe_buf_get(struct pipe_inode_info *pipe,
				struct pipe_buffer *buf)
 {
 	struct buffer_ref *ref = (struct buffer_ref *)buf->private;

+	if (ref->ref > INT_MAX/2)
+		return false;
+
 	ref->ref++;
+	return true;
 }

 /* Pipe buffer operations for a buffer. */
mm/gup.c (54 changes):

@@ -153,7 +153,10 @@ retry:
 	}

 	if (flags & FOLL_GET) {
-		get_page(page);
+		if (unlikely(!try_get_page(page))) {
+			page = ERR_PTR(-ENOMEM);
+			goto out;
+		}

 		/* drop the pgmap reference now that we hold the page */
 		if (pgmap) {
@@ -292,7 +295,10 @@ struct page *follow_page_mask(struct vm_area_struct *vma,
 		if (pmd_trans_unstable(pmd))
 			ret = -EBUSY;
 	} else {
-		get_page(page);
+		if (unlikely(!try_get_page(page))) {
+			spin_unlock(ptl);
+			return ERR_PTR(-ENOMEM);
+		}
 		spin_unlock(ptl);
 		lock_page(page);
 		ret = split_huge_page(page);
@@ -348,7 +354,10 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 			goto unmap;
 		*page = pte_page(*pte);
 	}
-	get_page(*page);
+	if (unlikely(!try_get_page(*page))) {
+		ret = -ENOMEM;
+		goto unmap;
+	}
 out:
 	ret = 0;
 unmap:
@@ -1231,6 +1240,20 @@ struct page *get_dump_page(unsigned long addr)
 */
 #ifdef CONFIG_HAVE_GENERIC_RCU_GUP

+/*
+ * Return the compound head page with ref appropriately incremented,
+ * or NULL if that failed.
+ */
+static inline struct page *try_get_compound_head(struct page *page, int refs)
+{
+	struct page *head = compound_head(page);
+	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+		return NULL;
+	if (unlikely(!page_cache_add_speculative(head, refs)))
+		return NULL;
+	return head;
+}
+
 #ifdef __HAVE_ARCH_PTE_SPECIAL
 static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
			 int write, struct page **pages, int *nr)
@@ -1263,9 +1286,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,

		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
		page = pte_page(pte);
-		head = compound_head(page);

-		if (!page_cache_get_speculative(head))
+		head = try_get_compound_head(page, 1);
+		if (!head)
			goto pte_unmap;

		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
@@ -1313,17 +1336,16 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
		return 0;

	refs = 0;
-	head = pmd_page(orig);
-	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
		pages[*nr] = page;
		(*nr)++;
		page++;
		refs++;
	} while (addr += PAGE_SIZE, addr != end);

-	if (!page_cache_add_speculative(head, refs)) {
+	head = try_get_compound_head(pmd_page(orig), refs);
+	if (!head) {
		*nr -= refs;
		return 0;
	}
@@ -1348,17 +1370,16 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
		return 0;

	refs = 0;
-	head = pud_page(orig);
-	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
		pages[*nr] = page;
		(*nr)++;
		page++;
		refs++;
	} while (addr += PAGE_SIZE, addr != end);

-	if (!page_cache_add_speculative(head, refs)) {
+	head = try_get_compound_head(pud_page(orig), refs);
+	if (!head) {
		*nr -= refs;
		return 0;
	}
@@ -1384,17 +1405,16 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
		return 0;

	refs = 0;
-	head = pgd_page(orig);
-	page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
+	page = pgd_page(orig) + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
		pages[*nr] = page;
		(*nr)++;
		page++;
		refs++;
	} while (addr += PAGE_SIZE, addr != end);

-	if (!page_cache_add_speculative(head, refs)) {
+	head = try_get_compound_head(pgd_page(orig), refs);
+	if (!head) {
		*nr -= refs;
		return 0;
	}
mm/hugetlb.c (16 changes):

@@ -3984,6 +3984,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
	unsigned long vaddr = *position;
	unsigned long remainder = *nr_pages;
	struct hstate *h = hstate_vma(vma);
+	int err = -EFAULT;

	while (vaddr < vma->vm_end && remainder) {
		pte_t *pte;
@@ -4055,6 +4056,19 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,

		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
		page = pte_page(huge_ptep_get(pte));
+
+		/*
+		 * Instead of doing 'try_get_page()' below in the same_page
+		 * loop, just check the count once here.
+		 */
+		if (unlikely(page_count(page) <= 0)) {
+			if (pages) {
+				spin_unlock(ptl);
+				remainder = 0;
+				err = -ENOMEM;
+				break;
+			}
+		}
same_page:
		if (pages) {
			pages[i] = mem_map_offset(page, pfn_offset);
@@ -4081,7 +4095,7 @@ same_page:
	*nr_pages = remainder;
	*position = vaddr;

-	return i ? i : -EFAULT;
+	return i ? i : err;
}

#ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
@@ -42,11 +42,7 @@ static void list_lru_unregister(struct list_lru *lru)
 #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }

 static inline struct list_lru_one *
@@ -389,6 +385,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
	int i;

+	lru->memcg_aware = memcg_aware;
+
	if (!memcg_aware)
		return 0;
@@ -4932,7 +4932,6 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
	skb_reset_mac_header(skb);
	skb_gro_reset_offset(skb);

-	eth = skb_gro_header_fast(skb, 0);
	if (unlikely(skb_gro_header_hard(skb, hlen))) {
		eth = skb_gro_header_slow(skb, hlen, 0);
		if (unlikely(!eth)) {
@@ -4942,6 +4941,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
			return NULL;
		}
	} else {
+		eth = (const struct ethhdr *)skb->data;
		gro_pull_from_frag0(skb, hlen);
		NAPI_GRO_CB(skb)->frag0 += hlen;
		NAPI_GRO_CB(skb)->frag0_len -= hlen;
@@ -878,8 +878,13 @@ static noinline_for_stack int ethtool_get_drvinfo(struct net_device *dev,
		if (rc >= 0)
			info.n_priv_flags = rc;
	}
-	if (ops->get_regs_len)
-		info.regdump_len = ops->get_regs_len(dev);
+	if (ops->get_regs_len) {
+		int ret = ops->get_regs_len(dev);
+
+		if (ret > 0)
+			info.regdump_len = ret;
+	}
+
	if (ops->get_eeprom_len)
		info.eedump_len = ops->get_eeprom_len(dev);

@@ -1380,6 +1385,9 @@ static int ethtool_get_regs(struct net_device *dev, char __user *useraddr)
		return -EFAULT;

	reglen = ops->get_regs_len(dev);
+	if (reglen <= 0)
+		return reglen;
+
	if (regs.len > reglen)
		regs.len = reglen;

@@ -1390,13 +1398,16 @@ static int ethtool_get_regs(struct net_device *dev, char __user *useraddr)
			return -ENOMEM;
	}

+	if (regs.len < reglen)
+		reglen = regs.len;
+
	ops->get_regs(dev, &regs, regbuf);

	ret = -EFAULT;
	if (copy_to_user(useraddr, &regs, sizeof(regs)))
		goto out;
	useraddr += offsetof(struct ethtool_regs, data);
-	if (regbuf && copy_to_user(useraddr, regbuf, regs.len))
+	if (copy_to_user(useraddr, regbuf, reglen))
		goto out;
	ret = 0;
@@ -30,6 +30,7 @@
 #include <linux/times.h>
 #include <net/net_namespace.h>
 #include <net/neighbour.h>
+#include <net/arp.h>
 #include <net/dst.h>
 #include <net/sock.h>
 #include <net/netevent.h>
@@ -2510,7 +2511,13 @@ int neigh_xmit(int index, struct net_device *dev,
		if (!tbl)
			goto out;
		rcu_read_lock_bh();
-		neigh = __neigh_lookup_noref(tbl, addr, dev);
+		if (index == NEIGH_ARP_TABLE) {
+			u32 key = *((u32 *)addr);
+
+			neigh = __ipv4_neigh_lookup_noref(dev, key);
+		} else {
+			neigh = __neigh_lookup_noref(tbl, addr, dev);
+		}
		if (!neigh)
			neigh = __neigh_create(tbl, addr, dev, false);
		err = PTR_ERR(neigh);
@@ -3147,7 +3147,13 @@ static int pktgen_wait_thread_run(struct pktgen_thread *t)
 {
 	while (thread_is_running(t)) {
 
+		/* note: 't' will still be around even after the unlock/lock
+		 * cycle because pktgen_thread threads are only cleared at
+		 * net exit
+		 */
+		mutex_unlock(&pktgen_thread_lock);
 		msleep_interruptible(100);
+		mutex_lock(&pktgen_thread_lock);
 
 		if (signal_pending(current))
 			goto signal;
@@ -3162,6 +3168,10 @@ static int pktgen_wait_all_threads_run(struct pktgen_net *pn)
 	struct pktgen_thread *t;
 	int sig = 1;
 
+	/* prevent from racing with rmmod */
+	if (!try_module_get(THIS_MODULE))
+		return sig;
+
 	mutex_lock(&pktgen_thread_lock);
 
 	list_for_each_entry(t, &pn->pktgen_threads, th_list) {
@@ -3175,6 +3185,7 @@ static int pktgen_wait_all_threads_run(struct pktgen_net *pn)
 		t->control |= (T_STOP);
 
 	mutex_unlock(&pktgen_thread_lock);
+	module_put(THIS_MODULE);
 	return sig;
 }

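The pktgen hunk above drops `pktgen_thread_lock` around `msleep_interruptible()` so the lock is never held while the waiter sleeps. A self-contained sketch of that unlock/sleep/relock shape, with the lock and the sleep modeled by plain flags so the invariant can actually be checked (all names are hypothetical, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

static bool lock_held;
static bool slept_while_locked;

static void take_lock(void)    { lock_held = true; }
static void release_lock(void) { lock_held = false; }

/* Stand-in for msleep_interruptible(): record any sleep under the lock. */
static void sleep_a_bit(void)
{
	if (lock_held)
		slept_while_locked = true;
}

/* The fixed pattern: drop the lock across the sleep, retake it after,
 * so other tasks contending on the lock can make progress.
 */
static void wait_for_threads(int iterations)
{
	take_lock();
	while (iterations--) {
		release_lock();
		sleep_a_bit();
		take_lock();
	}
	release_lock();
}
```

Running the loop confirms the sleep never happens with the lock held.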
@@ -190,6 +190,17 @@ static void ip_ma_put(struct ip_mc_list *im)
 	     pmc != NULL;					\
 	     pmc = rtnl_dereference(pmc->next_rcu))
 
+static void ip_sf_list_clear_all(struct ip_sf_list *psf)
+{
+	struct ip_sf_list *next;
+
+	while (psf) {
+		next = psf->sf_next;
+		kfree(psf);
+		psf = next;
+	}
+}
+
 #ifdef CONFIG_IP_MULTICAST
 
 /*
@@ -635,6 +646,13 @@ static void igmpv3_clear_zeros(struct ip_sf_list **ppsf)
 	}
 }
 
+static void kfree_pmc(struct ip_mc_list *pmc)
+{
+	ip_sf_list_clear_all(pmc->sources);
+	ip_sf_list_clear_all(pmc->tomb);
+	kfree(pmc);
+}
+
 static void igmpv3_send_cr(struct in_device *in_dev)
 {
 	struct ip_mc_list *pmc, *pmc_prev, *pmc_next;
@@ -671,7 +689,7 @@ static void igmpv3_send_cr(struct in_device *in_dev)
 			else
 				in_dev->mc_tomb = pmc_next;
 			in_dev_put(pmc->interface);
-			kfree(pmc);
+			kfree_pmc(pmc);
 		} else
 			pmc_prev = pmc;
 	}
@@ -1195,12 +1213,16 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im)
 		im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
 		if (im->sfmode == MCAST_INCLUDE) {
 			im->tomb = pmc->tomb;
+			pmc->tomb = NULL;
+
 			im->sources = pmc->sources;
+			pmc->sources = NULL;
+
 			for (psf = im->sources; psf; psf = psf->sf_next)
 				psf->sf_crcount = im->crcount;
 		}
 		in_dev_put(pmc->interface);
-		kfree(pmc);
+		kfree_pmc(pmc);
 	}
 	spin_unlock_bh(&im->lock);
 }
@@ -1221,21 +1243,18 @@ static void igmpv3_clear_delrec(struct in_device *in_dev)
 		nextpmc = pmc->next;
 		ip_mc_clear_src(pmc);
 		in_dev_put(pmc->interface);
-		kfree(pmc);
+		kfree_pmc(pmc);
 	}
 	/* clear dead sources, too */
 	rcu_read_lock();
 	for_each_pmc_rcu(in_dev, pmc) {
-		struct ip_sf_list *psf, *psf_next;
+		struct ip_sf_list *psf;
 
 		spin_lock_bh(&pmc->lock);
 		psf = pmc->tomb;
 		pmc->tomb = NULL;
 		spin_unlock_bh(&pmc->lock);
-		for (; psf; psf = psf_next) {
-			psf_next = psf->sf_next;
-			kfree(psf);
-		}
+		ip_sf_list_clear_all(psf);
 	}
 	rcu_read_unlock();
 }
@@ -2099,7 +2118,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
 
 static void ip_mc_clear_src(struct ip_mc_list *pmc)
 {
-	struct ip_sf_list *psf, *nextpsf, *tomb, *sources;
+	struct ip_sf_list *tomb, *sources;
 
 	spin_lock_bh(&pmc->lock);
 	tomb = pmc->tomb;
@@ -2111,14 +2130,8 @@ static void ip_mc_clear_src(struct ip_mc_list *pmc)
 	pmc->sfcount[MCAST_EXCLUDE] = 1;
 	spin_unlock_bh(&pmc->lock);
 
-	for (psf = tomb; psf; psf = nextpsf) {
-		nextpsf = psf->sf_next;
-		kfree(psf);
-	}
-	for (psf = sources; psf; psf = nextpsf) {
-		nextpsf = psf->sf_next;
-		kfree(psf);
-	}
+	ip_sf_list_clear_all(tomb);
+	ip_sf_list_clear_all(sources);
 }
 
 /* Join a multicast group

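The new `ip_sf_list_clear_all()` helper in the igmp hunks is a classic walk-and-free over a singly linked list: the next pointer must be saved before `kfree()` invalidates the node. A minimal userspace equivalent of the same loop (illustrative names, not the kernel code):

```c
#include <assert.h>
#include <stdlib.h>

struct sf_node {
	struct sf_node *next;
	int val;
};

/* Free every node on the list; 'next' must be cached before free()
 * destroys the node it lives in. Returns how many nodes were freed.
 */
static int list_clear_all(struct sf_node *n)
{
	int freed = 0;

	while (n) {
		struct sf_node *next = n->next;

		free(n);
		n = next;
		freed++;
	}
	return freed;
}
```

A NULL head is handled naturally, which is why the kernel helper can be called unconditionally on `pmc->tomb` and `pmc->sources`.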
@@ -283,7 +283,9 @@ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 			/* Binding to link-local address requires an interface */
 			if (!sk->sk_bound_dev_if)
 				goto out_unlock;
+		}
 
+		if (sk->sk_bound_dev_if) {
 			err = -ENODEV;
 			dev = dev_get_by_index_rcu(sock_net(sk),
 						   sk->sk_bound_dev_if);
@@ -772,6 +774,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	struct sockcm_cookie sockc;
 	struct ipcm6_cookie ipc6;
 	int addr_len = msg->msg_namelen;
+	int hdrincl;
 	u16 proto;
 	int err;
 
@@ -785,6 +788,13 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	if (msg->msg_flags & MSG_OOB)
 		return -EOPNOTSUPP;
 
+	/* hdrincl should be READ_ONCE(inet->hdrincl)
+	 * but READ_ONCE() doesn't work with bit fields.
+	 * Doing this indirectly yields the same result.
+	 */
+	hdrincl = inet->hdrincl;
+	hdrincl = READ_ONCE(hdrincl);
+
 	/*
 	 *	Get and verify the address.
 	 */
@@ -879,11 +889,14 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	opt = ipv6_fixup_options(&opt_space, opt);
 
 	fl6.flowi6_proto = proto;
-	rfv.msg = msg;
-	rfv.hlen = 0;
-	err = rawv6_probe_proto_opt(&rfv, &fl6);
-	if (err)
-		goto out;
+
+	if (!hdrincl) {
+		rfv.msg = msg;
+		rfv.hlen = 0;
+		err = rawv6_probe_proto_opt(&rfv, &fl6);
+		if (err)
+			goto out;
+	}
 
 	if (!ipv6_addr_any(daddr))
 		fl6.daddr = *daddr;
@@ -900,7 +913,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		fl6.flowi6_oif = np->ucast_oif;
 	security_sk_classify_flow(sk, flowi6_to_flowi(&fl6));
 
-	if (inet->hdrincl)
+	if (hdrincl)
 		fl6.flowi6_flags |= FLOWI_FLAG_KNOWN_NH;
 
 	if (ipc6.tclass < 0)
@@ -923,7 +936,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		goto do_confirm;
 
 back_from_confirm:
-	if (inet->hdrincl)
+	if (hdrincl)
 		err = rawv6_send_hdrinc(sk, msg, len, &fl6, &dst, msg->msg_flags);
 	else {
 		ipc6.opt = opt;

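The comment in the rawv6_sendmsg() hunk explains the trick: `READ_ONCE()` cannot take the address of a C bit-field, so the value is first widened into a plain `int` and the single stable read is done on that copy; every later test then uses the snapshot instead of re-reading the racy field. A userspace sketch of the two-step read, with READ_ONCE modeled as a volatile access (an assumption for illustration; the kernel macro is more general):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's READ_ONCE() on an int. */
#define READ_ONCE_INT(x) (*(volatile int *)&(x))

struct sock_opts {
	unsigned int hdrincl : 1;	/* bit-field: READ_ONCE() can't take its address */
};

/* Snapshot the flag exactly once; callers use the snapshot everywhere
 * after, so a concurrent setsockopt() can't flip it mid-call.
 */
static int snapshot_hdrincl(const struct sock_opts *opts)
{
	int hdrincl = opts->hdrincl;	/* widen the bit-field to a plain int */

	return READ_ONCE_INT(hdrincl);	/* single read of the copy */
}
```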
@@ -72,6 +72,8 @@ int llc_build_and_send_ui_pkt(struct llc_sap *sap, struct sk_buff *skb,
 	rc = llc_mac_hdr_init(skb, skb->dev->dev_addr, dmac);
 	if (likely(!rc))
 		rc = dev_queue_xmit(skb);
+	else
+		kfree_skb(skb);
 	return rc;
 }
 
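After the llc change, llc_build_and_send_ui_pkt() consumes the skb on every path: dev_queue_xmit() takes ownership on success, and kfree_skb() reclaims it when header setup fails, so no path leaks the buffer. The consume-on-all-paths ownership rule in a userspace sketch (`fake_xmit` and the `hdr_rc` parameter are stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Pretend transmit: consumes (frees) the buffer and reports success. */
static int fake_xmit(char *buf)
{
	free(buf);
	return 0;
}

/* Consumes 'buf' on every path: either xmit takes it, or we free it.
 * hdr_rc simulates the result of header initialization.
 */
static int build_and_send(char *buf, int hdr_rc)
{
	int rc = hdr_rc;

	if (rc == 0)
		rc = fake_xmit(buf);
	else
		free(buf);	/* failure path must not leak the buffer */
	return rc;
}
```

The caller can always treat the buffer as gone after the call, regardless of the return code.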
@@ -416,12 +416,14 @@ int rds_ib_flush_mr_pool(struct rds_ib_mr_pool *pool,
 	wait_clean_list_grace();
 
 	list_to_llist_nodes(pool, &unmap_list, &clean_nodes, &clean_tail);
-	if (ibmr_ret)
+	if (ibmr_ret) {
 		*ibmr_ret = llist_entry(clean_nodes, struct rds_ib_mr, llnode);
-
+		clean_nodes = clean_nodes->next;
+	}
+
 	/* more than one entry in llist nodes */
-	if (clean_nodes->next)
-		llist_add_batch(clean_nodes->next, clean_tail, &pool->clean_list);
-
+	if (clean_nodes)
+		llist_add_batch(clean_nodes, clean_tail,
+				&pool->clean_list);
 }
 
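The rds fix hands the first clean node to the caller, advances `clean_nodes` past it, and re-adds only the possibly-empty remainder, checking the new head for NULL instead of unconditionally dereferencing `clean_nodes->next`. The same take-first-then-requeue shape on a plain singly linked list (hypothetical names, not the kernel llist API):

```c
#include <assert.h>
#include <stddef.h>

struct node {
	struct node *next;
};

/* Hand the first node to the caller (if one was requested) and return
 * the head of what should go back on the pool's clean list; the returned
 * remainder may be NULL and the caller must check it before requeueing.
 */
static struct node *take_first(struct node *clean, struct node **out)
{
	if (out) {
		*out = clean;
		clean = clean ? clean->next : NULL;	/* skip the taken node */
	}
	return clean;
}
```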
@@ -62,10 +62,6 @@ static int __net_init tipc_init_net(struct net *net)
 	INIT_LIST_HEAD(&tn->node_list);
 	spin_lock_init(&tn->node_list_lock);
 
-	err = tipc_socket_init();
-	if (err)
-		goto out_socket;
-
 	err = tipc_sk_rht_init(net);
 	if (err)
 		goto out_sk_rht;
@@ -75,9 +71,6 @@ static int __net_init tipc_init_net(struct net *net)
 		goto out_nametbl;
 
 	INIT_LIST_HEAD(&tn->dist_queue);
-	err = tipc_topsrv_start(net);
-	if (err)
-		goto out_subscr;
 
 	err = tipc_bcast_init(net);
 	if (err)
@@ -86,25 +79,19 @@ static int __net_init tipc_init_net(struct net *net)
 	return 0;
 
 out_bclink:
-	tipc_bcast_stop(net);
-out_subscr:
 	tipc_nametbl_stop(net);
 out_nametbl:
 	tipc_sk_rht_destroy(net);
 out_sk_rht:
-	tipc_socket_stop();
-out_socket:
 	return err;
 }
 
 static void __net_exit tipc_exit_net(struct net *net)
 {
-	tipc_topsrv_stop(net);
 	tipc_net_stop(net);
 	tipc_bcast_stop(net);
 	tipc_nametbl_stop(net);
 	tipc_sk_rht_destroy(net);
-	tipc_socket_stop();
 }
 
 static struct pernet_operations tipc_net_ops = {
@@ -114,6 +101,11 @@ static struct pernet_operations tipc_net_ops = {
 	.size = sizeof(struct tipc_net),
 };
 
+static struct pernet_operations tipc_topsrv_net_ops = {
+	.init = tipc_topsrv_init_net,
+	.exit = tipc_topsrv_exit_net,
+};
+
 static int __init tipc_init(void)
 {
 	int err;
@@ -140,6 +132,14 @@ static int __init tipc_init(void)
 	if (err)
 		goto out_pernet;
 
+	err = tipc_socket_init();
+	if (err)
+		goto out_socket;
+
+	err = register_pernet_subsys(&tipc_topsrv_net_ops);
+	if (err)
+		goto out_pernet_topsrv;
+
 	err = tipc_bearer_setup();
 	if (err)
 		goto out_bearer;
@@ -147,6 +147,10 @@ static int __init tipc_init(void)
 	pr_info("Started in single node mode\n");
 	return 0;
 out_bearer:
+	unregister_pernet_subsys(&tipc_topsrv_net_ops);
+out_pernet_topsrv:
+	tipc_socket_stop();
+out_socket:
 	unregister_pernet_subsys(&tipc_net_ops);
 out_pernet:
 	tipc_unregister_sysctl();
@@ -162,6 +166,8 @@ out_netlink:
 static void __exit tipc_exit(void)
 {
 	tipc_bearer_cleanup();
+	unregister_pernet_subsys(&tipc_topsrv_net_ops);
+	tipc_socket_stop();
 	unregister_pernet_subsys(&tipc_net_ops);
 	tipc_netlink_stop();
 	tipc_netlink_compat_stop();

@@ -358,7 +358,7 @@ static void *tipc_subscrb_connect_cb(int conid)
 	return (void *)tipc_subscrb_create(conid);
 }
 
-int tipc_topsrv_start(struct net *net)
+static int tipc_topsrv_start(struct net *net)
 {
 	struct tipc_net *tn = net_generic(net, tipc_net_id);
 	const char name[] = "topology_server";
@@ -396,7 +396,7 @@ int tipc_topsrv_start(struct net *net)
 	return tipc_server_start(topsrv);
 }
 
-void tipc_topsrv_stop(struct net *net)
+static void tipc_topsrv_stop(struct net *net)
 {
 	struct tipc_net *tn = net_generic(net, tipc_net_id);
 	struct tipc_server *topsrv = tn->topsrv;
@@ -405,3 +405,13 @@ void tipc_topsrv_stop(struct net *net)
 	kfree(topsrv->saddr);
 	kfree(topsrv);
 }
+
+int __net_init tipc_topsrv_init_net(struct net *net)
+{
+	return tipc_topsrv_start(net);
+}
+
+void __net_exit tipc_topsrv_exit_net(struct net *net)
+{
+	tipc_topsrv_stop(net);
+}

@@ -75,7 +75,8 @@ void tipc_subscrp_report_overlap(struct tipc_subscription *sub,
 void tipc_subscrp_convert_seq(struct tipc_name_seq *in, int swap,
 			      struct tipc_name_seq *out);
 u32 tipc_subscrp_convert_seq_type(u32 type, int swap);
-int tipc_topsrv_start(struct net *net);
-void tipc_topsrv_stop(struct net *net);
+
+int __net_init tipc_topsrv_init_net(struct net *net);
+void __net_exit tipc_topsrv_exit_net(struct net *net);
 
 #endif

@@ -0,0 +1,363 @@
+// SPDX-License-Identifier: GPL-2.0
+// Author: Kirill Smelkov (kirr@nexedi.com)
+//
+// Search for stream-like files that are using nonseekable_open and convert
+// them to stream_open. A stream-like file is a file that does not use ppos in
+// its read and write. Rationale for the conversion is to avoid deadlock in
+// between read and write.
+
+virtual report
+virtual patch
+virtual explain  // explain decisions in the patch (SPFLAGS="-D explain")
+
+// stream-like reader & writer - ones that do not depend on f_pos.
+@ stream_reader @
+identifier readstream, ppos;
+identifier f, buf, len;
+type loff_t;
+@@
+  ssize_t readstream(struct file *f, char *buf, size_t len, loff_t *ppos)
+  {
+    ... when != ppos
+  }
+
+@ stream_writer @
+identifier writestream, ppos;
+identifier f, buf, len;
+type loff_t;
+@@
+  ssize_t writestream(struct file *f, const char *buf, size_t len, loff_t *ppos)
+  {
+    ... when != ppos
+  }
+
+
+// a function that blocks
+@ blocks @
+identifier block_f;
+identifier wait_event =~ "^wait_event_.*";
+@@
+  block_f(...) {
+    ... when exists
+    wait_event(...)
+    ... when exists
+  }
+
+// stream_reader that can block inside.
+//
+// XXX wait_* can be called not directly from current function (e.g. func -> f -> g -> wait())
+// XXX currently reader_blocks supports only direct and 1-level indirect cases.
+@ reader_blocks_direct @
+identifier stream_reader.readstream;
+identifier wait_event =~ "^wait_event_.*";
+@@
+  readstream(...)
+  {
+    ... when exists
+    wait_event(...)
+    ... when exists
+  }
+
+@ reader_blocks_1 @
+identifier stream_reader.readstream;
+identifier blocks.block_f;
+@@
+  readstream(...)
+  {
+    ... when exists
+    block_f(...)
+    ... when exists
+  }
+
+@ reader_blocks depends on reader_blocks_direct || reader_blocks_1 @
+identifier stream_reader.readstream;
+@@
+  readstream(...) {
+    ...
+  }
+
+
+// file_operations + whether they have _any_ .read, .write, .llseek ... at all.
+//
+// XXX add support for file_operations xxx[N] = ... (sound/core/pcm_native.c)
+@ fops0 @
+identifier fops;
+@@
+  struct file_operations fops = {
+    ...
+  };
+
+@ has_read @
+identifier fops0.fops;
+identifier read_f;
+@@
+  struct file_operations fops = {
+    .read = read_f,
+  };
+
+@ has_read_iter @
+identifier fops0.fops;
+identifier read_iter_f;
+@@
+  struct file_operations fops = {
+    .read_iter = read_iter_f,
+  };
+
+@ has_write @
+identifier fops0.fops;
+identifier write_f;
+@@
+  struct file_operations fops = {
+    .write = write_f,
+  };
+
+@ has_write_iter @
+identifier fops0.fops;
+identifier write_iter_f;
+@@
+  struct file_operations fops = {
+    .write_iter = write_iter_f,
+  };
+
+@ has_llseek @
+identifier fops0.fops;
+identifier llseek_f;
+@@
+  struct file_operations fops = {
+    .llseek = llseek_f,
+  };
+
+@ has_no_llseek @
+identifier fops0.fops;
+@@
+  struct file_operations fops = {
+    .llseek = no_llseek,
+  };
+
+@ has_mmap @
+identifier fops0.fops;
+identifier mmap_f;
+@@
+  struct file_operations fops = {
+    .mmap = mmap_f,
+  };
+
+@ has_copy_file_range @
+identifier fops0.fops;
+identifier copy_file_range_f;
+@@
+  struct file_operations fops = {
+    .copy_file_range = copy_file_range_f,
+  };
+
+@ has_remap_file_range @
+identifier fops0.fops;
+identifier remap_file_range_f;
+@@
+  struct file_operations fops = {
+    .remap_file_range = remap_file_range_f,
+  };
+
+@ has_splice_read @
+identifier fops0.fops;
+identifier splice_read_f;
+@@
+  struct file_operations fops = {
+    .splice_read = splice_read_f,
+  };
+
+@ has_splice_write @
+identifier fops0.fops;
+identifier splice_write_f;
+@@
+  struct file_operations fops = {
+    .splice_write = splice_write_f,
+  };
+
+
+// file_operations that is candidate for stream_open conversion - it does not
+// use mmap and other methods that assume @offset access to file.
+//
+// XXX for simplicity require no .{read/write}_iter and no .splice_{read/write} for now.
+// XXX maybe_steam.fops cannot be used in other rules - it gives "bad rule maybe_stream or bad variable fops".
+@ maybe_stream depends on (!has_llseek || has_no_llseek) && !has_mmap && !has_copy_file_range && !has_remap_file_range && !has_read_iter && !has_write_iter && !has_splice_read && !has_splice_write @
+identifier fops0.fops;
+@@
+  struct file_operations fops = {
+  };
+
+
+// ---- conversions ----
+
+// XXX .open = nonseekable_open -> .open = stream_open
+// XXX .open = func -> openfunc -> nonseekable_open
+
+// read & write
+//
+// if both are used in the same file_operations together with an opener -
+// under that conditions we can use stream_open instead of nonseekable_open.
+@ fops_rw depends on maybe_stream @
+identifier fops0.fops, openfunc;
+identifier stream_reader.readstream;
+identifier stream_writer.writestream;
+@@
+  struct file_operations fops = {
+      .open  = openfunc,
+      .read  = readstream,
+      .write = writestream,
+  };
+
+@ report_rw depends on report @
+identifier fops_rw.openfunc;
+position p1;
+@@
+  openfunc(...) {
+    <...
+     nonseekable_open@p1
+    ...>
+  }
+
+@ script:python depends on report && reader_blocks @
+fops << fops0.fops;
+p << report_rw.p1;
+@@
+coccilib.report.print_report(p[0],
+  "ERROR: %s: .read() can deadlock .write(); change nonseekable_open -> stream_open to fix." % (fops,))
+
+@ script:python depends on report && !reader_blocks @
+fops << fops0.fops;
+p << report_rw.p1;
+@@
+coccilib.report.print_report(p[0],
+  "WARNING: %s: .read() and .write() have stream semantic; safe to change nonseekable_open -> stream_open." % (fops,))
+
+
+@ explain_rw_deadlocked depends on explain && reader_blocks @
+identifier fops_rw.openfunc;
+@@
+  openfunc(...) {
+    <...
+-    nonseekable_open
++    nonseekable_open /* read & write (was deadlock) */
+    ...>
+  }
+
+
+@ explain_rw_nodeadlock depends on explain && !reader_blocks @
+identifier fops_rw.openfunc;
+@@
+  openfunc(...) {
+    <...
+-    nonseekable_open
++    nonseekable_open /* read & write (no direct deadlock) */
+    ...>
+  }
+
+@ patch_rw depends on patch @
+identifier fops_rw.openfunc;
+@@
+  openfunc(...) {
+    <...
+-   nonseekable_open
++   stream_open
+    ...>
+  }
+
+
+// read, but not write
+@ fops_r depends on maybe_stream && !has_write @
+identifier fops0.fops, openfunc;
+identifier stream_reader.readstream;
+@@
+  struct file_operations fops = {
+      .open = openfunc,
+      .read = readstream,
+  };
+
+@ report_r depends on report @
+identifier fops_r.openfunc;
+position p1;
+@@
+  openfunc(...) {
+    <...
+    nonseekable_open@p1
+    ...>
+  }
+
+@ script:python depends on report @
+fops << fops0.fops;
+p << report_r.p1;
+@@
+coccilib.report.print_report(p[0],
+  "WARNING: %s: .read() has stream semantic; safe to change nonseekable_open -> stream_open." % (fops,))
+
+@ explain_r depends on explain @
+identifier fops_r.openfunc;
+@@
+  openfunc(...) {
+    <...
+-   nonseekable_open
++   nonseekable_open /* read only */
+    ...>
+  }
+
+@ patch_r depends on patch @
+identifier fops_r.openfunc;
+@@
+  openfunc(...) {
+    <...
+-   nonseekable_open
++   stream_open
+    ...>
+  }
+
+
+// write, but not read
+@ fops_w depends on maybe_stream && !has_read @
+identifier fops0.fops, openfunc;
+identifier stream_writer.writestream;
+@@
+  struct file_operations fops = {
+      .open  = openfunc,
+      .write = writestream,
+  };
+
+@ report_w depends on report @
+identifier fops_w.openfunc;
+position p1;
+@@
+  openfunc(...) {
+    <...
+    nonseekable_open@p1
+    ...>
+  }
+
+@ script:python depends on report @
+fops << fops0.fops;
+p << report_w.p1;
+@@
+coccilib.report.print_report(p[0],
+  "WARNING: %s: .write() has stream semantic; safe to change nonseekable_open -> stream_open." % (fops,))
+
+@ explain_w depends on explain @
+identifier fops_w.openfunc;
+@@
+  openfunc(...) {
+    <...
+-   nonseekable_open
++   nonseekable_open /* write only */
+    ...>
+  }
+
+@ patch_w depends on patch @
+identifier fops_w.openfunc;
+@@
+  openfunc(...) {
+    <...
+-   nonseekable_open
++   stream_open
+    ...>
+  }
+
+
+// no read, no write - don't change anything