Merge android-4.9.190 (476e7ea) into msm-4.9
* refs/heads/tmp-476e7ea:
  Linux 4.9.190
  bonding: Add vlan tx offload to hw_enc_features
  team: Add vlan tx offload to hw_enc_features
  net/mlx5e: Use flow keys dissector to parse packets for ARFS
  net/mlx5e: Only support tx/rx pause setting for port owner
  xen/netback: Reset nr_frags before freeing skb
  sctp: fix the transport error_count check
  net/packet: fix race in tpacket_snd()
  bnx2x: Fix VF's VLAN reconfiguration in reload.
  iommu/amd: Move iommu_init_pci() to .init section
  Input: psmouse - fix build error of multiple definition
  netfilter: conntrack: Use consistent ct id hash calculation
  arm64: compat: Allow single-byte watchpoints on all addresses
  bpf: fix bpf_jit_limit knob for PAGE_SIZE >= 64K
  asm-generic: fix -Wtype-limits compiler warnings
  USB: serial: option: Add Motorola modem UARTs
  USB: serial: option: add the BroadMobi BM818 card
  USB: serial: option: Add support for ZTE MF871A
  USB: serial: option: add D-Link DWM-222 device ID
  USB: CDC: fix sanity checks in CDC union parser
  usb: cdc-acm: make sure a refcount is taken early enough
  USB: core: Fix races in character device registration and deregistraion
  staging: comedi: dt3000: Fix rounding up of timer divisor
  staging: comedi: dt3000: Fix signed integer overflow 'divider * base'
  ocfs2: remove set but not used variable 'last_hash'
  IB/mad: Fix use-after-free in ib mad completion handling
  IB/core: Add mitigation for Spectre V1
  arm64/mm: fix variable 'pud' set but not used
  arm64/efi: fix variable 'si' set but not used
  kbuild: modpost: handle KBUILD_EXTRA_SYMBOLS only for external modules
  ata: libahci: do not complain in case of deferred probe
  scsi: hpsa: correct scsi command status issue after reset
  libata: zpodd: Fix small read overflow in zpodd_get_mech_type()
  perf header: Fix use of unitialized value warning
  perf header: Fix divide by zero error if f_header.attr_size==0
  irqchip/irq-imx-gpcv2: Forward irq type to parent
  xen/pciback: remove set but not used variable 'old_state'
  net: usb: pegasus: fix improper read if get_registers() fail
  Input: iforce - add sanity checks
  Input: kbtab - sanity check for endpoint type
  HID: hiddev: do cleanup in failure of opening a device
  HID: hiddev: avoid opening a disconnected device
  HID: holtek: test for sanity of intfdata
  ALSA: hda - Let all conexant codec enter D3 when rebooting
  ALSA: hda - Add a generic reboot_notify
  ALSA: hda - Fix a memory leak bug
  xtensa: add missing isync to the cpu_reset TLB code
  netfilter: ctnetlink: don't use conntrack/expect object addresses as id
  inet: switch IP ID generator to siphash
  siphash: implement HalfSipHash1-3 for hash tables
  siphash: add cryptographically secure PRF
  vhost: scsi: add weight support
  vhost_net: fix possible infinite loop
  vhost: introduce vhost_exceeds_weight()
  vhost_net: introduce vhost_exceeds_weight()
  vhost_net: use packet weight for rx handler, too
  vhost-net: set packet weight of tx polling to 2 * vq size
  bpf: add bpf_jit_limit knob to restrict unpriv allocations
  bpf: restrict access to core bpf sysctls
  bpf: get rid of pure_initcall dependency to enable jits
  mm/memcontrol.c: fix use after free in mem_cgroup_iter()
  mm/usercopy: use memory range to be accessed for wraparound check
  sh: kernel: hw_breakpoint: Fix missing break in switch statement
  scsi: mpt3sas: Use 63-bit DMA addressing on SAS35 HBA
  iwlwifi: don't unmap as page memory that was mapped as single
  mwifiex: fix 802.11n/WPA detection
  smb3: send CAP_DFS capability during session setup
  SMB3: Fix deadlock in validate negotiate hits reconnect
  mac80211: don't WARN on short WMM parameters from AP
  ALSA: hda - Don't override global PCM hw info flag
  ALSA: firewire: fix a memory leak bug
  hwmon: (nct7802) Fix wrong detection of in4 presence
  can: peak_usb: pcan_usb_fd: Fix info-leaks to USB devices
  can: peak_usb: pcan_usb_pro: Fix info-leaks to USB devices
  perf/core: Fix creating kernel counters for PMUs that override event->cpu
  tty/ldsem, locking/rwsem: Add missing ACQUIRE to read_failed sleep loop
  scsi: scsi_dh_alua: always use a 2 second delay before retrying RTPG
  scsi: ibmvfc: fix WARN_ON during event pool release
  scsi: megaraid_sas: fix panic on loading firmware crashdump
  ARM: davinci: fix sleep.S build error on ARMv4
  ACPI/IORT: Fix off-by-one check in iort_dev_find_its_id()
  drbd: dynamically allocate shash descriptor
  perf probe: Avoid calling freeing routine multiple times for same pointer
  ALSA: compress: Be more restrictive about when a drain is allowed
  ALSA: compress: Don't allow paritial drain operations on capture streams
  ALSA: compress: Prevent bypasses of set_params
  ALSA: compress: Fix regression on compressed capture streams
  s390/qdio: add sanity checks to the fast-requeue path
  cpufreq/pasemi: fix use-after-free in pas_cpufreq_cpu_init()
  hwmon: (nct6775) Fix register address and added missed tolerance for nct6106
  mac80211: don't warn about CW params when not using them
  iscsi_ibft: make ISCSI_IBFT dependson ACPI instead of ISCSI_IBFT_FIND
  netfilter: nfnetlink: avoid deadlock due to synchronous request_module
  can: peak_usb: fix potential double kfree_skb()
  usb: yurex: Fix use-after-free in yurex_delete
  perf record: Fix module size on s390
  perf db-export: Fix thread__exec_comm()
  perf record: Fix wrong size in perf_record_mmap for last kernel module
  mm/vmalloc: Sync unmappings in __purge_vmap_area_lazy()
  x86/mm: Sync also unmappings in vmalloc_sync_all()
  x86/mm: Check for pfn instead of page in vmalloc_sync_one()
  sound: fix a memory leak bug
  usb: iowarrior: fix deadlock on disconnect
  usb: usbfs: fix double-free of usb memory upon submiturb error
  BACKPORT: arch: add pidfd and io_uring syscalls everywhere
  ANDROID: arch: add missing pidfd_open definitions for arm32
  ANDROID: fix kernelci build-break in lowmemorykiller
  f2fs: fix build error on android tracepoints
  ANDROID: Avoid taking multiple locks in handle_lmk_event
  UPSTREAM: net/ipv6: allow sysctl to change link-local address generation mode
  ANDROID: fix binder change in merge of 4.9.188
  UPSTREAM: pidfd: fix a poll race when setting exit_state
  BACKPORT: arch: wire-up pidfd_open()
  BACKPORT: pid: add pidfd_open()
  UPSTREAM: pidfd: add polling support
  UPSTREAM: signal: improve comments
  BACKPORT: fork: do not release lock that wasn't taken
  BACKPORT: signal: support CLONE_PIDFD with pidfd_send_signal
  BACKPORT: clone: add CLONE_PIDFD
  UPSTREAM: Make anon_inodes unconditional
  UPSTREAM: signal: use fdget() since we don't allow O_PATH
  UPSTREAM: signal: don't silently convert SI_USER signals to non-current pidfd
  BACKPORT: signal: add pidfd_send_signal() syscall

Conflicts:
	drivers/staging/android/lowmemorykiller.c
	include/linux/ipv6.h
	net/ipv6/addrconf.c
	sound/core/compress_offload.c

Change-Id: I18be309a1a2fd17077b949c7b7113f407a9033a8
Signed-off-by: jianzhou <jianzhou@codeaurora.org>
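Among the backports merged here, the pidfd series wires up two new syscall numbers (424 for pidfd_send_signal, 434 for pidfd_open; see the unistd and syscall-table hunks below). As a rough illustration of what the new interface enables, a minimal userspace sketch, assuming a kernel carrying these backports and using raw syscall(2) since contemporary libcs ship no wrappers:

/* Usage sketch for the pidfd syscalls wired up in this merge.
 * The numbers 424/434 come from the syscall-table hunks below;
 * this is illustrative, not part of the patch series itself. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open		434
#endif
#ifndef __NR_pidfd_send_signal
#define __NR_pidfd_send_signal	424
#endif

int main(int argc, char **argv)
{
	pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : getpid();
	long pidfd;

	/* Take a stable handle on the process; with the polling-support
	 * patch applied, this fd also becomes poll()able for exit. */
	pidfd = syscall(__NR_pidfd_open, pid, 0);
	if (pidfd < 0) {
		perror("pidfd_open");
		return 1;
	}

	/* Signal 0 probes the process through the fd, with no
	 * PID-reuse race (the point of the pidfd design). */
	if (syscall(__NR_pidfd_send_signal, (int)pidfd, 0, NULL, 0) < 0)
		perror("pidfd_send_signal");

	close((int)pidfd);
	return 0;
}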
@@ -54,6 +54,14 @@ Values :
 	1 - enable JIT hardening for unprivileged users only
 	2 - enable JIT hardening for all users

+bpf_jit_limit
+-------------
+
+This enforces a global limit for memory allocations to the BPF JIT
+compiler in order to reject unprivileged JIT requests once it has
+been surpassed. bpf_jit_limit contains the value of the global limit
+in bytes.
+
 dev_weight
 --------------
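As a quick illustration of the knobs documented above, a minimal sketch that reads them back from procfs, assuming a kernel built with CONFIG_BPF_JIT (the paths mirror the sysctl names under net.core in the hunk):

/* Minimal sketch: read back the BPF JIT sysctls described above.
 * Illustrative only; the procfs paths follow the standard
 * net.core sysctl layout referenced by this series. */
#include <stdio.h>

static long read_knob(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;	/* knob absent, e.g. JIT not built in */
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* global byte budget enforced against unprivileged JIT users */
	printf("bpf_jit_limit:  %ld\n",
	       read_knob("/proc/sys/net/core/bpf_jit_limit"));
	/* 0/1/2 hardening level from the values list above */
	printf("bpf_jit_harden: %ld\n",
	       read_knob("/proc/sys/net/core/bpf_jit_harden"));
	return 0;
}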
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,6 @@
 VERSION = 4
 PATCHLEVEL = 9
-SUBLEVEL = 189
+SUBLEVEL = 190
 EXTRAVERSION =
 NAME = Roaring Lionus
@@ -423,6 +423,8 @@
 #define __NR_pkey_mprotect	(__NR_SYSCALL_BASE+394)
 #define __NR_pkey_alloc		(__NR_SYSCALL_BASE+395)
 #define __NR_pkey_free		(__NR_SYSCALL_BASE+396)
+#define __NR_pidfd_send_signal	(__NR_SYSCALL_BASE+424)
+#define __NR_pidfd_open		(__NR_SYSCALL_BASE+434)

 /*
  * The following SWIs are ARM private.
@@ -406,6 +406,8 @@
 		CALL(sys_pkey_mprotect)
 /* 395 */	CALL(sys_pkey_alloc)
 		CALL(sys_pkey_free)
+		CALL(sys_pidfd_send_signal)
+		CALL(sys_pidfd_open)
 #ifndef syscalls_counted
 .equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
 #define syscalls_counted
@@ -20,7 +20,6 @@ config KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
 	depends on MMU && OF
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select ARM_GIC
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
@@ -37,6 +37,7 @@
 #define DEEPSLEEP_SLEEPENABLE_BIT	BIT(31)

 	.text
+	.arch	armv5te
 /*
  * Move DaVinci into deep sleep state
  *
@@ -72,8 +72,6 @@ struct jit_ctx {
 #endif
 };

-int bpf_jit_enable __read_mostly;
-
 static inline int call_neg_helper(struct sk_buff *skb, int offset, void *ret,
 		      unsigned int size)
 {
@@ -53,7 +53,11 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
 #define efi_is_64bit()			(true)

 #define alloc_screen_info(x...)		&screen_info
-#define free_screen_info(x...)
+
+static inline void free_screen_info(efi_system_table_t *sys_table_arg,
+				    struct screen_info *si)
+{
+}

 /* redeclare as 'hidden' so the compiler will generate relative references */
 extern struct screen_info screen_info __attribute__((__visibility__("hidden")));
@@ -417,8 +417,8 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
				 PMD_TYPE_SECT)

 #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
-#define pud_sect(pud)		(0)
-#define pud_table(pud)		(1)
+static inline bool pud_sect(pud_t pud) { return false; }
+static inline bool pud_table(pud_t pud) { return true; }
 #else
 #define pud_sect(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
				 PUD_TYPE_SECT)
@@ -44,7 +44,7 @@
 #define __ARM_NR_compat_cacheflush	(__ARM_NR_COMPAT_BASE+2)
 #define __ARM_NR_compat_set_tls	(__ARM_NR_COMPAT_BASE+5)

-#define __NR_compat_syscalls		394
+#define __NR_compat_syscalls		435
 #endif

 #define __ARCH_WANT_SYS_CLONE
@@ -809,6 +809,10 @@ __SYSCALL(__NR_copy_file_range, sys_copy_file_range)
 __SYSCALL(__NR_preadv2, compat_sys_preadv2)
 #define __NR_pwritev2 393
 __SYSCALL(__NR_pwritev2, compat_sys_pwritev2)
+#define __NR_pidfd_send_signal 424
+__SYSCALL(__NR_pidfd_send_signal, sys_pidfd_send_signal)
+#define __NR_pidfd_open 434
+__SYSCALL(__NR_pidfd_open, sys_pidfd_open)

 /*
  * Please add new compat syscalls above this comment and update
@@ -548,13 +548,14 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
 			/* Aligned */
 			break;
 		case 1:
-			/* Allow single byte watchpoint. */
-			if (info->ctrl.len == ARM_BREAKPOINT_LEN_1)
-				break;
 		case 2:
 			/* Allow halfword watchpoints and breakpoints. */
 			if (info->ctrl.len == ARM_BREAKPOINT_LEN_2)
 				break;
+		case 3:
+			/* Allow single byte watchpoint. */
+			if (info->ctrl.len == ARM_BREAKPOINT_LEN_1)
+				break;
 		default:
 			return -EINVAL;
 		}
@@ -24,7 +24,6 @@ config KVM
 	depends on OF
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	select KVM_MMIO
@@ -30,8 +30,6 @@

 #include "bpf_jit.h"

-int bpf_jit_enable __read_mostly;
-
 #define TMP_REG_1 (MAX_BPF_JIT_REG + 0)
 #define TMP_REG_2 (MAX_BPF_JIT_REG + 1)
 #define TCALL_CNT (MAX_BPF_JIT_REG + 2)
@@ -19,7 +19,6 @@ config KVM
 	depends on HAVE_KVM
 	select EXPORT_UASM
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select KVM_MMIO
 	select SRCU
 	---help---
@@ -1194,8 +1194,6 @@ jmp_cmp:
 	return 0;
 }

-int bpf_jit_enable __read_mostly;
-
 void bpf_jit_compile(struct bpf_prog *fp)
 {
 	struct jit_ctx ctx;
@@ -19,7 +19,6 @@ if VIRTUALIZATION
 config KVM
 	bool
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select HAVE_KVM_EVENTFD
 	select SRCU
 	select KVM_VFIO
@@ -18,8 +18,6 @@

 #include "bpf_jit32.h"

-int bpf_jit_enable __read_mostly;
-
 static inline void bpf_flush_icache(void *start, void *end)
 {
 	smp_wmb();
@@ -21,8 +21,6 @@

 #include "bpf_jit64.h"

-int bpf_jit_enable __read_mostly;
-
 static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
 {
 	int *p = area;
@@ -20,7 +20,6 @@ config KVM
 	prompt "Kernel-based Virtual Machine (KVM) support"
 	depends on HAVE_KVM
 	select PREEMPT_NOTIFIERS
-	select ANON_INODES
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_EVENTFD
 	select KVM_ASYNC_PF
@@ -28,8 +28,6 @@
 #include <asm/nospec-branch.h>
 #include "bpf_jit.h"

-int bpf_jit_enable __read_mostly;
-
 struct bpf_jit {
 	u32 seen;		/* Flags to remember seen eBPF instructions */
 	u32 seen_reg[16];	/* Array to remember which registers are used */
@@ -160,6 +160,7 @@ int arch_bp_generic_fields(int sh_len, int sh_type,
 	switch (sh_type) {
 	case SH_BREAKPOINT_READ:
 		*gen_type = HW_BREAKPOINT_R;
+		break;
 	case SH_BREAKPOINT_WRITE:
 		*gen_type = HW_BREAKPOINT_W;
 		break;
@@ -10,8 +10,6 @@

 #include "bpf_jit.h"

-int bpf_jit_enable __read_mostly;
-
 static inline bool is_simm13(unsigned int value)
 {
 	return value + 0x1000 < 0x2000;
@@ -19,7 +19,6 @@ config X86
 	def_bool y
 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
-	select ANON_INODES
 	select ARCH_CLOCKSOURCE_DATA
 	select ARCH_DISCARD_MEMBLOCK
 	select ARCH_HAS_ACPI_TABLE_UPGRADE	if ACPI
@@ -389,3 +389,5 @@
 380	i386	pkey_mprotect		sys_pkey_mprotect
 381	i386	pkey_alloc		sys_pkey_alloc
 382	i386	pkey_free		sys_pkey_free
+424	i386	pidfd_send_signal	sys_pidfd_send_signal
+434	i386	pidfd_open		sys_pidfd_open
@@ -338,6 +338,8 @@
 329	common	pkey_mprotect		sys_pkey_mprotect
 330	common	pkey_alloc		sys_pkey_alloc
 331	common	pkey_free		sys_pkey_free
+424	common	pidfd_send_signal	sys_pidfd_send_signal
+434	common	pidfd_open		sys_pidfd_open

 #
 # x32-specific system call numbers start at 512 to avoid cache impact
@@ -26,7 +26,6 @@ config KVM
 	depends on X86_LOCAL_APIC
 	select PREEMPT_NOTIFIERS
 	select MMU_NOTIFIER
-	select ANON_INODES
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQFD
 	select IRQ_BYPASS_MANAGER
@@ -273,13 +273,14 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)

 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
+
+	if (pmd_present(*pmd) != pmd_present(*pmd_k))
+		set_pmd(pmd, *pmd_k);
+
 	if (!pmd_present(*pmd_k))
 		return NULL;
-
-	if (!pmd_present(*pmd))
-		set_pmd(pmd, *pmd_k);
 	else
-		BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
+		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));

 	return pmd_k;
 }
@@ -299,17 +300,13 @@ void vmalloc_sync_all(void)
 		spin_lock(&pgd_lock);
 		list_for_each_entry(page, &pgd_list, lru) {
 			spinlock_t *pgt_lock;
-			pmd_t *ret;

 			/* the pgt_lock only for Xen */
 			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;

 			spin_lock(pgt_lock);
-			ret = vmalloc_sync_one(page_address(page), address);
+			vmalloc_sync_one(page_address(page), address);
 			spin_unlock(pgt_lock);
-
-			if (!ret)
-				break;
 		}
 		spin_unlock(&pgd_lock);
 	}
@@ -15,8 +15,6 @@
 #include <asm/nospec-branch.h>
 #include <linux/bpf.h>

-int bpf_jit_enable __read_mostly;
-
 /*
  * assembly code in arch/x86/net/bpf_jit.S
  */
@@ -626,6 +626,7 @@ void cpu_reset(void)
 			"add	%2, %2, %7\n\t"
 			"addi	%0, %0, -1\n\t"
 			"bnez	%0, 1b\n\t"
+			"isync\n\t"
 			/* Jump to identity mapping */
 			"jx	%3\n"
 			"2:\n\t"
@@ -324,8 +324,8 @@ static int iort_dev_find_its_id(struct device *dev, u32 req_id,

 	/* Move to ITS specific data */
 	its = (struct acpi_iort_its_group *)node->node_data;
-	if (idx > its->its_count) {
-		dev_err(dev, "requested ITS ID index [%d] is greater than available [%d]\n",
+	if (idx >= its->its_count) {
+		dev_err(dev, "requested ITS ID index [%d] overruns ITS entries [%d]\n",
 			idx, its->its_count);
 		return -ENXIO;
 	}
@@ -219,6 +219,11 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,

 	if (mm) {
 		down_read(&mm->mmap_sem);
+		if (!mmget_still_valid(mm)) {
+			if (allocate == 0)
+				goto free_range;
+			goto err_no_vma;
+		}
 		vma = alloc->vma;
 	}
@@ -300,6 +300,9 @@ static int ahci_platform_get_phy(struct ahci_host_priv *hpriv, u32 port,
 		hpriv->phys[port] = NULL;
 		rc = 0;
 		break;
+	case -EPROBE_DEFER:
+		/* Do not complain yet */
+		break;

 	default:
 		dev_err(dev,
@@ -55,7 +55,7 @@ static enum odd_mech_type zpodd_get_mech_type(struct ata_device *dev)
 	unsigned int ret;
 	struct rm_feature_desc *desc;
 	struct ata_taskfile tf;
-	static const char cdb[] = {  GPCMD_GET_CONFIGURATION,
+	static const char cdb[ATAPI_CDB_LEN] = {  GPCMD_GET_CONFIGURATION,
 			2,      /* only 1 feature descriptor requested */
 			0, 3,   /* 3, removable medium feature */
 			0, 0, 0,/* reserved */
@@ -251,7 +251,6 @@ source "drivers/base/regmap/Kconfig"
 config DMA_SHARED_BUFFER
 	bool
 	default n
-	select ANON_INODES
 	help
 	  This option enables the framework for buffer-sharing between
 	  multiple drivers. A buffer is associated with a file using driver
@@ -5297,7 +5297,7 @@ static int drbd_do_auth(struct drbd_connection *connection)
 	unsigned int key_len;
 	char secret[SHARED_SECRET_MAX]; /* 64 byte */
 	unsigned int resp_size;
-	SHASH_DESC_ON_STACK(desc, connection->cram_hmac_tfm);
+	struct shash_desc *desc;
 	struct packet_info pi;
 	struct net_conf *nc;
 	int err, rv;
@@ -5310,6 +5310,13 @@ static int drbd_do_auth(struct drbd_connection *connection)
 	memcpy(secret, nc->shared_secret, key_len);
 	rcu_read_unlock();

+	desc = kmalloc(sizeof(struct shash_desc) +
+		       crypto_shash_descsize(connection->cram_hmac_tfm),
+		       GFP_KERNEL);
+	if (!desc) {
+		rv = -1;
+		goto fail;
+	}
 	desc->tfm = connection->cram_hmac_tfm;
 	desc->flags = 0;
@@ -5452,7 +5459,10 @@ static int drbd_do_auth(struct drbd_connection *connection)
 	kfree(peers_ch);
 	kfree(response);
 	kfree(right_response);
-	shash_desc_zero(desc);
+	if (desc) {
+		shash_desc_zero(desc);
+		kfree(desc);
+	}

 	return rv;
 }
@@ -144,7 +144,6 @@ config TCG_CRB
 config TCG_VTPM_PROXY
 	tristate "VTPM Proxy Interface"
 	depends on TCG_TPM
-	select ANON_INODES
 	---help---
 	  This driver proxies for an emulated TPM (vTPM) running in userspace.
 	  A device /dev/vtpmx is provided that creates a device pair
@@ -145,11 +145,19 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	int err = -ENODEV;

 	cpu = of_get_cpu_node(policy->cpu, NULL);
-	of_node_put(cpu);
 	if (!cpu)
 		goto out;

+	max_freqp = of_get_property(cpu, "clock-frequency", NULL);
+	of_node_put(cpu);
+	if (!max_freqp) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	/* we need the freq in kHz */
+	max_freq = *max_freqp / 1000;
+
 	dn = of_find_compatible_node(NULL, NULL, "1682m-sdc");
 	if (!dn)
 		dn = of_find_compatible_node(NULL, NULL,
@@ -185,16 +193,6 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	}

 	pr_debug("init cpufreq on CPU %d\n", policy->cpu);
-
-	max_freqp = of_get_property(cpu, "clock-frequency", NULL);
-	if (!max_freqp) {
-		err = -EINVAL;
-		goto out_unmap_sdcpwr;
-	}
-
-	/* we need the freq in kHz */
-	max_freq = *max_freqp / 1000;
-
 	pr_debug("max clock-frequency is at %u kHz\n", max_freq);
 	pr_debug("initializing frequency table\n");
@@ -212,9 +210,6 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)

 	return cpufreq_generic_init(policy, pas_freqs, get_gizmo_latency());

-out_unmap_sdcpwr:
-	iounmap(sdcpwr_mapbase);
-
 out_unmap_sdcasr:
 	iounmap(sdcasr_mapbase);
 out:
@@ -3,7 +3,6 @@ menu "DMABUF options"
 config SYNC_FILE
 	bool "Explicit Synchronization Framework"
 	default n
-	select ANON_INODES
 	select DMA_SHARED_BUFFER
 	---help---
 	  The Sync File Framework adds explicit syncronization via
@@ -144,7 +144,7 @@ config DMI_SCAN_MACHINE_NON_EFI_FALLBACK

 config ISCSI_IBFT_FIND
 	bool "iSCSI Boot Firmware Table Attributes"
-	depends on X86 && ACPI
+	depends on X86 && ISCSI_IBFT
 	default n
 	help
 	  This option enables the kernel to find the region of memory
@@ -155,7 +155,8 @@ config ISCSI_IBFT_FIND
 config ISCSI_IBFT
 	tristate "iSCSI Boot Firmware Table Attributes module"
 	select ISCSI_BOOT_SYSFS
-	depends on ISCSI_IBFT_FIND && SCSI && SCSI_LOWLEVEL
+	select ISCSI_IBFT_FIND if X86
+	depends on ACPI && SCSI && SCSI_LOWLEVEL
 	default n
 	help
 	  This option enables support for detection and exposing of iSCSI
@@ -93,6 +93,10 @@ MODULE_DESCRIPTION("sysfs interface to BIOS iBFT information");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(IBFT_ISCSI_VERSION);

+#ifndef CONFIG_ISCSI_IBFT_FIND
+struct acpi_table_ibft *ibft_addr;
+#endif
+
 struct ibft_hdr {
 	u8 id;
 	u8 version;
@@ -12,7 +12,6 @@ config ARCH_HAVE_CUSTOM_GPIO_H

 menuconfig GPIOLIB
 	bool "GPIO Support"
-	select ANON_INODES
 	help
 	  This enables GPIO support through the generic GPIO library.
 	  You only need to enable this, if you also want to enable
@@ -126,9 +126,14 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type,

 	/* Locate the boot interface, to receive the LED change events */
 	struct usb_interface *boot_interface = usb_ifnum_to_if(usb_dev, 0);
+	struct hid_device *boot_hid;
+	struct hid_input *boot_hid_input;

-	struct hid_device *boot_hid = usb_get_intfdata(boot_interface);
-	struct hid_input *boot_hid_input = list_first_entry(&boot_hid->inputs,
+	if (unlikely(boot_interface == NULL))
+		return -ENODEV;
+
+	boot_hid = usb_get_intfdata(boot_interface);
+	boot_hid_input = list_first_entry(&boot_hid->inputs,
 		struct hid_input, list);

 	return boot_hid_input->input->event(boot_hid_input->input, type, code,
@@ -308,6 +308,14 @@ static int hiddev_open(struct inode *inode, struct file *file)
 	spin_unlock_irq(&list->hiddev->list_lock);

 	mutex_lock(&hiddev->existancelock);
+	/*
+	 * recheck exist with existance lock held to
+	 * avoid opening a disconnected device
+	 */
+	if (!list->hiddev->exist) {
+		res = -ENODEV;
+		goto bail_unlock;
+	}
 	if (!list->hiddev->open++)
 		if (list->hiddev->exist) {
 			struct hid_device *hid = hiddev->hid;
@@ -322,6 +330,10 @@ static int hiddev_open(struct inode *inode, struct file *file)
 	return 0;
 bail_unlock:
 	mutex_unlock(&hiddev->existancelock);
+
+	spin_lock_irq(&list->hiddev->list_lock);
+	list_del(&list->node);
+	spin_unlock_irq(&list->hiddev->list_lock);
 bail:
 	file->private_data = NULL;
 	vfree(list);
@@ -698,7 +698,7 @@ static const u16 NCT6106_REG_TARGET[] = { 0x111, 0x121, 0x131 };
 static const u16 NCT6106_REG_WEIGHT_TEMP_SEL[] = { 0x168, 0x178, 0x188 };
 static const u16 NCT6106_REG_WEIGHT_TEMP_STEP[] = { 0x169, 0x179, 0x189 };
 static const u16 NCT6106_REG_WEIGHT_TEMP_STEP_TOL[] = { 0x16a, 0x17a, 0x18a };
-static const u16 NCT6106_REG_WEIGHT_DUTY_STEP[] = { 0x16b, 0x17b, 0x17c };
+static const u16 NCT6106_REG_WEIGHT_DUTY_STEP[] = { 0x16b, 0x17b, 0x18b };
 static const u16 NCT6106_REG_WEIGHT_TEMP_BASE[] = { 0x16c, 0x17c, 0x18c };
 static const u16 NCT6106_REG_WEIGHT_DUTY_BASE[] = { 0x16d, 0x17d, 0x18d };
@@ -3481,6 +3481,7 @@ static int nct6775_probe(struct platform_device *pdev)
 		data->REG_FAN_TIME[0] = NCT6106_REG_FAN_STOP_TIME;
 		data->REG_FAN_TIME[1] = NCT6106_REG_FAN_STEP_UP_TIME;
 		data->REG_FAN_TIME[2] = NCT6106_REG_FAN_STEP_DOWN_TIME;
+		data->REG_TOLERANCE_H = NCT6106_REG_TOLERANCE_H;
 		data->REG_PWM[0] = NCT6106_REG_PWM;
 		data->REG_PWM[1] = NCT6106_REG_FAN_START_OUTPUT;
 		data->REG_PWM[2] = NCT6106_REG_FAN_STOP_OUTPUT;
@@ -768,7 +768,7 @@ static struct attribute *nct7802_in_attrs[] = {
 	&sensor_dev_attr_in3_alarm.dev_attr.attr,
 	&sensor_dev_attr_in3_beep.dev_attr.attr,

-	&sensor_dev_attr_in4_input.dev_attr.attr,	/* 17 */
+	&sensor_dev_attr_in4_input.dev_attr.attr,	/* 16 */
 	&sensor_dev_attr_in4_min.dev_attr.attr,
 	&sensor_dev_attr_in4_max.dev_attr.attr,
 	&sensor_dev_attr_in4_alarm.dev_attr.attr,
@@ -794,9 +794,9 @@ static umode_t nct7802_in_is_visible(struct kobject *kobj,

 	if (index >= 6 && index < 11 && (reg & 0x03) != 0x03)	/* VSEN1 */
 		return 0;
-	if (index >= 11 && index < 17 && (reg & 0x0c) != 0x0c)	/* VSEN2 */
+	if (index >= 11 && index < 16 && (reg & 0x0c) != 0x0c)	/* VSEN2 */
 		return 0;
-	if (index >= 17 && (reg & 0x30) != 0x30)	/* VSEN3 */
+	if (index >= 16 && (reg & 0x30) != 0x30)	/* VSEN3 */
 		return 0;

 	return attr->mode;
@@ -4,7 +4,6 @@

 menuconfig IIO
 	tristate "Industrial I/O support"
-	select ANON_INODES
 	help
 	  The industrial I/O subsystem provides a unified framework for
 	  drivers for many different types of embedded sensors using a
@@ -24,7 +24,6 @@ config INFINIBAND_USER_MAD

 config INFINIBAND_USER_ACCESS
 	tristate "InfiniBand userspace access (verbs and CM)"
-	select ANON_INODES
 	---help---
 	  Userspace InfiniBand access support.  This enables the
 	  kernel side of userspace verbs and the userspace
@@ -3155,18 +3155,18 @@ static int ib_mad_port_open(struct ib_device *device,
 	if (has_smi)
 		cq_size *= 2;

+	port_priv->pd = ib_alloc_pd(device, 0);
+	if (IS_ERR(port_priv->pd)) {
+		dev_err(&device->dev, "Couldn't create ib_mad PD\n");
+		ret = PTR_ERR(port_priv->pd);
+		goto error3;
+	}
+
 	port_priv->cq = ib_alloc_cq(port_priv->device, port_priv, cq_size, 0,
 			IB_POLL_WORKQUEUE);
 	if (IS_ERR(port_priv->cq)) {
 		dev_err(&device->dev, "Couldn't create ib_mad CQ\n");
 		ret = PTR_ERR(port_priv->cq);
-		goto error3;
-	}
-
-	port_priv->pd = ib_alloc_pd(device, 0);
-	if (IS_ERR(port_priv->pd)) {
-		dev_err(&device->dev, "Couldn't create ib_mad PD\n");
-		ret = PTR_ERR(port_priv->pd);
 		goto error4;
 	}
@@ -3209,11 +3209,11 @@ error8:
 error7:
 	destroy_mad_qp(&port_priv->qp_info[0]);
 error6:
-	ib_dealloc_pd(port_priv->pd);
-error4:
 	ib_free_cq(port_priv->cq);
 	cleanup_recv_queue(&port_priv->qp_info[1]);
 	cleanup_recv_queue(&port_priv->qp_info[0]);
+error4:
+	ib_dealloc_pd(port_priv->pd);
 error3:
 	kfree(port_priv);
@@ -3243,8 +3243,8 @@ static int ib_mad_port_close(struct ib_device *device, int port_num)
 	destroy_workqueue(port_priv->wq);
 	destroy_mad_qp(&port_priv->qp_info[1]);
 	destroy_mad_qp(&port_priv->qp_info[0]);
-	ib_dealloc_pd(port_priv->pd);
 	ib_free_cq(port_priv->cq);
+	ib_dealloc_pd(port_priv->pd);
 	cleanup_recv_queue(&port_priv->qp_info[1]);
 	cleanup_recv_queue(&port_priv->qp_info[0]);
 	/* XXX: Handle deallocation of MAD registration tables */
@@ -49,6 +49,7 @@
 #include <linux/sched.h>
 #include <linux/semaphore.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>

 #include <asm/uaccess.h>
@@ -843,11 +844,14 @@ static int ib_umad_unreg_agent(struct ib_umad_file *file, u32 __user *arg)

 	if (get_user(id, arg))
 		return -EFAULT;
+	if (id >= IB_UMAD_MAX_AGENTS)
+		return -EINVAL;

 	mutex_lock(&file->port->file_mutex);
 	mutex_lock(&file->mutex);

-	if (id >= IB_UMAD_MAX_AGENTS || !__get_agent(file, id)) {
+	id = array_index_nospec(id, IB_UMAD_MAX_AGENTS);
+	if (!__get_agent(file, id)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -145,7 +145,12 @@ static int iforce_usb_probe(struct usb_interface *intf,
 		return -ENODEV;

 	epirq = &interface->endpoint[0].desc;
+	if (!usb_endpoint_is_int_in(epirq))
+		return -ENODEV;
+
 	epout = &interface->endpoint[1].desc;
+	if (!usb_endpoint_is_int_out(epout))
+		return -ENODEV;

 	if (!(iforce = kzalloc(sizeof(struct iforce) + 32, GFP_KERNEL)))
 		goto fail;
@@ -153,7 +153,8 @@ struct trackpoint_data
 #ifdef CONFIG_MOUSE_PS2_TRACKPOINT
 int trackpoint_detect(struct psmouse *psmouse, bool set_properties);
 #else
-inline int trackpoint_detect(struct psmouse *psmouse, bool set_properties)
+static inline int trackpoint_detect(struct psmouse *psmouse,
+				    bool set_properties)
 {
 	return -ENOSYS;
 }
@@ -125,6 +125,10 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *id)
 	if (intf->cur_altsetting->desc.bNumEndpoints < 1)
 		return -ENODEV;

+	endpoint = &intf->cur_altsetting->endpoint[0].desc;
+	if (!usb_endpoint_is_int_in(endpoint))
+		return -ENODEV;
+
 	kbtab = kzalloc(sizeof(struct kbtab), GFP_KERNEL);
 	input_dev = input_allocate_device();
 	if (!kbtab || !input_dev)
@@ -163,8 +167,6 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *id)
 	input_set_abs_params(input_dev, ABS_Y, 0, 0x1750, 4, 0);
 	input_set_abs_params(input_dev, ABS_PRESSURE, 0, 0xff, 0, 0);

-	endpoint = &intf->cur_altsetting->endpoint[0].desc;
-
 	usb_fill_int_urb(kbtab->irq, dev,
 			 usb_rcvintpipe(dev, endpoint->bEndpointAddress),
 			 kbtab->data, 8,
@@ -1534,7 +1534,7 @@ static const struct attribute_group *amd_iommu_groups[] = {
 	NULL,
 };

-static int iommu_init_pci(struct amd_iommu *iommu)
+static int __init iommu_init_pci(struct amd_iommu *iommu)
 {
 	int cap_ptr = iommu->cap_ptr;
 	u32 range, misc, low, high;
@@ -145,6 +145,7 @@ static struct irq_chip gpcv2_irqchip_data_chip = {
 	.irq_unmask		= imx_gpcv2_irq_unmask,
 	.irq_set_wake		= imx_gpcv2_irq_set_wake,
 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+	.irq_set_type		= irq_chip_set_type_parent,
 #ifdef CONFIG_SMP
 	.irq_set_affinity	= irq_chip_set_affinity_parent,
 #endif
@@ -1107,7 +1107,9 @@ static void bond_compute_features(struct bonding *bond)

 done:
 	bond_dev->vlan_features = vlan_features;
-	bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL;
+	bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
+				    NETIF_F_HW_VLAN_CTAG_TX |
+				    NETIF_F_HW_VLAN_STAG_TX;
 	bond_dev->hard_header_len = max_hard_header_len;
 	bond_dev->gso_max_segs = gso_max_segs;
 	netif_set_gso_max_size(bond_dev, gso_max_size);
@@ -592,16 +592,16 @@ static int peak_usb_ndo_stop(struct net_device *netdev)
 	dev->state &= ~PCAN_USB_STATE_STARTED;
 	netif_stop_queue(netdev);

+	close_candev(netdev);
+
+	dev->can.state = CAN_STATE_STOPPED;
+
 	/* unlink all pending urbs and free used memory */
 	peak_usb_unlink_all_urbs(dev);

 	if (dev->adapter->dev_stop)
 		dev->adapter->dev_stop(dev);

-	close_candev(netdev);
-
-	dev->can.state = CAN_STATE_STOPPED;
-
 	/* can set bus off now */
 	if (dev->adapter->dev_set_bus) {
 		int err = dev->adapter->dev_set_bus(dev, 0);
@@ -851,7 +851,7 @@ static int pcan_usb_fd_init(struct peak_usb_device *dev)
 		goto err_out;

 	/* allocate command buffer once for all for the interface */
-	pdev->cmd_buffer_addr = kmalloc(PCAN_UFD_CMD_BUFFER_SIZE,
+	pdev->cmd_buffer_addr = kzalloc(PCAN_UFD_CMD_BUFFER_SIZE,
 					GFP_KERNEL);
 	if (!pdev->cmd_buffer_addr)
 		goto err_out_1;
@@ -500,7 +500,7 @@ static int pcan_usb_pro_drv_loaded(struct peak_usb_device *dev, int loaded)
 	u8 *buffer;
 	int err;

-	buffer = kmalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL);
+	buffer = kzalloc(PCAN_USBPRO_FCT_DRVLD_REQ_LEN, GFP_KERNEL);
 	if (!buffer)
 		return -ENOMEM;
@@ -3062,12 +3062,13 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link)
 	/* if VF indicate to PF this function is going down (PF will delete sp
 	 * elements and clear initializations
 	 */
-	if (IS_VF(bp))
+	if (IS_VF(bp)) {
+		bnx2x_clear_vlan_info(bp);
 		bnx2x_vfpf_close_vf(bp);
-	else if (unload_mode != UNLOAD_RECOVERY)
+	} else if (unload_mode != UNLOAD_RECOVERY) {
 		/* if this is a normal/close unload need to clean up chip*/
 		bnx2x_chip_cleanup(bp, unload_mode, keep_link);
-	else {
+	} else {
 		/* Send the UNLOAD_REQUEST to the MCP */
 		bnx2x_send_unload_req(bp, unload_mode);
@@ -425,6 +425,8 @@ void bnx2x_set_reset_global(struct bnx2x *bp);
 void bnx2x_disable_close_the_gate(struct bnx2x *bp);
 int bnx2x_init_hw_func_cnic(struct bnx2x *bp);

+void bnx2x_clear_vlan_info(struct bnx2x *bp);
+
 /**
  * bnx2x_sp_event - handle ramrods completion.
  *
@@ -8488,11 +8488,21 @@ int bnx2x_set_vlan_one(struct bnx2x *bp, u16 vlan,
 	return rc;
 }

+void bnx2x_clear_vlan_info(struct bnx2x *bp)
+{
+	struct bnx2x_vlan_entry *vlan;
+
+	/* Mark that hw forgot all entries */
+	list_for_each_entry(vlan, &bp->vlan_reg, link)
+		vlan->hw = false;
+
+	bp->vlan_cnt = 0;
+}
+
 static int bnx2x_del_all_vlans(struct bnx2x *bp)
 {
 	struct bnx2x_vlan_mac_obj *vlan_obj = &bp->sp_objs[0].vlan_obj;
 	unsigned long ramrod_flags = 0, vlan_flags = 0;
-	struct bnx2x_vlan_entry *vlan;
 	int rc;

 	__set_bit(RAMROD_COMP_WAIT, &ramrod_flags);
@@ -8501,10 +8511,7 @@ static int bnx2x_del_all_vlans(struct bnx2x *bp)
 	if (rc)
 		return rc;

-	/* Mark that hw forgot all entries */
-	list_for_each_entry(vlan, &bp->vlan_reg, link)
-		vlan->hw = false;
-	bp->vlan_cnt = 0;
+	bnx2x_clear_vlan_info(bp);

 	return 0;
 }
@@ -441,12 +441,6 @@ arfs_hash_bucket(struct arfs_table *arfs_t, __be16 src_port,
 	return &arfs_t->rules_hash[bucket_idx];
 }

-static u8 arfs_get_ip_proto(const struct sk_buff *skb)
-{
-	return (skb->protocol == htons(ETH_P_IP)) ?
-		ip_hdr(skb)->protocol : ipv6_hdr(skb)->nexthdr;
-}
-
 static struct arfs_table *arfs_get_table(struct mlx5e_arfs_tables *arfs,
 					 u8 ip_proto, __be16 etype)
 {
@@ -605,31 +599,9 @@ out:
 	arfs_may_expire_flow(priv);
 }

-/* return L4 destination port from ip4/6 packets */
-static __be16 arfs_get_dst_port(const struct sk_buff *skb)
-{
-	char *transport_header;
-
-	transport_header = skb_transport_header(skb);
-	if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
-		return ((struct tcphdr *)transport_header)->dest;
-	return ((struct udphdr *)transport_header)->dest;
-}
-
-/* return L4 source port from ip4/6 packets */
-static __be16 arfs_get_src_port(const struct sk_buff *skb)
-{
-	char *transport_header;
-
-	transport_header = skb_transport_header(skb);
-	if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
-		return ((struct tcphdr *)transport_header)->source;
-	return ((struct udphdr *)transport_header)->source;
-}
-
 static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
 					 struct arfs_table *arfs_t,
-					 const struct sk_buff *skb,
+					 const struct flow_keys *fk,
 					 u16 rxq, u32 flow_id)
 {
 	struct arfs_rule *rule;
@@ -644,19 +616,19 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
 	INIT_WORK(&rule->arfs_work, arfs_handle_work);

 	tuple = &rule->tuple;
-	tuple->etype = skb->protocol;
+	tuple->etype = fk->basic.n_proto;
+	tuple->ip_proto = fk->basic.ip_proto;
 	if (tuple->etype == htons(ETH_P_IP)) {
-		tuple->src_ipv4 = ip_hdr(skb)->saddr;
-		tuple->dst_ipv4 = ip_hdr(skb)->daddr;
+		tuple->src_ipv4 = fk->addrs.v4addrs.src;
+		tuple->dst_ipv4 = fk->addrs.v4addrs.dst;
 	} else {
-		memcpy(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
+		memcpy(&tuple->src_ipv6, &fk->addrs.v6addrs.src,
 		       sizeof(struct in6_addr));
-		memcpy(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
+		memcpy(&tuple->dst_ipv6, &fk->addrs.v6addrs.dst,
 		       sizeof(struct in6_addr));
 	}
-	tuple->ip_proto = arfs_get_ip_proto(skb);
-	tuple->src_port = arfs_get_src_port(skb);
-	tuple->dst_port = arfs_get_dst_port(skb);
+	tuple->src_port = fk->ports.src;
+	tuple->dst_port = fk->ports.dst;

 	rule->flow_id = flow_id;
 	rule->filter_id = priv->fs.arfs.last_filter_id++ % RPS_NO_FILTER;
@@ -667,37 +639,33 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
 	return rule;
 }

-static bool arfs_cmp_ips(struct arfs_tuple *tuple,
-			 const struct sk_buff *skb)
+static bool arfs_cmp(const struct arfs_tuple *tuple, const struct flow_keys *fk)
 {
-	if (tuple->etype == htons(ETH_P_IP) &&
-	    tuple->src_ipv4 == ip_hdr(skb)->saddr &&
-	    tuple->dst_ipv4 == ip_hdr(skb)->daddr)
-		return true;
-	if (tuple->etype == htons(ETH_P_IPV6) &&
-	    (!memcmp(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
-		     sizeof(struct in6_addr))) &&
-	    (!memcmp(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
-		     sizeof(struct in6_addr))))
-		return true;
+	if (tuple->src_port != fk->ports.src || tuple->dst_port != fk->ports.dst)
+		return false;
+	if (tuple->etype != fk->basic.n_proto)
+		return false;
+	if (tuple->etype == htons(ETH_P_IP))
+		return tuple->src_ipv4 == fk->addrs.v4addrs.src &&
+		       tuple->dst_ipv4 == fk->addrs.v4addrs.dst;
+	if (tuple->etype == htons(ETH_P_IPV6))
+		return !memcmp(&tuple->src_ipv6, &fk->addrs.v6addrs.src,
+			       sizeof(struct in6_addr)) &&
+		       !memcmp(&tuple->dst_ipv6, &fk->addrs.v6addrs.dst,
+			       sizeof(struct in6_addr));
 	return false;
 }

 static struct arfs_rule *arfs_find_rule(struct arfs_table *arfs_t,
-					const struct sk_buff *skb)
+					const struct flow_keys *fk)
 {
 	struct arfs_rule *arfs_rule;
 	struct hlist_head *head;
-	__be16 src_port = arfs_get_src_port(skb);
-	__be16 dst_port = arfs_get_dst_port(skb);

-	head = arfs_hash_bucket(arfs_t, src_port, dst_port);
+	head = arfs_hash_bucket(arfs_t, fk->ports.src, fk->ports.dst);
 	hlist_for_each_entry(arfs_rule, head, hlist) {
-		if (arfs_rule->tuple.src_port == src_port &&
-		    arfs_rule->tuple.dst_port == dst_port &&
-		    arfs_cmp_ips(&arfs_rule->tuple, skb)) {
+		if (arfs_cmp(&arfs_rule->tuple, fk))
 			return arfs_rule;
-		}
 	}

 	return NULL;
@@ -710,20 +678,24 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
 	struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
 	struct arfs_table *arfs_t;
 	struct arfs_rule *arfs_rule;
+	struct flow_keys fk;

-	if (skb->protocol != htons(ETH_P_IP) &&
-	    skb->protocol != htons(ETH_P_IPV6))
+	if (!skb_flow_dissect_flow_keys(skb, &fk, 0))
+		return -EPROTONOSUPPORT;
+
+	if (fk.basic.n_proto != htons(ETH_P_IP) &&
+	    fk.basic.n_proto != htons(ETH_P_IPV6))
 		return -EPROTONOSUPPORT;

 	if (skb->encapsulation)
 		return -EPROTONOSUPPORT;

-	arfs_t = arfs_get_table(arfs, arfs_get_ip_proto(skb), skb->protocol);
+	arfs_t = arfs_get_table(arfs, fk.basic.ip_proto, fk.basic.n_proto);
 	if (!arfs_t)
 		return -EPROTONOSUPPORT;

 	spin_lock_bh(&arfs->arfs_lock);
-	arfs_rule = arfs_find_rule(arfs_t, skb);
+	arfs_rule = arfs_find_rule(arfs_t, &fk);
 	if (arfs_rule) {
 		if (arfs_rule->rxq == rxq_index) {
 			spin_unlock_bh(&arfs->arfs_lock);
@@ -731,8 +703,7 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
 		}
 		arfs_rule->rxq = rxq_index;
 	} else {
-		arfs_rule = arfs_alloc_rule(priv, arfs_t, skb,
-					    rxq_index, flow_id);
+		arfs_rule = arfs_alloc_rule(priv, arfs_t, &fk, rxq_index, flow_id);
 		if (!arfs_rule) {
 			spin_unlock_bh(&arfs->arfs_lock);
 			return -ENOMEM;
@@ -1149,6 +1149,9 @@ static int mlx5e_set_pauseparam(struct net_device *netdev,
 	struct mlx5_core_dev *mdev = priv->mdev;
 	int err;

+	if (!MLX5_CAP_GEN(mdev, vport_group_manager))
+		return -EOPNOTSUPP;
+
 	if (pauseparam->autoneg)
 		return -EINVAL;
@@ -1014,7 +1014,9 @@ static void ___team_compute_features(struct team *team)
 	}

 	team->dev->vlan_features = vlan_features;
-	team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL;
+	team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL |
+				     NETIF_F_HW_VLAN_CTAG_TX |
+				     NETIF_F_HW_VLAN_STAG_TX;
 	team->dev->hard_header_len = max_hard_header_len;

 	team->dev->priv_flags &= ~IFF_XMIT_DST_RELEASE;
@@ -285,7 +285,7 @@ static void mdio_write(struct net_device *dev, int phy_id, int loc, int val)
 static int read_eprom_word(pegasus_t *pegasus, __u8 index, __u16 *retdata)
 {
 	int i;
-	__u8 tmp;
+	__u8 tmp = 0;
 	__le16 retdatai;
 	int ret;
@@ -439,6 +439,8 @@ static void iwl_pcie_tfd_unmap(struct iwl_trans *trans,
 					 DMA_TO_DEVICE);
 	}

+	meta->tbs = 0;
+
 	if (trans->cfg->use_tfh) {
 		struct iwl_tfh_tfd *tfd_fh = (void *)tfd;
@@ -120,6 +120,7 @@ enum {

 #define MWIFIEX_MAX_TOTAL_SCAN_TIME	(MWIFIEX_TIMER_10S - MWIFIEX_TIMER_1S)

+#define WPA_GTK_OUI_OFFSET				2
 #define RSN_GTK_OUI_OFFSET				2

 #define MWIFIEX_OUI_NOT_PRESENT			0
@@ -181,7 +181,8 @@ mwifiex_is_wpa_oui_present(struct mwifiex_bssdescriptor *bss_desc, u32 cipher)
 	u8 ret = MWIFIEX_OUI_NOT_PRESENT;

 	if (has_vendor_hdr(bss_desc->bcn_wpa_ie, WLAN_EID_VENDOR_SPECIFIC)) {
-		iebody = (struct ie_body *) bss_desc->bcn_wpa_ie->data;
+		iebody = (struct ie_body *)((u8 *)bss_desc->bcn_wpa_ie->data +
+					    WPA_GTK_OUI_OFFSET);
 		oui = &mwifiex_wpa_oui[cipher][0];
 		ret = mwifiex_search_oui_in_ie(iebody, oui);
 		if (ret)
@@ -927,6 +927,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 			skb_shinfo(skb)->nr_frags = MAX_SKB_FRAGS;
 			nskb = xenvif_alloc_skb(0);
 			if (unlikely(nskb == NULL)) {
+				skb_shinfo(skb)->nr_frags = 0;
 				kfree_skb(skb);
 				xenvif_tx_err(queue, &txreq, extra_count, idx);
 				if (net_ratelimit())
@@ -942,6 +943,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,

 		if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 			/* Failure in xenvif_set_skb_gso is fatal. */
+			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			kfree_skb(nskb);
 			break;
@@ -1576,13 +1576,13 @@ static int handle_outbound(struct qdio_q *q, unsigned int callflags,
 		rc = qdio_kick_outbound_q(q, phys_aob);
 	} else if (need_siga_sync(q)) {
 		rc = qdio_siga_sync_q(q);
+	} else if (count < QDIO_MAX_BUFFERS_PER_Q &&
+		   get_buf_state(q, prev_buf(bufnr), &state, 0) > 0 &&
+		   state == SLSB_CU_OUTPUT_PRIMED) {
+		/* The previous buffer is not processed yet, tack on. */
+		qperf_inc(q, fast_requeue);
 	} else {
-		/* try to fast requeue buffers */
-		get_buf_state(q, prev_buf(bufnr), &state, 0);
-		if (state != SLSB_CU_OUTPUT_PRIMED)
-			rc = qdio_kick_outbound_q(q, 0);
-		else
-			qperf_inc(q, fast_requeue);
+		rc = qdio_kick_outbound_q(q, 0);
 	}

 	/* in case of SIGA errors we must process the error immediately */
@@ -53,6 +53,7 @@
 #define ALUA_FAILOVER_TIMEOUT		60
 #define ALUA_FAILOVER_RETRIES		5
 #define ALUA_RTPG_DELAY_MSECS		5
+#define ALUA_RTPG_RETRY_DELAY		2

 /* device handler flags */
 #define ALUA_OPTIMIZE_STPG		0x01
@@ -681,7 +682,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
 	case SCSI_ACCESS_STATE_TRANSITIONING:
 		if (time_before(jiffies, pg->expiry)) {
 			/* State transition, retry */
-			pg->interval = 2;
+			pg->interval = ALUA_RTPG_RETRY_DELAY;
 			err = SCSI_DH_RETRY;
 		} else {
 			struct alua_dh_data *h;
@@ -809,6 +810,8 @@ static void alua_rtpg_work(struct work_struct *work)
 		spin_lock_irqsave(&pg->lock, flags);
 		pg->flags &= ~ALUA_PG_RUNNING;
 		pg->flags |= ALUA_PG_RUN_RTPG;
+		if (!pg->interval)
+			pg->interval = ALUA_RTPG_RETRY_DELAY;
 		spin_unlock_irqrestore(&pg->lock, flags);
 		queue_delayed_work(alua_wq, &pg->rtpg_work,
 				   pg->interval * HZ);
@@ -820,6 +823,8 @@ static void alua_rtpg_work(struct work_struct *work)
 	spin_lock_irqsave(&pg->lock, flags);
 	if (err == SCSI_DH_RETRY || pg->flags & ALUA_PG_RUN_RTPG) {
 		pg->flags &= ~ALUA_PG_RUNNING;
+		if (!pg->interval && !(pg->flags & ALUA_PG_RUN_RTPG))
+			pg->interval = ALUA_RTPG_RETRY_DELAY;
 		pg->flags |= ALUA_PG_RUN_RTPG;
 		spin_unlock_irqrestore(&pg->lock, flags);
 		queue_delayed_work(alua_wq, &pg->rtpg_work,
@@ -2236,6 +2236,8 @@ static int handle_ioaccel_mode2_error(struct ctlr_info *h,
 	case IOACCEL2_SERV_RESPONSE_COMPLETE:
 		switch (c2->error_data.status) {
 		case IOACCEL2_STATUS_SR_TASK_COMP_GOOD:
+			if (cmd)
+				cmd->result = 0;
 			break;
 		case IOACCEL2_STATUS_SR_TASK_COMP_CHK_COND:
 			cmd->result |= SAM_STAT_CHECK_CONDITION;
@@ -2423,8 +2425,10 @@ static void process_ioaccel2_completion(struct ctlr_info *h,

 	/* check for good status */
 	if (likely(c2->error_data.serv_response == 0 &&
-			c2->error_data.status == 0))
+			c2->error_data.status == 0)) {
+		cmd->result = 0;
 		return hpsa_cmd_free_and_done(h, c, cmd);
+	}

 	/*
 	 * Any RAID offload error results in retry which will use
@@ -5511,6 +5515,12 @@ static int hpsa_scsi_queue_command(struct Scsi_Host *sh, struct scsi_cmnd *cmd)
 	}
 	c = cmd_tagged_alloc(h, cmd);

+	/*
+	 * This is necessary because the SML doesn't zero out this field during
+	 * error recovery.
+	 */
+	cmd->result = 0;
+
 	/*
 	 * Call alternate submit routine for I/O accelerated commands.
 	 * Retries always go down the normal I/O path.
@@ -4883,8 +4883,8 @@ static int ibmvfc_remove(struct vio_dev *vdev)

 	spin_lock_irqsave(vhost->host->host_lock, flags);
 	ibmvfc_purge_requests(vhost, DID_ERROR);
-	ibmvfc_free_event_pool(vhost);
 	spin_unlock_irqrestore(vhost->host->host_lock, flags);
+	ibmvfc_free_event_pool(vhost);

 	ibmvfc_free_mem(vhost);
 	spin_lock(&ibmvfc_driver_lock);
@@ -2847,6 +2847,7 @@ megasas_fw_crash_buffer_show(struct device *cdev,
 	u32 size;
 	unsigned long buff_addr;
 	unsigned long dmachunk = CRASH_DMA_BUF_SIZE;
+	unsigned long chunk_left_bytes;
 	unsigned long src_addr;
 	unsigned long flags;
 	u32 buff_offset;
@@ -2872,6 +2873,8 @@ megasas_fw_crash_buffer_show(struct device *cdev,
 	}

 	size = (instance->fw_crash_buffer_size * dmachunk) - buff_offset;
+	chunk_left_bytes = dmachunk - (buff_offset % dmachunk);
+	size = (size > chunk_left_bytes) ? chunk_left_bytes : size;
 	size = (size >= PAGE_SIZE) ? (PAGE_SIZE - 1) : size;

 	src_addr = (unsigned long)instance->crash_buf[buff_offset / dmachunk] +
@@ -1707,9 +1707,11 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
 {
 	struct sysinfo s;
 	u64 consistent_dma_mask;
+	/* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
+	int dma_mask = (ioc->hba_mpi_version_belonged > MPI2_VERSION) ? 63 : 64;

 	if (ioc->dma_mask)
-		consistent_dma_mask = DMA_BIT_MASK(64);
+		consistent_dma_mask = DMA_BIT_MASK(dma_mask);
 	else
 		consistent_dma_mask = DMA_BIT_MASK(32);
@@ -1717,11 +1719,11 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
 		const uint64_t required_mask =
 		    dma_get_required_mask(&pdev->dev);
 		if ((required_mask > DMA_BIT_MASK(32)) &&
-		    !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) &&
+		    !pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_mask)) &&
 		    !pci_set_consistent_dma_mask(pdev, consistent_dma_mask)) {
 			ioc->base_add_sg_single = &_base_add_sg_single_64;
 			ioc->sge_size = sizeof(Mpi2SGESimple64_t);
-			ioc->dma_mask = 64;
+			ioc->dma_mask = dma_mask;
 			goto out;
 		}
 	}
@@ -1747,7 +1749,7 @@ static int
 _base_change_consistent_dma_mask(struct MPT3SAS_ADAPTER *ioc,
				      struct pci_dev *pdev)
 {
-	if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
+	if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(ioc->dma_mask))) {
 		if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)))
 			return -ENODEV;
 	}
@@ -3381,7 +3383,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
 		total_sz += sz;
 	} while (ioc->rdpq_array_enable && (++i < ioc->reply_queue_count));

-	if (ioc->dma_mask == 64) {
+	if (ioc->dma_mask > 32) {
 		if (_base_change_consistent_dma_mask(ioc, ioc->pdev) != 0) {
 			pr_warn(MPT3SAS_FMT
 			    "no suitable consistent DMA mask for %s\n",
@@ -801,6 +801,7 @@ static unsigned long lowmem_scan(struct shrinker *s, struct shrink_control *sc)
 		handle_lmk_event(selected, selected_tasksize, min_score_adj);
 		put_task_struct(selected);
 	}
+
 	return rem;
 }
@@ -351,9 +351,9 @@ static irqreturn_t dt3k_interrupt(int irq, void *d)
 static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec,
			    unsigned int flags)
 {
-	int divider, base, prescale;
+	unsigned int divider, base, prescale;

-	/* This function needs improvment */
+	/* This function needs improvement */
 	/* Don't know if divider==0 works. */

 	for (prescale = 0; prescale < 16; prescale++) {
@@ -367,7 +367,7 @@ static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec,
 		divider = (*nanosec) / base;
 		break;
 	case CMDF_ROUND_UP:
-		divider = (*nanosec) / base;
+		divider = DIV_ROUND_UP(*nanosec, base);
 		break;
 	}
 	if (divider < 65536) {
@@ -377,7 +377,7 @@ static int dt3k_ns_to_timer(unsigned int timer_base, unsigned int *nanosec,
 	}

 	prescale = 15;
-	base = timer_base * (1 << prescale);
+	base = timer_base * (prescale + 1);
 	divider = 65535;
 	*nanosec = divider * base;
 	return (prescale << 16) | (divider);
@@ -137,8 +137,7 @@ static void __ldsem_wake_readers(struct ld_semaphore *sem)

 	list_for_each_entry_safe(waiter, next, &sem->read_wait, list) {
 		tsk = waiter->task;
-		smp_mb();
-		waiter->task = NULL;
+		smp_store_release(&waiter->task, NULL);
 		wake_up_process(tsk);
 		put_task_struct(tsk);
 	}
@@ -234,7 +233,7 @@ down_read_failed(struct ld_semaphore *sem, long count, long timeout)
 	for (;;) {
 		set_task_state(tsk, TASK_UNINTERRUPTIBLE);

-		if (!waiter.task)
+		if (!smp_load_acquire(&waiter.task))
 			break;
 		if (!timeout)
 			break;
@@ -1264,10 +1264,6 @@ made_compressed_probe:
 	if (acm == NULL)
 		goto alloc_fail;

-	minor = acm_alloc_minor(acm);
-	if (minor < 0)
-		goto alloc_fail1;
-
 	ctrlsize = usb_endpoint_maxp(epctrl);
 	readsize = usb_endpoint_maxp(epread) *
 				(quirks == SINGLE_RX_URB ? 1 : 2);
@@ -1275,6 +1271,13 @@ made_compressed_probe:
 	acm->writesize = usb_endpoint_maxp(epwrite) * 20;
 	acm->control = control_interface;
 	acm->data = data_interface;
+
+	usb_get_intf(acm->control); /* undone in destruct() */
+
+	minor = acm_alloc_minor(acm);
+	if (minor < 0)
+		goto alloc_fail1;
+
 	acm->minor = minor;
 	acm->dev = usb_dev;
 	if (h.usb_cdc_acm_descriptor)
@@ -1420,7 +1423,6 @@ skip_countries:
 	usb_driver_claim_interface(&acm_driver, data_interface, acm);
 	usb_set_intfdata(data_interface, acm);

-	usb_get_intf(control_interface);
 	tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
 			&control_interface->dev);
 	if (IS_ERR(tty_dev)) {
@@ -1810,8 +1810,6 @@ static int proc_do_submiturb(struct usb_dev_state *ps, struct usbdevfs_urb *uurb
 	return 0;

 error:
-	if (as && as->usbm)
-		dec_usb_memory_use_count(as->usbm, &as->usbm->urb_use_count);
 	kfree(isopkt);
 	kfree(dr);
 	if (as)
@@ -191,9 +191,10 @@ int usb_register_dev(struct usb_interface *intf,
 			intf->minor = minor;
 			break;
 		}
-	up_write(&minor_rwsem);
-	if (intf->minor < 0)
+	if (intf->minor < 0) {
+		up_write(&minor_rwsem);
 		return -EXFULL;
+	}

 	/* create a usb class device for this usb interface */
 	snprintf(name, sizeof(name), class_driver->name, minor - minor_base);

@@ -201,12 +202,11 @@ int usb_register_dev(struct usb_interface *intf,
 			MKDEV(USB_MAJOR, minor), class_driver,
 			"%s", kbasename(name));
 	if (IS_ERR(intf->usb_dev)) {
-		down_write(&minor_rwsem);
 		usb_minors[minor] = NULL;
 		intf->minor = -1;
-		up_write(&minor_rwsem);
 		retval = PTR_ERR(intf->usb_dev);
 	}
+	up_write(&minor_rwsem);
 	return retval;
 }
 EXPORT_SYMBOL_GPL(usb_register_dev);

@@ -232,12 +232,12 @@ void usb_deregister_dev(struct usb_interface *intf,
 		return;

 	dev_dbg(&intf->dev, "removing %d minor\n", intf->minor);
-	device_destroy(usb_class->class, MKDEV(USB_MAJOR, intf->minor));

 	down_write(&minor_rwsem);
 	usb_minors[intf->minor] = NULL;
 	up_write(&minor_rwsem);

+	device_destroy(usb_class->class, MKDEV(USB_MAJOR, intf->minor));
 	intf->usb_dev = NULL;
 	intf->minor = -1;
 	destroy_usb_class();
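The usb_register_dev()/usb_deregister_dev() hunks (drivers/usb/core/file.c) widen the minor_rwsem critical sections: registration now holds the lock until the class device either exists or the slot is unwound, and deregistration empties the minor table before destroying the device. This closes the window in which a concurrent open() could look up a minor whose backing device is mid-teardown. A sketch of the hide-then-destroy side as a userspace rwlock analog (names invented):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
    static void *minors[256];

    void deregister(int minor)
    {
    	pthread_rwlock_wrlock(&table_lock);
    	minors[minor] = NULL;	/* hide the entry under the lock ... */
    	pthread_rwlock_unlock(&table_lock);
    	/* ... then tear the object down; lookups can no longer reach it */
    }

    int main(void)
    {
    	int dummy;

    	minors[5] = &dummy;
    	deregister(5);
    	printf("minor 5 -> %p\n", minors[5]);
    	return 0;
    }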
@@ -2311,14 +2311,14 @@ int cdc_parse_cdc_header(struct usb_cdc_parsed_header *hdr,
 				(struct usb_cdc_dmm_desc *)buffer;
 			break;
 		case USB_CDC_MDLM_TYPE:
-			if (elength < sizeof(struct usb_cdc_mdlm_desc *))
+			if (elength < sizeof(struct usb_cdc_mdlm_desc))
 				goto next_desc;
 			if (desc)
 				return -EINVAL;
 			desc = (struct usb_cdc_mdlm_desc *)buffer;
 			break;
 		case USB_CDC_MDLM_DETAIL_TYPE:
-			if (elength < sizeof(struct usb_cdc_mdlm_detail_desc *))
+			if (elength < sizeof(struct usb_cdc_mdlm_detail_desc))
 				goto next_desc;
 			if (detail)
 				return -EINVAL;
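The CDC union-parser hunks fix a one-character bug class: sizeof(struct usb_cdc_mdlm_desc *) measures a pointer, not the descriptor, so undersized descriptors slipped past the length check. A standalone demonstration; the struct layout below is a stand-in, not the real usb_cdc_mdlm_desc:

    #include <stdio.h>

    struct mdlm_desc_like {	/* stand-in with a plausible layout */
    	unsigned char  bLength;
    	unsigned char  bDescriptorType;
    	unsigned char  bDescriptorSubType;
    	unsigned short bcdVersion;
    	unsigned char  bGUID[16];
    };

    int main(void)
    {
    	/* on a 64-bit build: 8 vs 22 (plus padding), a huge difference
    	 * for a minimum-length sanity check */
    	printf("sizeof(struct *) = %zu\n", sizeof(struct mdlm_desc_like *));
    	printf("sizeof(struct)   = %zu\n", sizeof(struct mdlm_desc_like));
    	return 0;
    }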
@@ -886,19 +886,20 @@ static void iowarrior_disconnect(struct usb_interface *interface)
 	dev = usb_get_intfdata(interface);
 	mutex_lock(&iowarrior_open_disc_lock);
 	usb_set_intfdata(interface, NULL);
+	/* prevent device read, write and ioctl */
+	dev->present = 0;

 	minor = dev->minor;
+	mutex_unlock(&iowarrior_open_disc_lock);
+	/* give back our minor - this will call close() locks need to be dropped at this point*/

-	/* give back our minor */
 	usb_deregister_dev(interface, &iowarrior_class);

 	mutex_lock(&dev->mutex);

-	/* prevent device read, write and ioctl */
-	dev->present = 0;
-
 	mutex_unlock(&dev->mutex);
-	mutex_unlock(&iowarrior_open_disc_lock);

 	if (dev->opened) {
 		/* There is a process that holds a filedescriptor to the device ,
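The iowarrior_disconnect() hunk (drivers/usb/misc/iowarrior.c) marks the device absent and drops iowarrior_open_disc_lock before calling usb_deregister_dev(), which can end up invoking the final close(), a path that takes the same lock. Holding it across that call would self-deadlock. The shape of the fix as a compilable pthread sketch (names invented):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t open_disc_lock = PTHREAD_MUTEX_INITIALIZER;
    static int present = 1;

    /* Stands in for the final close(): it takes open_disc_lock, so the
     * disconnect path must not hold that lock when this can run. */
    void last_close(void)
    {
    	pthread_mutex_lock(&open_disc_lock);
    	printf("final close, present=%d\n", present);
    	pthread_mutex_unlock(&open_disc_lock);
    }

    void disconnect(void)
    {
    	pthread_mutex_lock(&open_disc_lock);
    	present = 0;			/* block new I/O first */
    	pthread_mutex_unlock(&open_disc_lock);	/* drop the lock ... */
    	last_close();			/* ... before anything that retakes it */
    }

    int main(void)
    {
    	disconnect();
    	return 0;
    }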
@@ -96,7 +96,6 @@ static void yurex_delete(struct kref *kref)

 	dev_dbg(&dev->interface->dev, "%s\n", __func__);

-	usb_put_dev(dev->udev);
 	if (dev->cntl_urb) {
 		usb_kill_urb(dev->cntl_urb);
 		kfree(dev->cntl_req);

@@ -112,6 +111,7 @@ static void yurex_delete(struct kref *kref)
 				dev->int_buffer, dev->urb->transfer_dma);
 		usb_free_urb(dev->urb);
 	}
+	usb_put_dev(dev->udev);
 	kfree(dev);
 }
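The yurex_delete() hunks (drivers/usb/misc/yurex.c) move usb_put_dev() after the URB and DMA-buffer teardown, which still dereferences dev->udev; dropping the reference first risks a use-after-free. The general rule as a tiny refcount sketch, not the driver's types:

    #include <stdio.h>
    #include <stdlib.h>

    struct ref {
    	int count;
    };

    void put(struct ref *r)
    {
    	if (--r->count == 0)
    		free(r);	/* object is gone after this point */
    }

    void teardown(struct ref *r)
    {
    	printf("cleanup that still uses r (count=%d)\n", r->count);
    	put(r);			/* drop the last reference last */
    }

    int main(void)
    {
    	struct ref *r = malloc(sizeof(*r));

    	r->count = 1;
    	teardown(r);
    	return 0;
    }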
@@ -967,6 +967,11 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7B) },
 	{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7C) },

+	/* Motorola devices */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x2a70, 0xff, 0xff, 0xff) },	/* mdm6600 */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x2e0a, 0xff, 0xff, 0xff) },	/* mdm9600 */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x4281, 0x0a, 0x00, 0xfc) },	/* mdm ram dl */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x900e, 0xff, 0xff, 0xff) },	/* mdm qc dl */
+
 	{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) },
 	{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V620) },

@@ -1544,6 +1549,7 @@ static const struct usb_device_id option_ids[] = {
 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff),  /* Telewell TW-LTE 4G v2 */
 	  .driver_info = RSVD(2) },
 	{ USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x1476, 0xff) },	/* GosunCn ZTE WeLink ME3630 (ECM/NCM mode) */
+	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1481, 0xff, 0x00, 0x00) }, /* ZTE MF871A */
 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) },

@@ -1949,11 +1955,15 @@ static const struct usb_device_id option_ids[] = {
 	  .driver_info = RSVD(4) },
 	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff),	/* D-Link DWM-222 */
 	  .driver_info = RSVD(4) },
+	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e3d, 0xff),	/* D-Link DWM-222 A2 */
+	  .driver_info = RSVD(4) },
 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
 	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff),	/* Olicard 600 */
 	  .driver_info = RSVD(4) },
+	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2060, 0xff),	/* BroadMobi BM818 */
+	  .driver_info = RSVD(4) },
 	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },	/* OLICARD300 - MT6225 */
 	{ USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
 	{ USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
@@ -24,7 +24,6 @@ menuconfig VFIO
 	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM_SMMU || ARM_SMMU_V3)
 	select VFIO_IOMMU_SPAPR_TCE if (PPC_POWERNV || PPC_PSERIES)
 	select VFIO_SPAPR_EEH if (PPC_POWERNV || PPC_PSERIES)
-	select ANON_INODES
 	help
 	  VFIO provides a framework for secure userspace device drivers.
 	  See Documentation/vfio.txt for more details.
@@ -39,6 +39,12 @@ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
  * Using this limit prevents one virtqueue from starving others. */
 #define VHOST_NET_WEIGHT 0x80000

+/* Max number of packets transferred before requeueing the job.
+ * Using this limit prevents one virtqueue from starving others with small
+ * pkts.
+ */
+#define VHOST_NET_PKT_WEIGHT 256
+
 /* MAX number of TX used buffers for outstanding zerocopy */
 #define VHOST_MAX_PEND 128
 #define VHOST_GOODCOPY_LEN 256

@@ -372,6 +378,7 @@ static void handle_tx(struct vhost_net *net)
 	struct socket *sock;
 	struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
 	bool zcopy, zcopy_used;
+	int sent_pkts = 0;

 	mutex_lock(&vq->mutex);
 	sock = vq->private_data;

@@ -386,7 +393,7 @@ static void handle_tx(struct vhost_net *net)
 	hdr_size = nvq->vhost_hlen;
 	zcopy = nvq->ubufs;

-	for (;;) {
+	do {
 		/* Release DMAs done buffers first */
 		if (zcopy)
 			vhost_zerocopy_signal_used(net, vq);

@@ -474,11 +481,7 @@ static void handle_tx(struct vhost_net *net)
 			vhost_zerocopy_signal_used(net, vq);
 		total_len += len;
 		vhost_net_tx_packet(net);
-		if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
-			vhost_poll_queue(&vq->poll);
-			break;
-		}
-	}
+	} while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
 out:
 	mutex_unlock(&vq->mutex);
 }

@@ -656,6 +659,7 @@ static void handle_rx(struct vhost_net *net)
 	struct socket *sock;
 	struct iov_iter fixup;
 	__virtio16 num_buffers;
+	int recv_pkts = 0;

 	mutex_lock_nested(&vq->mutex, 0);
 	sock = vq->private_data;

@@ -675,7 +679,10 @@ static void handle_rx(struct vhost_net *net)
 		vq->log : NULL;
 	mergeable = vhost_has_feature(vq, VIRTIO_NET_F_MRG_RXBUF);

-	while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk))) {
+	do {
+		sock_len = vhost_net_rx_peek_head_len(net, sock->sk);
+		if (!sock_len)
+			break;
 		sock_len += sock_hlen;
 		vhost_len = sock_len + vhost_hlen;
 		headcount = get_rx_bufs(vq, vq->heads, vhost_len,

@@ -754,12 +761,10 @@ static void handle_rx(struct vhost_net *net)
 			vhost_log_write(vq, vq_log, log, vhost_len,
 					vq->iov, in);
 		total_len += vhost_len;
-		if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
-			vhost_poll_queue(&vq->poll);
-			goto out;
-		}
-	}
-	vhost_net_enable_vq(net, vq);
+	} while (likely(!vhost_exceeds_weight(vq, ++recv_pkts, total_len)));
+
+	if (!sock_len)
+		vhost_net_enable_vq(net, vq);
 out:
 	mutex_unlock(&vq->mutex);
 }

@@ -828,7 +833,8 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 		n->vqs[i].vhost_hlen = 0;
 		n->vqs[i].sock_hlen = 0;
 	}
-	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX);
+	vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX,
+		       VHOST_NET_PKT_WEIGHT, VHOST_NET_WEIGHT);

 	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, POLLOUT, dev);
 	vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, POLLIN, dev);
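The vhost-net hunks (drivers/vhost/net.c) convert the unbounded for (;;) and while (...) service loops into do { ... } while (!vhost_exceeds_weight(...)), bounding each kick by both a packet budget and a byte budget and requeueing the remainder instead of looping forever. A self-contained analog of the loop shape; the constants mirror the defines above, and the requeue step is only a comment here:

    #include <stdbool.h>
    #include <stdio.h>

    #define PKT_WEIGHT  256
    #define BYTE_WEIGHT 0x80000

    static bool exceeds_weight(int pkts, long total_len)
    {
    	if ((BYTE_WEIGHT && total_len >= BYTE_WEIGHT) || pkts >= PKT_WEIGHT) {
    		/* the kernel calls vhost_poll_queue() here to requeue */
    		return true;
    	}
    	return false;
    }

    int main(void)
    {
    	int pkts = 0;
    	long total_len = 0;

    	do {
    		total_len += 1500;	/* pretend we handled one packet */
    	} while (!exceeds_weight(++pkts, total_len));

    	printf("yielded after %d pkts, %ld bytes\n", pkts, total_len);
    	return 0;
    }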
@@ -58,6 +58,12 @@
 #define VHOST_SCSI_PREALLOC_UPAGES 2048
 #define VHOST_SCSI_PREALLOC_PROT_SGLS 512

+/* Max number of requests before requeueing the job.
+ * Using this limit prevents one virtqueue from starving others with
+ * request.
+ */
+#define VHOST_SCSI_WEIGHT 256
+
 struct vhost_scsi_inflight {
 	/* Wait for the flush operation to finish */
 	struct completion comp;

@@ -845,7 +851,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 	u64 tag;
 	u32 exp_data_len, data_direction;
 	unsigned out, in;
-	int head, ret, prot_bytes;
+	int head, ret, prot_bytes, c = 0;
 	size_t req_size, rsp_size = sizeof(struct virtio_scsi_cmd_resp);
 	size_t out_size, in_size;
 	u16 lun;

@@ -864,7 +870,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)

 	vhost_disable_notify(&vs->dev, vq);

-	for (;;) {
+	do {
 		head = vhost_get_vq_desc(vq, vq->iov,
 					 ARRAY_SIZE(vq->iov), &out, &in,
 					 NULL, NULL);

@@ -1080,7 +1086,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
 		 */
 		INIT_WORK(&cmd->work, vhost_scsi_submission_work);
 		queue_work(vhost_scsi_workqueue, &cmd->work);
-	}
+	} while (likely(!vhost_exceeds_weight(vq, ++c, 0)));
 out:
 	mutex_unlock(&vq->mutex);
 }

@@ -1433,7 +1439,8 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
 		vqs[i] = &vs->vqs[i].vq;
 		vs->vqs[i].vq.handle_kick = vhost_scsi_handle_kick;
 	}
-	vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ);
+	vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ,
+		       VHOST_SCSI_WEIGHT, 0);

 	vhost_scsi_init_inflight(vs, NULL);
@@ -393,8 +393,24 @@ static void vhost_dev_free_iovecs(struct vhost_dev *dev)
 		vhost_vq_free_iovecs(dev->vqs[i]);
 }

+bool vhost_exceeds_weight(struct vhost_virtqueue *vq,
+			  int pkts, int total_len)
+{
+	struct vhost_dev *dev = vq->dev;
+
+	if ((dev->byte_weight && total_len >= dev->byte_weight) ||
+	    pkts >= dev->weight) {
+		vhost_poll_queue(&vq->poll);
+		return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(vhost_exceeds_weight);
+
 void vhost_dev_init(struct vhost_dev *dev,
-		    struct vhost_virtqueue **vqs, int nvqs)
+		    struct vhost_virtqueue **vqs, int nvqs,
+		    int weight, int byte_weight)
 {
 	struct vhost_virtqueue *vq;
 	int i;

@@ -408,6 +424,8 @@ void vhost_dev_init(struct vhost_dev *dev,
 	dev->iotlb = NULL;
 	dev->mm = NULL;
 	dev->worker = NULL;
+	dev->weight = weight;
+	dev->byte_weight = byte_weight;
 	init_llist_head(&dev->work_list);
 	init_waitqueue_head(&dev->wait);
 	INIT_LIST_HEAD(&dev->read_list);
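For reference, the three vhost backends in this merge wire their budgets into vhost_dev_init() as follows (the calls are taken from the hunks above and below). Because the byte check is written as dev->byte_weight && ..., passing 0 disables the byte limit entirely, which is what vhost-scsi does:

    vhost_dev_init(dev, vqs, VHOST_NET_VQ_MAX,
    	       VHOST_NET_PKT_WEIGHT, VHOST_NET_WEIGHT);	/* net: both limits */
    vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ,
    	       VHOST_SCSI_WEIGHT, 0);				/* scsi: packets only */
    vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
    	       VHOST_VSOCK_PKT_WEIGHT, VHOST_VSOCK_WEIGHT);	/* vsock: both */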
@@ -164,9 +164,13 @@ struct vhost_dev {
 	struct list_head read_list;
 	struct list_head pending_list;
 	wait_queue_head_t wait;
+	int weight;
+	int byte_weight;
 };

-void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs);
+bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len);
+void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs,
+		    int nvqs, int weight, int byte_weight);
 long vhost_dev_set_owner(struct vhost_dev *dev);
 bool vhost_dev_has_owner(struct vhost_dev *dev);
 long vhost_dev_check_owner(struct vhost_dev *);
@@ -21,6 +21,14 @@
 #include "vhost.h"

 #define VHOST_VSOCK_DEFAULT_HOST_CID	2
+/* Max number of bytes transferred before requeueing the job.
+ * Using this limit prevents one virtqueue from starving others. */
+#define VHOST_VSOCK_WEIGHT 0x80000
+/* Max number of packets transferred before requeueing the job.
+ * Using this limit prevents one virtqueue from starving others with
+ * small pkts.
+ */
+#define VHOST_VSOCK_PKT_WEIGHT 256

 enum {
 	VHOST_VSOCK_FEATURES = VHOST_FEATURES,

@@ -529,7 +537,9 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file)
 	vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick;
 	vsock->vqs[VSOCK_VQ_RX].handle_kick = vhost_vsock_handle_rx_kick;

-	vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs));
+	vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs),
+		       VHOST_VSOCK_PKT_WEIGHT,
+		       VHOST_VSOCK_WEIGHT);

 	file->private_data = vsock;
 	spin_lock_init(&vsock->send_pkt_list_lock);
@@ -115,13 +115,12 @@ static int pm_ctrl_write(struct pci_dev *dev, int offset, u16 new_value,
 {
 	int err;
 	u16 old_value;
-	pci_power_t new_state, old_state;
+	pci_power_t new_state;

 	err = pci_read_config_word(dev, offset, &old_value);
 	if (err)
 		goto out;

-	old_state = (pci_power_t)(old_value & PCI_PM_CTRL_STATE_MASK);
 	new_state = (pci_power_t)(new_value & PCI_PM_CTRL_STATE_MASK);

 	new_value &= PM_OK_BITS;
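This xen-pciback hunk and the ocfs2 last_hash hunk further down are the same class of cleanup: a value is computed and stored but never read afterwards, so gcc's -Wunused-but-set-variable fires and both the variable and the dead store can simply be deleted. In miniature (masks and names are illustrative, not the PCI PM register layout):

    int pm_state_bits(unsigned short value)
    {
    	/* int old_state = value & 3;  <-- set but never read; deleted */
    	return (value >> 2) & 3;	/* keep only the computation that's used */
    }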
@@ -23,7 +23,7 @@ obj-$(CONFIG_PROC_FS) += proc_namespace.o

 obj-y				+= notify/
 obj-$(CONFIG_EPOLL)		+= eventpoll.o
-obj-$(CONFIG_ANON_INODES)	+= anon_inodes.o
+obj-y				+= anon_inodes.o
 obj-$(CONFIG_SIGNALFD)		+= signalfd.o
 obj-$(CONFIG_TIMERFD)		+= timerfd.o
 obj-$(CONFIG_EVENTFD)		+= eventfd.o
@@ -168,7 +168,7 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
 	if (tcon == NULL)
 		return 0;

-	if (smb2_command == SMB2_TREE_CONNECT)
+	if (smb2_command == SMB2_TREE_CONNECT || smb2_command == SMB2_IOCTL)
 		return 0;

 	if (tcon->tidStatus == CifsExiting) {

@@ -660,7 +660,12 @@ SMB2_sess_alloc_buffer(struct SMB2_sess_data *sess_data)
 	else
 		req->SecurityMode = 0;

+#ifdef CONFIG_CIFS_DFS_UPCALL
+	req->Capabilities = cpu_to_le32(SMB2_GLOBAL_CAP_DFS);
+#else
 	req->Capabilities = 0;
+#endif /* DFS_UPCALL */

 	req->Channel = 0; /* MBZ */

 	sess_data->iov[0].iov_base = (char *)req;
@@ -38,6 +38,8 @@ EXPORT_TRACEPOINT_SYMBOL(android_fs_datawrite_start);
 EXPORT_TRACEPOINT_SYMBOL(android_fs_datawrite_end);
 EXPORT_TRACEPOINT_SYMBOL(android_fs_dataread_start);
 EXPORT_TRACEPOINT_SYMBOL(android_fs_dataread_end);
+EXPORT_TRACEPOINT_SYMBOL(android_fs_fsync_start);
+EXPORT_TRACEPOINT_SYMBOL(android_fs_fsync_end);

 /*
  * I/O completion handler for multipage BIOs.
@@ -1,7 +1,6 @@
 config FANOTIFY
 	bool "Filesystem wide access notification"
 	select FSNOTIFY
-	select ANON_INODES
 	default n
 	---help---
 	   Say Y here to enable fanotify support. fanotify is a file access
@@ -1,6 +1,5 @@
 config INOTIFY_USER
 	bool "Inotify support for userspace"
-	select ANON_INODES
 	select FSNOTIFY
 	default y
 	---help---
@@ -3832,7 +3832,6 @@ static int ocfs2_xattr_bucket_find(struct inode *inode,
 	u16 blk_per_bucket = ocfs2_blocks_per_xattr_bucket(inode->i_sb);
 	int low_bucket = 0, bucket, high_bucket;
 	struct ocfs2_xattr_bucket *search;
-	u32 last_hash;
 	u64 blkno, lower_blkno = 0;

 	search = ocfs2_xattr_bucket_new(inode);

@@ -3876,8 +3875,6 @@ static int ocfs2_xattr_bucket_find(struct inode *inode,
 		if (xh->xh_count)
 			xe = &xh->xh_entries[le16_to_cpu(xh->xh_count) - 1];

-		last_hash = le32_to_cpu(xe->xe_name_hash);
-
 		/* record lower_blkno which may be the insert place. */
 		lower_blkno = blkno;
@@ -3254,6 +3254,15 @@ static const struct file_operations proc_tgid_base_operations = {
 	.llseek		= generic_file_llseek,
 };

+struct pid *tgid_pidfd_to_pid(const struct file *file)
+{
+	if (!d_is_dir(file->f_path.dentry) ||
+	    (file->f_op != &proc_tgid_base_operations))
+		return ERR_PTR(-EBADF);
+
+	return proc_pid(file_inode(file));
+}
+
 static struct dentry *proc_tgid_base_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
 {
 	return proc_pident_lookup(dir, dentry,
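The fs/proc/base.c hunk adds tgid_pidfd_to_pid(), a helper that validates that an open file really refers to a /proc/<tgid> directory and, if so, returns its struct pid using the helper's ERR_PTR convention. A hedged sketch of how a pidfd-style caller might use it; the caller name and flow are invented for illustration:

    #include <linux/file.h>		/* fdget(), fdput() */
    #include <linux/err.h>		/* IS_ERR(), PTR_ERR() */

    static int act_on_pidfd(int fd)
    {
    	struct fd f = fdget(fd);
    	struct pid *pid;
    	int ret = 0;

    	if (!f.file)
    		return -EBADF;

    	pid = tgid_pidfd_to_pid(f.file);
    	if (IS_ERR(pid))
    		ret = PTR_ERR(pid);
    	/* ... otherwise use 'pid' while the file reference pins it ... */

    	fdput(f);
    	return ret;
    }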
Some files were not shown because too many files have changed in this diff.