Compare commits

119 Commits

Linus Torvalds
6f7da29041 Linux 4.12 2017-07-02 16:07:02 -07:00
Sylvain 'ythier' Hitier
401e000ab9 moduleparam: fix doc: hwparam_irq configures an IRQ
Signed-off-by: Sylvain 'ythier' Hitier <sylvain.hitier@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-07-02 15:37:23 -07:00
Linus Torvalds
79c4968169 Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS fixes from Ralf Baechle:
 "Here's a final round of fixes for 4.12:

   - Fix misordered instructions in assembly code making kernel startup
     via UHI unreliable.

   - Fix special case of MADDF and MSUBF emulation.

   - Fix alignment issue in address calculation in pm-cps on 64 bit.

   - Fix IRQ tracing & lockdep when rescheduling.

   - Systems with MAARs require post-DMA cache flushes.

  The reordering fix and the MADDF/MSUBF fix have sat in linux-next for
  a number of days. The others haven't propagated from my pull tree to
  linux-next yet but all have survived manual testing and Imagination's
  automated test system and there are no pending bug reports"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: Avoid accidental raw backtrace
  MIPS: Perform post-DMA cache flushes on systems with MAARs
  MIPS: Fix IRQ tracing & lockdep when rescheduling
  MIPS: pm-cps: Drop manual cache-line alignment of ready_count
  MIPS: math-emu: Handle zero accumulator case in MADDF and MSUBF separately
  MIPS: head: Reorder instructions missing a delay slot
2017-07-02 11:53:44 -07:00
Linus Torvalds
3a61a54cd7 Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fix from Russell King:
 "One final fix for 4.12 - Doug found a boot failure case triggered by
  requesting a non-even MB vmalloc size"

* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8685/1: ensure memblock-limit is pmd-aligned
2017-07-02 10:09:40 -07:00
Linus Torvalds
e18aca0236 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
 "Fixlets for x86:

   - Prevent kexec crash when KASLR is enabled, which was caused by an
     address calculation bug

   - Restore the freeing of PUDs on memory hot remove

   - Correct a negated pointer check in the intel uncore performance
     monitoring driver

   - Plug a memory leak in an error exit path in the RDT code"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/intel_rdt: Fix memory leak on mount failure
  x86/boot/KASLR: Fix kexec crash due to 'virt_addr' calculation bug
  x86/boot/KASLR: Add checking for the offset of kernel virtual address randomization
  perf/x86/intel/uncore: Fix wrong box pointer check
  x86/mm/hotplug: Fix BUG_ON() after hot-remove by not freeing PUD
2017-07-01 09:10:17 -07:00
Linus Torvalds
a527bf6140 Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fix from Thomas Gleixner:
 "The last fix for perf for this cycles:

   - Prevent a segfault when kernel.kptr_restrict=2 is set by avoiding a
     null pointer dereference"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf machine: Fix segfault for kernel.kptr_restrict=2
2017-07-01 08:46:52 -07:00
Linus Torvalds
46589d7ab7 Merge tag 'pinctrl-v4.12-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pinctrl fix from Linus Walleij:
 "Brian noticed that this regression has not got a proper fix for the
  entire merge window and consequently we need to revert the offending
  commit.

  It's part of the RT-mainstream work; the dance goes like this: two
  steps forward, one step back.

  Summary:

   - A last fix for v4.12: an IRQ problem reported early in the merge
     window appears not to have been properly fixed, so the offending
     commit will be reverted and we will find the proper fix for v4.13.
     Hopefully"

* tag 'pinctrl-v4.12-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
  Revert "pinctrl: rockchip: avoid hardirq-unsafe functions in irq_chip"
2017-07-01 08:39:13 -07:00
Linus Torvalds
fc93274ab5 Merge tag 'gpio-v4.12-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio
Pull last-minute fixes for GPIO from Linus Walleij:

 - Fix another ACPI problem with broken BIOSes.

 - Filter out the right GPIO events, making a very user-visible bug go
   away.

* tag 'gpio-v4.12-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpio: acpi: Skip _AEI entries without a handler rather than aborting the scan
  gpiolib: fix filtering out unwanted events
2017-07-01 08:24:54 -07:00
Linus Torvalds
c0a0c7a4e1 Merge tag 'trace-v4.12-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull last-minute tracing fixes from Steven Rostedt:
 "Two fixes:

  One fixes a crash when writing the :mod: trace probe command into
  stack_trace_filter. This bug was introduced during the last merge
  window.

  The other was there forever. It's a small bug that makes it impossible
  to name a module function for kprobes when the module starts with a
  digit"

* tag 'trace-v4.12-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  tracing/kprobes: Allow to create probe with a module name starting with a digit
  ftrace: Fix regression with module command in stack_trace_filter
2017-06-30 17:18:57 -07:00
Zack Weinberg
fbd576295d uapi/linux/a.out.h: don't use deprecated system-specific predefines.
uapi/linux/a.out.h uses a number of predefined macros that are
deprecated because they're in the application namespace
(e.g. '#ifdef linux' instead of '#ifdef __linux__').
This patch either corrects or just removes them if they are not
applicable to Linux.
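
As an illustration, the class of change involved looks like this (a
sketch with a made-up macro, not the actual header diff):

    /* Deprecated: bare predefine in the application namespace. */
    #ifdef linux
    # define EXAMPLE_IS_LINUX 1
    #endif

    /* Fixed: the reserved-namespace form of the same predefine. */
    #ifdef __linux__
    # define EXAMPLE_IS_LINUX 1
    #endif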

The primary reason this is worth bothering to fix, considering how
obsolete a.out binary support is, is that the GCC build process
considers this such a severe error that it will copy the header into a
private directory and change the macro names, which causes future
updates to the header to be masked.  This header probably doesn't get
updated very often anymore, but it is the _only_ uapi header that gets
this treatment, so IMHO it is worth patching just to drive that number
all the way to zero.

Signed-off-by: Zack Weinberg <zackw@panix.com>
[hch: removed dead conditionals]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-30 13:53:07 -07:00
Jakub Kicinski
dbd1877754 hashtable: remove repeated phrase from a comment
"in a rcu enabled hashtable" is repeated twice in a comment.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-30 13:49:53 -07:00
Vikas Shivappa
79298acc4b x86/intel_rdt: Fix memory leak on mount failure
If mount fails, the kn_info directory is not freed, causing a memory
leak.

Add the missing error handling path.
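
A sketch of the shape of the fix (labels and helper names assumed, not
the exact resctrl code): unwind kn_info on the mount error path.

    ret = parse_rdtgroupfs_options(data);
    if (ret)
            goto out_info;          /* previously returned without cleanup */
    ...
    return dentry;

out_info:
    kernfs_remove(kn_info);         /* the previously missing free */
    return ERR_PTR(ret);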

Fixes: 4e978d06de ("x86/intel_rdt: Add "info" files to resctrl file system")
Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: ravi.v.shankar@intel.com
Cc: tony.luck@intel.com
Cc: fenghua.yu@intel.com
Cc: peterz@infradead.org
Cc: vikas.shivappa@intel.com
Cc: andi.kleen@intel.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1498503368-20173-3-git-send-email-vikas.shivappa@linux.intel.com
2017-06-30 21:20:00 +02:00
Linus Torvalds
b4df2e3537 Merge tag 'powerpc-4.12-8' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
 "Hopefully the last two powerpc fixes for 4.12.

  The CXL one is larger than I'd usually send at rc7, but it fixes new
  code this cycle, so better to have it working for the release. It was
  actually sent a few weeks back but got blocked in testing behind
  another fix that was causing issues.

  We are still tracking one crash in v4.12-rc7, but only one person has
  reproduced it and the commit identified by bisect doesn't touch any of
  the relevant code, so I think it's 50/50 whether that commit is
  actually the problem or it's some code layout / toolchain issue.

  Two fixes for code we merged this cycle:

   - cxl: Fixes for Coherent Accelerator Interface Architecture 2.0

   - Avoid miscompilation w/GCC 4.6.3 on 32-bit - don't inline
     copy_to/from_user()

  Thanks to Al Viro, Larry Finger, Christophe Lombard"

* tag 'powerpc-4.12-8' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/32: Avoid miscompilation w/GCC 4.6.3 - don't inline copy_to/from_user()
  cxl: Fixes for Coherent Accelerator Interface Architecture 2.0
2017-06-30 10:55:34 -07:00
Linus Torvalds
27ab862a3a Merge tag 'iommu-fixes-v4.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU fixes from Joerg Roedel:
 "Two fixes:

   - A fix for AMD IOMMU interrupt remapping code when IRQs are
     forwarded directly to KVM guests

   - Fixed check in the recently merged code to allow tboot with
     Intel VT-d disabled"

* tag 'iommu-fixes-v4.12-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu/amd: Fix interrupt remapping when disable guest_mode
  iommu/vt-d: Correctly disable Intel IOMMU force on
2017-06-30 10:37:48 -07:00
Linus Torvalds
4adc6b9382 Merge tag 'sound-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
 "Two last-minute HD-audio fixes"

* tag 'sound-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
  ALSA: hda - Fix endless loop of codec configure
  ALSA: hda - set input_path bitmap to zero after moving it to new place
2017-06-30 10:30:26 -07:00
Linus Torvalds
86c3e00afd Merge branch 'overlayfs-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs
Pull overlayfs fixes from Miklos Szeredi:
 "Fix two bugs in copy-up code. One introduced in 4.11 and one in
  4.12-rc"

* 'overlayfs-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs:
  ovl: don't set origin on broken lower hardlink
  ovl: copy-up: don't unlock between lookup and link
2017-06-30 10:22:59 -07:00
Baoquan He
8eabf42ae5 x86/boot/KASLR: Fix kexec crash due to 'virt_addr' calculation bug
Kernel text KASLR is separated into physical address and virtual
address randomization. For virtual address randomization, we only
randomize to get an offset between 16M and KERNEL_IMAGE_SIZE.
So the initial value of 'virt_addr' should be LOAD_PHYSICAL_ADDR,
not the original kernel loading address 'output'.

The bug causes a kernel boot failure if the kernel is loaded at a
position other than the compile-time default of 16M. Kexec/kdump is
such a practical case.

To fix it, just assign LOAD_PHYSICAL_ADDR to virt_addr as initial
value.
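
The essence of the fix, per the description above (a sketch; the
surrounding boot-stub code is omitted):

    /* Before: seeded with the physical load address, wrong under kexec. */
    unsigned long virt_addr = (unsigned long)output;

    /* After: seeded with the link-time default load offset (16M). */
    unsigned long virt_addr = LOAD_PHYSICAL_ADDR;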

Tested-by: Dave Young <dyoung@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 8391c73 ("x86/KASLR: Randomize virtual address separately")
Link: http://lkml.kernel.org/r/1498567146-11990-3-git-send-email-bhe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-06-30 08:53:14 +02:00
Baoquan He
b892cb873c x86/boot/KASLR: Add checking for the offset of kernel virtual address randomization
For kernel text KASLR, the virtual address is confined to a 1G area,
[0xffffffff80000000, 0xffffffffc0000000). For the implementation of
virtual address randomization, we only randomize to get an offset
between 16M and 1G, then add this offset to the starting address,
0xffffffff80000000. Here 16M is the offset which is decided at link
time. So the local variable 'virt_addr', which represents the offset,
plus the kernel output size cannot exceed KERNEL_IMAGE_SIZE.

Add a debug check for the offset. If it is out of bounds, print an
error message and hang.
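
A minimal sketch of such a check (identifier names assumed, not the
exact boot-stub code):

    /* error() in the decompressor prints the message and hangs. */
    if (virt_addr + kernel_total_size > KERNEL_IMAGE_SIZE)
            error("Randomized virtual address exceeds KERNEL_IMAGE_SIZE");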

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1498567146-11990-2-git-send-email-bhe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-06-30 08:53:14 +02:00
Sabrina Dubroca
9e52b32567 tracing/kprobes: Allow to create probe with a module name starting with a digit
Always try to parse an address, since kstrtoul() will safely fail when
given a symbol as input. If that fails (which will be the case for a
symbol), try to parse a symbol instead.

This allows creating a probe such as:

    p:probe/vlan_gro_receive 8021q:vlan_gro_receive+0

Which is necessary for this command to work:

    perf probe -m 8021q -a vlan_gro_receive
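
A sketch of the parse ordering described above (helper names
illustrative, not the exact trace_kprobe code):

    /* Try a plain address first; kstrtoul() fails safely on a symbol. */
    ret = kstrtoul(arg, 0, &addr);
    if (ret) {
            /* Not a number: parse as [module:]symbol+offset instead. */
            ret = split_symbol_offset(arg, &symbol, &offset);
    }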

Link: http://lkml.kernel.org/r/fd72d666f45b114e2c5b9cf7e27b91de1ec966f1.1498122881.git.sd@queasysnail.net

Cc: stable@vger.kernel.org
Fixes: 413d37d1e ("tracing: Add kprobe-based event tracer")
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2017-06-29 23:13:23 -04:00
James Hogan
8542363633 MIPS: Avoid accidental raw backtrace
Since commit 81a76d7119 ("MIPS: Avoid using unwind_stack() with
usermode") show_backtrace() invokes the raw backtracer when
cp0_status & ST0_KSU indicates user mode to fix issues on EVA kernels
where user and kernel address spaces overlap.

However, this is used by show_stack(), which creates its own pt_regs on
the stack and leaves cp0_status uninitialised in most of the code paths.
This results in non-deterministic use of the raw backtracer, depending
on the previous stack content.

show_stack() deals exclusively with kernel mode stacks anyway, so
explicitly initialise regs.cp0_status to KSU_KERNEL (i.e. 0) to ensure
we get a useful backtrace.
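
The fix boils down to one initialisation in show_stack() (a sketch of
the idea):

    void show_stack(struct task_struct *task, unsigned long *sp)
    {
            struct pt_regs regs;

            /* Only kernel-mode stacks are handled here, so say so
             * instead of leaving stack garbage in cp0_status. */
            regs.cp0_status = KSU_KERNEL;
            ...
    }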

Fixes: 81a76d7119 ("MIPS: Avoid using unwind_stack() with usermode")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/16656/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:42:15 +02:00
Paul Burton
cad482c1b1 MIPS: Perform post-DMA cache flushes on systems with MAARs
Recent CPUs from Imagination Technologies such as the I6400 or P6600 are
able to speculatively fetch data from memory into caches. This means
that if used in a system with non-coherent DMA they require that caches
be invalidated after a device performs DMA, and before the CPU reads the
DMA'd data, in order to ensure that stale values weren't speculatively
prefetched.

Such CPUs also introduced Memory Accessibility Attribute Registers
(MAARs) in order to control the regions in which they are allowed to
speculate. Thus we can use the presence of MAARs as a good indication
that the CPU requires the above cache maintenance. Use the presence of
MAARs to determine the result of cpu_needs_post_dma_flush() in the
default case, in order to handle these recent CPUs correctly.
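
A sketch of the resulting logic (simplified; the exact CPU case list
is illustrative):

    static inline bool cpu_needs_post_dma_flush(struct device *dev)
    {
            switch (boot_cpu_type()) {
            case CPU_R10000:
            case CPU_R12000:
            case CPU_BMIPS5000:
                    return true;
            default:
                    /* MAARs imply the CPU may speculatively prefetch. */
                    return cpu_has_maar;
            }
    }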

Note that the return type of cpu_needs_post_dma_flush() is changed to
bool, such that it's clearer what's happening when cpu_has_maar is cast
to bool for the return value. If this patch were backported to a
pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int
we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is
currently 1ull<<30, so when truncated to an int it gives a non-zero value
anyway, but even so the implicit conversion from long long int to bool
makes it clearer to understand what will happen than the implicit
conversion from long long int to int would. The bool return type also
fits this usage better semantically, so seems like an all-round win.

Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting
the return type change.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Ed Blake <ed.blake@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:40:46 +02:00
Paul Burton
d8550860d9 MIPS: Fix IRQ tracing & lockdep when rescheduling
When the scheduler sets TIF_NEED_RESCHED & we call into the scheduler
from arch/mips/kernel/entry.S we disable interrupts. This is true
regardless of whether we reach work_resched from syscall_exit_work,
resume_userspace or by looping after calling schedule(). Although we
disable interrupts in these paths we don't call trace_hardirqs_off()
before calling into C code which may acquire locks, and we therefore
leave lockdep with an inconsistent view of whether interrupts are
disabled or not when CONFIG_PROVE_LOCKING & CONFIG_DEBUG_LOCKDEP are
both enabled.

Without tracing this interrupt state lockdep will print warnings such
as the following once a task returns from a syscall via
syscall_exit_partial with TIF_NEED_RESCHED set:

[   49.927678] ------------[ cut here ]------------
[   49.934445] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:3687 check_flags.part.41+0x1dc/0x1e8
[   49.946031] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
[   49.946355] CPU: 0 PID: 1 Comm: init Not tainted 4.10.0-00439-gc9fd5d362289-dirty #197
[   49.963505] Stack : 0000000000000000 ffffffff81bb5d6a 0000000000000006 ffffffff801ce9c4
[   49.974431]         0000000000000000 0000000000000000 0000000000000000 000000000000004a
[   49.985300]         ffffffff80b7e487 ffffffff80a24498 a8000000ff160000 ffffffff80ede8b8
[   49.996194]         0000000000000001 0000000000000000 0000000000000000 0000000077c8030c
[   50.007063]         000000007fd8a510 ffffffff801cd45c 0000000000000000 a8000000ff127c88
[   50.017945]         0000000000000000 ffffffff801cf928 0000000000000001 ffffffff80a24498
[   50.028827]         0000000000000000 0000000000000001 0000000000000000 0000000000000000
[   50.039688]         0000000000000000 a8000000ff127bd0 0000000000000000 ffffffff805509bc
[   50.050575]         00000000140084e0 0000000000000000 0000000000000000 0000000000040a00
[   50.061448]         0000000000000000 ffffffff8010e1b0 0000000000000000 ffffffff805509bc
[   50.072327]         ...
[   50.076087] Call Trace:
[   50.079869] [<ffffffff8010e1b0>] show_stack+0x80/0xa8
[   50.086577] [<ffffffff805509bc>] dump_stack+0x10c/0x190
[   50.093498] [<ffffffff8015dde0>] __warn+0xf0/0x108
[   50.099889] [<ffffffff8015de34>] warn_slowpath_fmt+0x3c/0x48
[   50.107241] [<ffffffff801c15b4>] check_flags.part.41+0x1dc/0x1e8
[   50.114961] [<ffffffff801c239c>] lock_is_held_type+0x8c/0xb0
[   50.122291] [<ffffffff809461b8>] __schedule+0x8c0/0x10f8
[   50.129221] [<ffffffff80946a60>] schedule+0x30/0x98
[   50.135659] [<ffffffff80106278>] work_resched+0x8/0x34
[   50.142397] ---[ end trace 0cb4f6ef5b99fe21 ]---
[   50.148405] possible reason: unannotated irqs-off.
[   50.154600] irq event stamp: 400463
[   50.159566] hardirqs last  enabled at (400463): [<ffffffff8094edc8>] _raw_spin_unlock_irqrestore+0x40/0xa8
[   50.171981] hardirqs last disabled at (400462): [<ffffffff8094eb98>] _raw_spin_lock_irqsave+0x30/0xb0
[   50.183897] softirqs last  enabled at (400450): [<ffffffff8016580c>] __do_softirq+0x4ac/0x6a8
[   50.195015] softirqs last disabled at (400425): [<ffffffff80165e78>] irq_exit+0x110/0x128

Fix this by using the TRACE_IRQS_OFF macro to call trace_hardirqs_off()
when CONFIG_TRACE_IRQFLAGS is enabled. This is done before invoking
schedule() following the work_resched label because:

 1) Interrupts are disabled regardless of the path we take to reach
    work_resched() & schedule().

 2) Performing the tracing here avoids the need to do it in paths which
    disable interrupts but don't call out to C code before hitting a
    path which uses the RESTORE_SOME macro that will call
    trace_hardirqs_on() or trace_hardirqs_off() as appropriate.

We call trace_hardirqs_on() using the TRACE_IRQS_ON macro before calling
syscall_trace_leave() for similar reasons, ensuring that lockdep has a
consistent view of state after we re-enable interrupts.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 1da177e4c3 ("Linux-2.6.12-rc2")
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/15385/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:40:18 +02:00
Paul Burton
161c51ccb7 MIPS: pm-cps: Drop manual cache-line alignment of ready_count
We allocate memory for a ready_count variable per-CPU, which is accessed
via a cached non-coherent TLB mapping to perform synchronisation between
threads within the core using LL/SC instructions. In order to ensure
that the variable is contained within its own data cache line we
allocate 2 lines worth of memory & align the resulting pointer to a line
boundary. This is however unnecessary, since kmalloc is guaranteed to
return memory which is at least cache-line aligned (see
ARCH_DMA_MINALIGN). Stop the redundant manual alignment.
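
A sketch of the simplification (variable names from the message):

    /* Before: over-allocate and align to a data cache line by hand. */
    ready_count = kmalloc(dlinesz * 2, GFP_KERNEL);
    ready_count = (void *)ALIGN((unsigned long)ready_count, dlinesz);

    /* After: kmalloc() already guarantees ARCH_DMA_MINALIGN alignment. */
    ready_count = kmalloc(sizeof(*ready_count), GFP_KERNEL);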

Besides cleaning up the code & avoiding needless work, this has the side
effect of avoiding an arithmetic error found by Bryan on 64 bit systems
due to the 32 bit size of the former dlinesz. This led the ready_count
variable to have its upper 32b cleared erroneously for MIPS64 kernels,
causing problems when ready_count was later used on MIPS64 via cpuidle.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 3179d37ee1 ("MIPS: pm-cps: add PM state entry code for CPS systems")
Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v3.16+
Patchwork: https://patchwork.linux-mips.org/patch/15383/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-30 04:38:55 +02:00
Doug Berger
9e25ebfe56 ARM: 8685/1: ensure memblock-limit is pmd-aligned
The pmd containing memblock_limit is cleared by prepare_page_table()
which creates the opportunity for early_alloc() to allocate unmapped
memory if memblock_limit is not pmd-aligned, causing a boot-time hang.

Commit 965278dcb8 ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
attempted to resolve this problem, but there is a path through the
adjust_lowmem_bounds() routine where if all memory regions start and
end on pmd-aligned addresses the memblock_limit will be set to
arm_lowmem_limit.

Since arm_lowmem_limit can be affected by the vmalloc early parameter,
the value of arm_lowmem_limit may not be pmd-aligned. This commit
corrects this oversight such that memblock_limit is always rounded
down to pmd-alignment.
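
The resulting clamp is essentially a one-liner at the end of the
bounds calculation (sketch):

    /* Round down so early_alloc() never hands out unmapped memory. */
    memblock_limit = round_down(memblock_limit, PMD_SIZE);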

Fixes: 965278dcb8 ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-06-29 23:10:12 +01:00
Linus Torvalds
4d8a991d46 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Need to access netdev->num_rx_queues behind an accessor in netvsc
    driver otherwise the build breaks with some configs, from Arnd
    Bergmann.

 2) Add dummy xfrm_dev_event() so that build doesn't fail when
    CONFIG_XFRM_OFFLOAD is not set. From Hangbin Liu.

 3) Don't OOPS when pfkey_msg2xfrm_state() signals an error, from Dan
    Carpenter.

 4) Fix MCDI command size for filter operations in sfc driver, from
    Martin Habets.

 5) Fix UFO segmenting so that we don't calculate incorrect checksums,
    from Michal Kubecek.

 6) When ipv6 datagram connects fail, reset destination address and
    port. From Wei Wang.

 7) TCP disconnect must reset the cached receive DST, from WANG Cong.

 8) Fix sign extension bug on 32-bit in dev_get_stats(), from Eric
    Dumazet.

 9) fman driver has to depend on HAS_DMA, from Madalin Bucur.

10) Fix bpf pointer leak with xadd in verifier, from Daniel Borkmann.

11) Fix negative page counts with GRO, from Michal Kubecek.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (41 commits)
  sfc: fix attempt to translate invalid filter ID
  net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()
  bpf: prevent leaking pointer via xadd on unprivileged
  arcnet: com20020-pci: add missing pdev setup in netdev structure
  arcnet: com20020-pci: fix dev_id calculation
  arcnet: com20020: remove needless base_addr assignment
  Trivial fix to spelling mistake in arc_printk message
  arcnet: change irq handler to lock irqsave
  rocker: move dereference before free
  mlxsw: spectrum_router: Fix NULL pointer dereference
  net: sched: Fix one possible panic when no destroy callback
  virtio-net: serialize tx routine during reset
  net: usb: asix88179_178a: Add support for the Belkin B2B128
  fsl/fman: add dependency on HAS_DMA
  net: prevent sign extension in dev_get_stats()
  tcp: reset sk_rx_dst in tcp_disconnect()
  net: ipv6: reset daddr and dport in sk if connect() fails
  bnx2x: Don't log mc removal needlessly
  bnxt_en: Fix netpoll handling.
  bnxt_en: Add missing logic to handle TPA end error conditions.
  ...
2017-06-29 14:30:07 -07:00
Linus Torvalds
27bc344014 Merge tag 'for-4.12/dm-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:

 - dm thinp fix for a crash that will occur when a metadata device
   failure races with discard passdown to the underlying data device.

 - dm raid fix to not access the superblock's >= 1.9.0 'sectors' member
   unconditionally.

* tag 'for-4.12/dm-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm thin: do not queue freed thin mapping for next stage processing
  dm raid: fix oops on upgrading to extended superblock format
2017-06-29 14:23:02 -07:00
Linus Torvalds
374bf8831a Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
 "Two fixes that should go into this release.

  One is an nvme regression fix from Keith, fixing a missing queue
  freeze if the controller is being reset. This causes the reset to
  hang.

  The other is a fix for a leak of the bio protection info when
  smaller-sized O_DIRECT is used. The full fix should be more involved,
  as we have other problematic paths in the kernel, but since this isn't
  a regression in this series, we'll tackle those for 4.13"

* 'for-linus' of git://git.kernel.dk/linux-block:
  block: provide bio_uninit() for freeing integrity/task associations
  nvme/pci: Fix stuck nvme reset
2017-06-29 14:10:37 -07:00
Edward Cree
d58299a478 sfc: fix attempt to translate invalid filter ID
When filter insertion fails with no rollback, we were trying to convert
 EFX_EF10_FILTER_ID_INVALID to an id to store in 'ids' (which is either
 vlan->uc or vlan->mc).  This would WARN_ON_ONCE and then record a bogus
 filter ID of 0x1fff, neither of which is a good thing.
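
A sketch of the guard (names from the message; the surrounding
insertion loop and helper are assumed):

    /* Only translate and store IDs of filters that really got inserted. */
    if (rc != EFX_EF10_FILTER_ID_INVALID)
            ids[i] = efx_ef10_filter_get_unsafe_id(efx, rc);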

Fixes: 0ccb998bf4 ("sfc: fix filter_id misinterpretation in edge case")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:59:38 -04:00
Michal Kubeček
e44699d2c2 net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()
Recently I started seeing warnings about pages with refcount -1. The
problem was traced to packets being reused after their head was merged into
a GRO packet by skb_gro_receive(). While bisecting the issue pointed to
commit c21b48cc1b ("net: adjust skb->truesize in ___pskb_trim()") and
I have never seen it on a kernel with it reverted, I believe the real
problem appeared earlier when the option to merge head frag in GRO was
implemented.

Handling NAPI_GRO_FREE_STOLEN_HEAD state was only added to GRO_MERGED_FREE
branch of napi_skb_finish() so that if the driver uses napi_gro_frags()
and head is merged (which in my case happens after the skb_condense()
call added by the commit mentioned above), the skb is reused including the
head that has been merged. As a result, we release the page reference
twice and eventually end up with negative page refcount.

To fix the problem, handle NAPI_GRO_FREE_STOLEN_HEAD in napi_frags_finish()
the same way it's done in napi_skb_finish().
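
A sketch of the change in napi_frags_finish(), mirroring what
napi_skb_finish() already did (helper name assumed):

    case GRO_MERGED_FREE:
            if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)
                    napi_skb_free_stolen_head(skb); /* head was merged */
            else
                    napi_reuse_skb(napi, skb);      /* safe to recycle */
            break;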

Fixes: d7e8883cfc ("net: make GRO aware of skb->head_frag")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:54:13 -04:00
Daniel Borkmann
6bdf6abc56 bpf: prevent leaking pointer via xadd on unprivileged
Leaking kernel addresses to unprivileged users is generally disallowed;
for example, the verifier rejects the following:

  0: (b7) r0 = 0
  1: (18) r2 = 0xffff897e82304400
  3: (7b) *(u64 *)(r1 +48) = r2
  R2 leaks addr into ctx

Doing pointer arithmetic on them is also forbidden, so that they
don't turn into unknown value and then get leaked out. However,
there's xadd as a special case, where we don't check the src reg
for being a pointer register, e.g. the following will pass:

  0: (b7) r0 = 0
  1: (7b) *(u64 *)(r1 +48) = r0
  2: (18) r2 = 0xffff897e82304400 ; map
  4: (db) lock *(u64 *)(r1 +48) += r2
  5: (95) exit

We could store the pointer into skb->cb, lose the type context, and
then read it out from there again to eventually leak it out of a map
value. Or, more easily, in a different variant:

   0: (bf) r6 = r1
   1: (7a) *(u64 *)(r10 -8) = 0
   2: (bf) r2 = r10
   3: (07) r2 += -8
   4: (18) r1 = 0x0
   6: (85) call bpf_map_lookup_elem#1
   7: (15) if r0 == 0x0 goto pc+3
   R0=map_value(ks=8,vs=8,id=0),min_value=0,max_value=0 R6=ctx R10=fp
   8: (b7) r3 = 0
   9: (7b) *(u64 *)(r0 +0) = r3
  10: (db) lock *(u64 *)(r0 +0) += r6
  11: (b7) r0 = 0
  12: (95) exit

  from 7 to 11: R0=inv,min_value=0,max_value=0 R6=ctx R10=fp
  11: (b7) r0 = 0
  12: (95) exit

Prevent this by checking xadd src reg for pointer types. Also
add a couple of test cases related to this.
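
The essence of the verifier check (a sketch, simplified from the
description above):

    /* In the BPF_XADD case: the source register must hold a scalar,
     * not a pointer, or unprivileged programs could leak addresses. */
    if (is_pointer_value(env, insn->src_reg)) {
            verbose("R%d leaks addr into mem\n", insn->src_reg);
            return -EACCES;
    }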

Fixes: 1be7f75d16 ("bpf: enable non-root eBPF programs")
Fixes: 17a5267067 ("bpf: verifier (add verifier core)")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:44:34 -04:00
Kan Liang
80c65fdb4c perf/x86/intel/uncore: Fix wrong box pointer check
We should not init a NULL box; it will cause a system crash. The issue
looks like it was caused by a typo.

This was not noticed because there is no NULL box in practice. Also,
most boxes are enabled by default, and the init code is not critical.

Fixes: fff4b87e59 ("perf/x86/intel/uncore: Make package handling more robust")
Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20170629190926.2456-1-kan.liang@intel.com
2017-06-29 21:28:13 +02:00
David S. Miller
00778f7cad Merge branch 'arcnet-fixes'
Michael Grzeschik says:

====================
arcnet: Collection of latest fixes

Here we sum up the recent fixes I collected on the way to using and
stabilising the framework. Part of it is a possible deadlock that we
prevent, as well as a fix for the calculation of the dev_id, which can
be set up by a rotary encoder. Besides that, we added a trivial
spelling patch and fixed some wrong and missing assignments, which
improves the code footprint.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:18:38 -04:00
Michael Grzeschik
2a0ea04c83 arcnet: com20020-pci: add missing pdev setup in netdev structure
We add the pdev data to the PCI device's netdev structure. This way
the interface gets consistent device names in userspace (udev).

Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:18:37 -04:00
Michael Grzeschik
cb108619f2 arcnet: com20020-pci: fix dev_id calculation
The dev_id was miscalculated. Only the two bits 4-5 are relevant for the
MA1 card. PCIARC1 and PCIFB2 use the four bits 4-7 for id selection.
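
Illustratively, the difference is only in how many bits get masked
(a sketch with made-up names, not the driver's actual register read):

    /* MA1: id in bits 4-5; PCIARC1/PCIFB2: id in bits 4-7. */
    dev_id = (misc >> 4) & (is_ma1_card ? 0x3 : 0xf);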

Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:18:36 -04:00
Michael Grzeschik
0d494fcf86 arcnet: com20020: remove needless base_addr assignment
The assignment is superfluous.

Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:18:36 -04:00
Colin Ian King
06908d7aee Trivial fix to spelling mistake in arc_printk message
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:18:36 -04:00
Michael Grzeschik
5b85840320 arcnet: change irq handler to lock irqsave
This patch prevents the following deadlock in the arcnet driver.

[   41.273910] ======================================================
[   41.280397] [ INFO: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected ]
[   41.287433] 4.4.0-00034-gc0ae784 #536 Not tainted
[   41.292366] ------------------------------------------------------
[   41.298863] arcecho/233 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
[   41.305628]  (&(&lp->lock)->rlock){+.+...}, at: [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet]
[   41.315199]
[   41.315199] and this task is already holding:
[   41.321324]  (_xmit_ARCNET#2){+.-...}, at: [<c06b934c>] packet_direct_xmit+0xfc/0x1c8
[   41.329593] which would create a new lock dependency:
[   41.334893]  (_xmit_ARCNET#2){+.-...} -> (&(&lp->lock)->rlock){+.+...}
[   41.341801]
[   41.341801] but this new dependency connects a SOFTIRQ-irq-safe lock:
[   41.350108]  (_xmit_ARCNET#2){+.-...}
... which became SOFTIRQ-irq-safe at:
[   41.357539]   [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.362677]   [<c063ab8c>] dev_watchdog+0x5c/0x264
[   41.367723]   [<c0094edc>] call_timer_fn+0x6c/0xf4
[   41.372759]   [<c00950b8>] run_timer_softirq+0x154/0x210
[   41.378340]   [<c0036b30>] __do_softirq+0x144/0x298
[   41.383469]   [<c0036fb4>] irq_exit+0xcc/0x130
[   41.388138]   [<c0085c50>] __handle_domain_irq+0x60/0xb4
[   41.393728]   [<c0014578>] __irq_svc+0x58/0x78
[   41.398402]   [<c0010274>] arch_cpu_idle+0x24/0x3c
[   41.403443]   [<c007127c>] cpu_startup_entry+0x1f8/0x25c
[   41.409029]   [<c09adc90>] start_kernel+0x3c0/0x3cc
[   41.414170]
[   41.414170] to a SOFTIRQ-irq-unsafe lock:
[   41.419931]  (&(&lp->lock)->rlock){+.+...}
... which became SOFTIRQ-irq-unsafe at:
[   41.427996] ...  [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.433409]   [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
[   41.439646]   [<c0089120>] handle_nested_irq+0x8c/0xec
[   41.445063]   [<c03c1170>] regmap_irq_thread+0x190/0x314
[   41.450661]   [<c0087244>] irq_thread_fn+0x1c/0x34
[   41.455700]   [<c0087548>] irq_thread+0x13c/0x1dc
[   41.460649]   [<c0050f10>] kthread+0xe4/0xf8
[   41.465158]   [<c000f810>] ret_from_fork+0x14/0x24
[   41.470207]
[   41.470207] other info that might help us debug this:
[   41.470207]
[   41.478627]  Possible interrupt unsafe locking scenario:
[   41.478627]
[   41.485763]        CPU0                    CPU1
[   41.490521]        ----                    ----
[   41.495279]   lock(&(&lp->lock)->rlock);
[   41.499414]                                local_irq_disable();
[   41.505636]                                lock(_xmit_ARCNET#2);
[   41.511967]                                lock(&(&lp->lock)->rlock);
[   41.518741]   <Interrupt>
[   41.521490]     lock(_xmit_ARCNET#2);
[   41.525356]
[   41.525356]  *** DEADLOCK ***
[   41.525356]
[   41.531587] 1 lock held by arcecho/233:
[   41.535617]  #0:  (_xmit_ARCNET#2){+.-...}, at: [<c06b934c>] packet_direct_xmit+0xfc/0x1c8
[   41.544355]
the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
[   41.552362] -> (_xmit_ARCNET#2){+.-...} ops: 27 {
[   41.557357]    HARDIRQ-ON-W at:
[   41.560664]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.567445]                     [<c063ba28>] dev_deactivate_many+0x114/0x304
[   41.574866]                     [<c063bc3c>] dev_deactivate+0x24/0x38
[   41.581646]                     [<c0630374>] linkwatch_do_dev+0x40/0x74
[   41.588613]                     [<c06305d8>] __linkwatch_run_queue+0xec/0x140
[   41.596120]                     [<c0630658>] linkwatch_event+0x2c/0x34
[   41.602991]                     [<c004af30>] process_one_work+0x188/0x40c
[   41.610131]                     [<c004b200>] worker_thread+0x4c/0x480
[   41.616912]                     [<c0050f10>] kthread+0xe4/0xf8
[   41.623048]                     [<c000f810>] ret_from_fork+0x14/0x24
[   41.629735]    IN-SOFTIRQ-W at:
[   41.633039]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.639820]                     [<c063ab8c>] dev_watchdog+0x5c/0x264
[   41.646508]                     [<c0094edc>] call_timer_fn+0x6c/0xf4
[   41.653190]                     [<c00950b8>] run_timer_softirq+0x154/0x210
[   41.660425]                     [<c0036b30>] __do_softirq+0x144/0x298
[   41.667201]                     [<c0036fb4>] irq_exit+0xcc/0x130
[   41.673518]                     [<c0085c50>] __handle_domain_irq+0x60/0xb4
[   41.680754]                     [<c0014578>] __irq_svc+0x58/0x78
[   41.687077]                     [<c0010274>] arch_cpu_idle+0x24/0x3c
[   41.693769]                     [<c007127c>] cpu_startup_entry+0x1f8/0x25c
[   41.701006]                     [<c09adc90>] start_kernel+0x3c0/0x3cc
[   41.707791]    INITIAL USE at:
[   41.711003]                    [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.717696]                    [<c063ba28>] dev_deactivate_many+0x114/0x304
[   41.725026]                    [<c063bc3c>] dev_deactivate+0x24/0x38
[   41.731718]                    [<c0630374>] linkwatch_do_dev+0x40/0x74
[   41.738593]                    [<c06305d8>] __linkwatch_run_queue+0xec/0x140
[   41.746011]                    [<c0630658>] linkwatch_event+0x2c/0x34
[   41.752789]                    [<c004af30>] process_one_work+0x188/0x40c
[   41.759847]                    [<c004b200>] worker_thread+0x4c/0x480
[   41.766541]                    [<c0050f10>] kthread+0xe4/0xf8
[   41.772596]                    [<c000f810>] ret_from_fork+0x14/0x24
[   41.779198]  }
[   41.780945]  ... key      at: [<c124d620>] netdev_xmit_lock_key+0x38/0x1c8
[   41.788192]  ... acquired at:
[   41.791309]    [<c007bed8>] lock_acquire+0x70/0x90
[   41.796361]    [<c06f9140>] _raw_spin_lock_irqsave+0x40/0x54
[   41.802324]    [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet]
[   41.808844]    [<c06b9380>] packet_direct_xmit+0x130/0x1c8
[   41.814622]    [<c06bc7e4>] packet_sendmsg+0x3b8/0x680
[   41.820034]    [<c05fe8b0>] sock_sendmsg+0x14/0x24
[   41.825091]    [<c05ffd68>] SyS_sendto+0xb8/0xe0
[   41.829956]    [<c05ffda8>] SyS_send+0x18/0x20
[   41.834638]    [<c000f780>] ret_fast_syscall+0x0/0x1c
[   41.839954]
[   41.841514]
the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock:
[   41.850302] -> (&(&lp->lock)->rlock){+.+...} ops: 5 {
[   41.855644]    HARDIRQ-ON-W at:
[   41.858945]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.865726]                     [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
[   41.873607]                     [<c0089120>] handle_nested_irq+0x8c/0xec
[   41.880666]                     [<c03c1170>] regmap_irq_thread+0x190/0x314
[   41.887901]                     [<c0087244>] irq_thread_fn+0x1c/0x34
[   41.894593]                     [<c0087548>] irq_thread+0x13c/0x1dc
[   41.901195]                     [<c0050f10>] kthread+0xe4/0xf8
[   41.907338]                     [<c000f810>] ret_from_fork+0x14/0x24
[   41.914025]    SOFTIRQ-ON-W at:
[   41.917328]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.924106]                     [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
[   41.931981]                     [<c0089120>] handle_nested_irq+0x8c/0xec
[   41.939028]                     [<c03c1170>] regmap_irq_thread+0x190/0x314
[   41.946264]                     [<c0087244>] irq_thread_fn+0x1c/0x34
[   41.952954]                     [<c0087548>] irq_thread+0x13c/0x1dc
[   41.959548]                     [<c0050f10>] kthread+0xe4/0xf8
[   41.965689]                     [<c000f810>] ret_from_fork+0x14/0x24
[   41.972379]    INITIAL USE at:
[   41.975595]                    [<c06f8fc8>] _raw_spin_lock+0x30/0x40
[   41.982283]                    [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
[   41.990063]                    [<c0089120>] handle_nested_irq+0x8c/0xec
[   41.997027]                    [<c03c1170>] regmap_irq_thread+0x190/0x314
[   42.004172]                    [<c0087244>] irq_thread_fn+0x1c/0x34
[   42.010766]                    [<c0087548>] irq_thread+0x13c/0x1dc
[   42.017267]                    [<c0050f10>] kthread+0xe4/0xf8
[   42.023314]                    [<c000f810>] ret_from_fork+0x14/0x24
[   42.029903]  }
[   42.031648]  ... key      at: [<bf0854cc>] __key.42091+0x0/0xfffff0f8 [arcnet]
[   42.039255]  ... acquired at:
[   42.042372]    [<c007bed8>] lock_acquire+0x70/0x90
[   42.047413]    [<c06f9140>] _raw_spin_lock_irqsave+0x40/0x54
[   42.053364]    [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet]
[   42.059872]    [<c06b9380>] packet_direct_xmit+0x130/0x1c8
[   42.065634]    [<c06bc7e4>] packet_sendmsg+0x3b8/0x680
[   42.071030]    [<c05fe8b0>] sock_sendmsg+0x14/0x24
[   42.076069]    [<c05ffd68>] SyS_sendto+0xb8/0xe0
[   42.080926]    [<c05ffda8>] SyS_send+0x18/0x20
[   42.085601]    [<c000f780>] ret_fast_syscall+0x0/0x1c
[   42.090918]
[   42.092481]
[   42.092481] stack backtrace:
[   42.097065] CPU: 0 PID: 233 Comm: arcecho Not tainted 4.4.0-00034-gc0ae784 #536
[   42.104751] Hardware name: Generic AM33XX (Flattened Device Tree)
[   42.111183] [<c0017ec8>] (unwind_backtrace) from [<c00139d0>] (show_stack+0x10/0x14)
[   42.119337] [<c00139d0>] (show_stack) from [<c02a82c4>] (dump_stack+0x8c/0x9c)
[   42.126937] [<c02a82c4>] (dump_stack) from [<c0078260>] (check_usage+0x4bc/0x63c)
[   42.134815] [<c0078260>] (check_usage) from [<c0078438>] (check_irq_usage+0x58/0xb0)
[   42.142964] [<c0078438>] (check_irq_usage) from [<c007aaa0>] (__lock_acquire+0x1524/0x20b0)
[   42.151740] [<c007aaa0>] (__lock_acquire) from [<c007bed8>] (lock_acquire+0x70/0x90)
[   42.159886] [<c007bed8>] (lock_acquire) from [<c06f9140>] (_raw_spin_lock_irqsave+0x40/0x54)
[   42.168768] [<c06f9140>] (_raw_spin_lock_irqsave) from [<bf083bc8>] (arcnet_send_packet+0x60/0x1c0 [arcnet])
[   42.179115] [<bf083bc8>] (arcnet_send_packet [arcnet]) from [<c06b9380>] (packet_direct_xmit+0x130/0x1c8)
[   42.189182] [<c06b9380>] (packet_direct_xmit) from [<c06bc7e4>] (packet_sendmsg+0x3b8/0x680)
[   42.198059] [<c06bc7e4>] (packet_sendmsg) from [<c05fe8b0>] (sock_sendmsg+0x14/0x24)
[   42.206199] [<c05fe8b0>] (sock_sendmsg) from [<c05ffd68>] (SyS_sendto+0xb8/0xe0)
[   42.213978] [<c05ffd68>] (SyS_sendto) from [<c05ffda8>] (SyS_send+0x18/0x20)
[   42.221388] [<c05ffda8>] (SyS_send) from [<c000f780>] (ret_fast_syscall+0x0/0x1c)

Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>

   ---
   v1 -> v2: removed unneeded zero assignment of flags
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 15:18:35 -04:00
Dan Carpenter
acb4b7df48 rocker: move dereference before free
My static checker complains that ofdpa_neigh_del() can sometimes free
"found". It just makes sense to use it before deleting it.

Fixes: ecf244f753 ("rocker: fix maybe-uninitialized warning")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 14:19:24 -04:00
Ido Schimmel
6b27c8adf2 mlxsw: spectrum_router: Fix NULL pointer dereference
In case a VLAN device is enslaved to a bridge we shouldn't create a
router interface (RIF) for it when it's configured with an IP address.
This is already handled by the driver for other types of netdevs, such
as physical ports and LAG devices.

If this IP address is then removed and the interface is subsequently
unlinked from the bridge, a NULL pointer dereference can happen, as the
original 802.1d FID was replaced with an rFID which was then deleted.

To reproduce:
$ ip link set dev enp3s0np9 up
$ ip link add name enp3s0np9.111 link enp3s0np9 type vlan id 111
$ ip link set dev enp3s0np9.111 up
$ ip link add name br0 type bridge
$ ip link set dev br0 up
$ ip link set enp3s0np9.111 master br0
$ ip address add dev enp3s0np9.111 192.168.0.1/24
$ ip address del dev enp3s0np9.111 192.168.0.1/24
$ ip link set dev enp3s0np9.111 nomaster

Fixes: 99724c18fc ("mlxsw: spectrum: Introduce support for router interfaces")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reported-by: Petr Machata <petrm@mellanox.com>
Tested-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 12:59:48 -04:00
Gao Feng
c1a4872ebf net: sched: Fix one possible panic when no destroy callback
When a qdisc fails to init, qdisc_create invokes the destroy callback
to clean up. But there is no check that the callback really exists, so
this causes a panic for qdiscs that define no destroy callback, such as
codel, fq, and so on.

Take codel as an example:
when a malicious user constructs an invalid netlink msg, it causes
codel_init->codel_change->nla_parse_nested to fail. The kernel then
invokes the destroy callback directly, but qdisc codel doesn't define
one, and a panic results.

Now add the check for destroy to avoid the possible panic.
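
The fix is a classic optional-callback guard on the error path of
qdisc_create() (sketch):

    if (ops->destroy)
            ops->destroy(sch);  /* codel, fq etc. have no destroy */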

Fixes: 87b60cfacf ("net_sched: fix error recovery at qdisc creation")
Signed-off-by: Gao Feng <gfree.wind@vip.163.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 12:55:12 -04:00
Jason Wang
713a98d90c virtio-net: serialize tx routine during reset
We don't hold any tx lock when trying to disable TX during reset; this
can lead to a use after free, since ndo_start_xmit() tries to access
the virtqueue which has already been freed. Fix this by using
netif_tx_disable() before freeing the vqs; this makes sure there is no
tx after vq freeing.
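
A sketch of the ordering the fix enforces (function names assumed):

    /* netif_tx_disable() takes the tx queue locks while stopping the
     * queues, so no ndo_start_xmit() can still be running afterwards. */
    netif_tx_disable(vi->dev);

    /* Only now is it safe to tear down the virtqueues. */
    virtnet_del_vqs(vi);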

Reported-by: Jean-Philippe Menil <jpmenil@gmail.com>
Tested-by: Jean-Philippe Menil <jpmenil@gmail.com>
Fixes: f600b69050 ("virtio_net: Add XDP support")
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Robert McCabe <robert.mccabe@rockwellcollins.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-29 12:51:59 -04:00
Steven Rostedt (VMware)
0f17976568 ftrace: Fix regression with module command in stack_trace_filter
When doing the following command:

 # echo ":mod:kvm_intel" > /sys/kernel/tracing/stack_trace_filter

it triggered a crash.

This happened with the cleanup of probes. It required all callers of the
regex function (doing ftrace filtering) to have ops->private be a pointer
to a trace_array. But for the stack tracer, that is not the case.

Allow for the ops->private to be NULL, and change the function command
callbacks to handle the trace_array pointer being NULL as well.

Fixes: d2afd57a4b ("tracing/ftrace: Allow instances to have their own function probes")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2017-06-29 10:05:45 -04:00
Brian Norris
1d80df93d9 Revert "pinctrl: rockchip: avoid hardirq-unsafe functions in irq_chip"
This reverts commit 88bb94216f.

It introduced a new CONFIG_DEBUG_ATOMIC_SLEEP warning in v4.12-rc1:

[ 7226.716713] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:238
[ 7226.716716] in_atomic(): 0, irqs_disabled(): 0, pid: 1708, name: bash
[ 7226.716722] CPU: 1 PID: 1708 Comm: bash Not tainted 4.12.0-rc6+ #1213
[ 7226.716724] Hardware name: Google Kevin (DT)
[ 7226.716726] Call trace:
[ 7226.716738] [<ffffff8008089928>] dump_backtrace+0x0/0x24c
[ 7226.716743] [<ffffff8008089b94>] show_stack+0x20/0x28
[ 7226.716749] [<ffffff8008371370>] dump_stack+0x90/0xb0
[ 7226.716755] [<ffffff80080cd2a0>] ___might_sleep+0x10c/0x124
[ 7226.716760] [<ffffff80080cd330>] __might_sleep+0x78/0x88
[ 7226.716765] [<ffffff800879e210>] mutex_lock+0x2c/0x64
[ 7226.716771] [<ffffff80083ad678>] rockchip_irq_bus_lock+0x30/0x3c
[ 7226.716777] [<ffffff80080f6d40>] __irq_get_desc_lock+0x78/0x98
[ 7226.716782] [<ffffff80080f7e6c>] irq_set_irq_wake+0x44/0x12c
[ 7226.716787] [<ffffff8008486e18>] dev_pm_arm_wake_irq+0x4c/0x58
[ 7226.716792] [<ffffff800848b80c>] device_wakeup_arm_wake_irqs+0x3c/0x58
[ 7226.716796] [<ffffff80084896fc>] dpm_suspend_noirq+0xf8/0x3a0
[ 7226.716800] [<ffffff80080f1384>] suspend_devices_and_enter+0x1a4/0x9a8
[ 7226.716803] [<ffffff80080f21ec>] pm_suspend+0x664/0x6a4
[ 7226.716807] [<ffffff80080f04d8>] state_store+0xd4/0xf8
...

It was reported on -rc1, and it's still not fixed in -rc6, so it should
just be reverted.

Cc: John Keeping <john@metanate.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2017-06-29 15:03:24 +02:00
Hans de Goede
c06632ea05 gpio: acpi: Skip _AEI entries without a handler rather than aborting the scan
acpi_walk_resources() will stop as soon as the callback passed in
returns an error status. On an x86 tablet I have, the first GpioInt in
the _AEI resource list has no handler defined in the DSDT, causing
acpi_walk_resources() to abort scanning the rest of the resource list,
which does define valid ACPI GPIO events.

This commit changes the return for not finding a handler from
AE_BAD_PARAMETER to AE_OK so that the rest of the resource list will
get scanned normally in case of missing event handlers.
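
The change itself is tiny (a sketch of the walk callback's early
return; the handler lookup is illustrative):

    handler = lookup_aei_handler(adev, pin);    /* hypothetical lookup */
    if (!handler)
            return AE_OK;   /* was AE_BAD_PARAMETER: skip this entry,
                             * but let acpi_walk_resources() continue */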

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2017-06-29 14:55:08 +02:00
Bartosz Golaszewski
ad537b8225 gpiolib: fix filtering out unwanted events
GPIOEVENT_REQUEST_BOTH_EDGES is not a single flag, but a binary OR of
GPIOEVENT_REQUEST_RISING_EDGE and GPIOEVENT_REQUEST_FALLING_EDGE.

The expression 'le->eflags & GPIOEVENT_REQUEST_BOTH_EDGES' will
evaluate to true even if only one event type was requested.

Fix it by checking both RISING & FALLING flags explicitly.
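
A sketch of the before/after check:

    /* Before: true whenever either edge bit is set. */
    if (le->eflags & GPIOEVENT_REQUEST_BOTH_EDGES)
            ...

    /* After: only true when both edge bits are actually set. */
    if ((le->eflags & GPIOEVENT_REQUEST_RISING_EDGE) &&
        (le->eflags & GPIOEVENT_REQUEST_FALLING_EDGE))
            ...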

Cc: stable@vger.kernel.org
Fixes: 61f922db72 ("gpio: userspace ABI for reading GPIO line events")
Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2017-06-29 11:33:46 +02:00
Tobias Klauser
6474924e2b arch: remove unused macro/function thread_saved_pc()
The only user of thread_saved_pc() in non-arch-specific code was removed
in commit 8243d55977 ("sched/core: Remove pointless printout in
sched_show_task()").  Remove the implementations as well.

Some architectures use thread_saved_pc() in their arch-specific code.
Leave their thread_saved_pc() intact.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-06-28 16:13:57 -07:00
Jens Axboe
9ae3b3f52c block: provide bio_uninit() for freeing integrity/task associations
Wen reports significant memory leaks with DIF and O_DIRECT:

"With nvme devive + T10 enabled, On a system it has 256GB and started
logging /proc/meminfo & /proc/slabinfo for every minute and in an hour
it increased by 15968128 kB or ~15+GB.. Approximately 256 MB / minute
leaking.

/proc/meminfo | grep SUnreclaim...

SUnreclaim:      6752128 kB
SUnreclaim:      6874880 kB
SUnreclaim:      7238080 kB
....
SUnreclaim:     22307264 kB
SUnreclaim:     22485888 kB
SUnreclaim:     22720256 kB

When testcases with T10 enabled call into __blkdev_direct_IO_simple,
the code doesn't free memory allocated by bio_integrity_alloc. The
patch fixes the issue. HTX has been run for 60+ hours without failure."

Since __blkdev_direct_IO_simple() allocates the bio on the stack, it
doesn't go through the regular bio free. This means that any ancillary
data allocated with the bio through the stack is not freed. Hence, we
can leak the integrity data associated with the bio, if the device is
using DIF/DIX.

Fix this by providing bio_uninit() and exporting it, so that we can use
it to free this data. Note that this is a minimal fix for this issue.
Any current user of bio's that are allocated outside of
bio_alloc_bioset() suffers from this issue, most notably some drivers.
We will fix those in a more comprehensive patch for 4.13. This also
means that the commit marked as being fixed by this isn't the real
culprit, it's just the most obvious one out there.
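
A sketch of the resulting pattern for a stack-allocated bio (call
sites simplified):

    struct bio bio;

    bio_init(&bio, vecs, nr_pages);
    /* ... submit and wait for the I/O ... */

    /* A stack bio never goes through bio_put()/bio_free(), so release
     * the integrity payload and task association explicitly. */
    bio_uninit(&bio);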

Fixes: 542ff7bf18 ("block: new direct I/O implementation")
Reported-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-28 15:30:13 -06:00
Linus Torvalds
e547204f1f Merge tag 'nfs-for-4.12-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client bugfixes from Trond Myklebust:
 "Bugfixes include:

   - stable fix for exclusive create if the server supports the umask
     attribute

   - trunking detection should handle ERESTARTSYS/EINTR

   - stable fix for a race in the LAYOUTGET function

   - stable fix to revert "nfs_rename() handle -ERESTARTSYS dentry left
     behind"

   - nfs4_callback_free_slot() cannot call nfs4_slot_tbl_drain_complete()"

* tag 'nfs-for-4.12-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
  NFSv4.1: nfs4_callback_free_slot() cannot call nfs4_slot_tbl_drain_complete()
  Revert "NFS: nfs_rename() handle -ERESTARTSYS dentry left behind"
  NFSv4.1: Fix a race in nfs4_proc_layoutget
  NFS: Trunking detection should handle ERESTARTSYS/EINTR
  NFSv4.2: Don't send mode again in post-EXCLUSIVE4_1 SETATTR with umask
2017-06-28 13:27:15 -07:00
Linus Torvalds
5a37be4b51 Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
 "This is the final set of fixes for -rc8, just a few i915 and one
  vmwgfx ones.

  I'm off on holidays for a week, so if anything shows up for fixes I've
  asked Daniel or Sean Paul to herd it in the right direction"

[ The additional etnaviv fixes were already herded towards me as seen in
  my previous pull - Linus ]

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
  drm/vmwgfx: Free hash table allocated by cmdbuf managed res mgr
  drm/i915: Disable EXEC_OBJECT_ASYNC when doing relocations
  drm/i915: Hold struct_mutex for per-file stats in debugfs/i915_gem_object
  drm/i915: Retire the VMA's fence tracker before unbinding
2017-06-28 13:22:26 -07:00
Linus Torvalds
cf723497f2 Merge branch 'etnaviv/fixes' of git://git.pengutronix.de/git/lst/linux
Pull drm/etnaviv fixes from Lucas Stach:
 "I realized I just missed the cut-off point for the final drm fixes
  pull, but I have 2 more etnaviv fixes that need to go into 4.12, as
  they fix fallout from the explicit sync work introduced in the last
  merge window"

[ Pulling directly because Dave is on vacation. Noted by Daniel Vetter,
  and acked by Dave Airlie  - Linus ]

* 'etnaviv/fixes' of git://git.pengutronix.de/git/lst/linux:
  drm/etnaviv: Fix implicit/explicit sync sense inversion
  drm/etnaviv: fix submit flags getting overwritten by BO content
2017-06-28 13:13:48 -07:00
Suravee Suthikulpanit
84a21dbdef iommu/amd: Fix interrupt remapping when disable guest_mode
Devices passed through to a VM guest can get updated IRQ affinity
information via irq_set_affinity() when not running in guest mode.
Currently, the AMD IOMMU driver in GA mode ignores the updated
information if the pass-through device is set up to use vAPIC,
regardless of guest_mode. This could cause invalid interrupt remapping.

Also, the guest_mode bit should be set and cleared only when
SVM updates posted-interrupt interrupt remapping information.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Joerg Roedel <jroedel@suse.de>
Fixes: d98de49a53 ('iommu/amd: Enable vAPIC interrupt remapping mode by default')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-06-28 14:44:56 +02:00
Miklos Szeredi
fbaf94ee3c ovl: don't set origin on broken lower hardlink
When copying up a file that has multiple hard links we need to break any
association with the origin file.  This makes copy-up be essentially an
atomic replace.

The new file has nothing to do with the old one (except having the same
data and metadata initially), so don't set the overlay.origin attribute.

We can relax this in the future when we are able to index upper object by
origin.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 3a1e819b4e ("ovl: store file handle of lower inode on copy up")
2017-06-28 13:41:22 +02:00
Miklos Szeredi
e85f82ff9b ovl: copy-up: don't unlock between lookup and link
Nothing prevents mischief on the upper layer while we are busy copying up
the data.

Move the lookup to right before the looked-up dentry is actually used.

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 01ad3eb8a0 ("ovl: concurrent copy up of regular files")
Cc: <stable@vger.kernel.org> # v4.11
2017-06-28 13:41:22 +02:00
Takashi Iwai
d94815f917 ALSA: hda - Fix endless loop of codec configure
azx_codec_configure() loops over the codecs found on the given
controller via a linked list.  The code used to work in the past, but
in the current version, this may lead to an endless loop when a codec
binding returns an error.

The culprit is that snd_hda_codec_configure() unregisters the device
upon error, and this eventually deletes the given codec object from the
bus.  Since the deleted entry is reinitialized via list_del_init(), its
next pointer points back at the entry itself, so the loop never
advances.  This behavior change was introduced when the HD-audio code
was split up, and this spot was forgotten in the adaptation.

For fixing this bug, just use a *_safe() version of list iteration.

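For reference, the safe-iteration pattern looks roughly like this (a
sketch only; the list head and member names are assumed):

  struct hda_codec *codec, *next;

  /* list_for_each_entry() would follow codec->list.next after the entry
   * has been list_del_init()'ed, i.e. it would spin on the entry itself
   * forever.  The _safe variant caches the next entry up front, so
   * unregistering the current codec cannot derail the iteration. */
  list_for_each_entry_safe(codec, next, &bus->codec_list, list)
          snd_hda_codec_configure(codec);
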
Fixes: d068ebc25e ("ALSA: hda - Move some codes up to hdac_bus struct")
Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2017-06-28 12:10:05 +02:00
Daniel Stone
426ef1bb40 drm/etnaviv: Fix implicit/explicit sync sense inversion
We were reading the no-implicit sync flag the wrong way around,
synchronizing too much for the explicit case, and not at all for the
implicit case. Oops.

Signed-off-by: Daniel Stone <daniels@collabora.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
2017-06-28 10:35:53 +02:00
Lucas Stach
f4a4381ba4 drm/etnaviv: fix submit flags getting overwritten by BO content
The addition of the flags member to etnaviv_gem_submit structure didn't
take into account that the last member of this structure is a variable
length array.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
2017-06-28 10:35:46 +02:00
Dave Airlie
9ff1beb1d1 Merge tag 'drm-intel-fixes-2017-06-27' of git://anongit.freedesktop.org/git/drm-intel into drm-fixes
Just a few minor fixes. Important one is the execbuf async fix (aka
ANDROID_native_sync). There was another patch for a display coherency
corner case on APL, but we've random-walked in that space too much,
and the cherry-pick looked really invasive.

* tag 'drm-intel-fixes-2017-06-27' of git://anongit.freedesktop.org/git/drm-intel:
  drm/i915: Disable EXEC_OBJECT_ASYNC when doing relocations
  drm/i915: Hold struct_mutex for per-file stats in debugfs/i915_gem_object
  drm/i915: Retire the VMA's fence tracker before unbinding
2017-06-28 17:07:15 +10:00
Dave Airlie
5193c08c7e Merge branch 'vmwgfx-fixes-4.12' of git://people.freedesktop.org/~thomash/linux into drm-fixes
Single vmwgfx fix
* 'vmwgfx-fixes-4.12' of git://people.freedesktop.org/~thomash/linux:
  drm/vmwgfx: Free hash table allocated by cmdbuf managed res mgr
2017-06-28 17:06:58 +10:00
Hui Wang
a8f20fd25b ALSA: hda - set input_path bitmap to zero after moving it to new place
Recently we met a problem, the codec has valid adcs and input pins,
and they can form valid input paths, but the driver does not build
valid controls for them like "Mic boost", "Capture Volume" and
"Capture Switch".

Through debugging, I found that the driver needs to shrink the invalid
adcs and input paths for this machine, so it moves the whole column
bitmap value to the previous column.  After moving it, the driver
forgets to set the original column bitmap value to zero; as a result, it
invalidates the path whose index value equals the original column bitmap
value.  After executing this function, all valid input paths have been
invalidated by mistake, so there are no valid input paths left and the
driver won't build controls for them.

Fixes: 3a65bcdc57 ("ALSA: hda - Fix inconsistent input_paths after ADC reduction")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2017-06-28 07:09:19 +02:00
Trond Myklebust
2e31b4cb89 NFSv4.1: nfs4_callback_free_slot() cannot call nfs4_slot_tbl_drain_complete()
The current code works only for the case where we have exactly one slot,
which is no longer true.
nfs4_free_slot() will automatically declare the callback channel to be
drained when all slots have been returned.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-06-27 22:26:23 -04:00
Benjamin Coddington
d9f2950006 Revert "NFS: nfs_rename() handle -ERESTARTSYS dentry left behind"
This reverts commit 920b4530fb which could
call d_move() without holding the directory's i_mutex, and reverts commit
d4ea7e3c5c "NFS: Fix old dentry rehash after
move", which was a follow-up fix.

Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Fixes: 920b4530fb ("NFS: nfs_rename() handle -ERESTARTSYS dentry left behind")
Cc: stable@vger.kernel.org # v4.10+
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-06-27 21:58:14 -04:00
Trond Myklebust
bd171930e6 NFSv4.1: Fix a race in nfs4_proc_layoutget
If the task calling layoutget is signalled, then it is possible for the
calls to nfs4_sequence_free_slot() and nfs4_layoutget_prepare() to race,
in which case we leak a slot.
The fix is to move the call to nfs4_sequence_free_slot() into the
nfs4_layoutget_release() so that it gets called at task teardown time.

Fixes: 2e80dbe7ac ("NFSv4.1: Close callback races for OPEN, LAYOUTGET...")
Cc: stable@vger.kernel.org # v4.8+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-06-27 21:44:58 -04:00
Trond Myklebust
898fc11bb2 NFS: Trunking detection should handle ERESTARTSYS/EINTR
Currently, it will return EIO in those cases.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-06-27 21:44:58 -04:00
Aleksandar Markovic
ddbfff7429 MIPS: math-emu: Handle zero accumulator case in MADDF and MSUBF separately
If the accumulator value is zero, just return the value of the
previously calculated product.  This brings the logic in the MADDF/MSUBF
implementation closer to the logic in the ADD/SUB case.

Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16512/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-28 02:54:30 +02:00
Keith Busch
ebef736857 nvme/pci: Fix stuck nvme reset
The controller state is set to resetting prior to disabling the
controller, so this patch accounts for that state when deciding if it
needs to freeze the queues. Without this, an 'nvme reset /dev/nvme0'
blocks forever because the queues were never frozen.

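The shape of the check is roughly (a sketch; field names assumed from
the driver):

  /* freeze the I/O queues when disabling a live *or* resetting ctrl */
  if (dev->ctrl.state == NVME_CTRL_LIVE ||
      dev->ctrl.state == NVME_CTRL_RESETTING)
          nvme_start_freeze(&dev->ctrl);
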
Fixes: 82b057caef ("nvme-pci: fix multiple ctrl removal scheduling")
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-27 17:44:05 -06:00
Karl Beldan
25d8b92e0a MIPS: head: Reorder instructions missing a delay slot
In this sequence the 'move' is assumed to sit in the delay slot of the
'beq', but head.S is in reorder mode and the former gets pushed one
'nop' farther by the assembler.

This reordering made booting with a UHI-supplied dtb erratic.

Fixes: 15f37e1588 ("MIPS: store the appended dtb address in a variable")
Signed-off-by: Karl Beldan <karl.beldan+oss@gmail.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16614/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2017-06-27 23:35:21 +02:00
Andrew F. Davis
e20bd60bf6 net: usb: asix88179_178a: Add support for the Belkin B2B128
The Belkin B2B128 is a USB 3.0 hub + Gigabit Ethernet adapter.  The
Ethernet adapter uses the ASIX AX88179 USB 3.0 to Gigabit Ethernet chip
supported by this driver, so add the USB ID for it.

This patch is based on work by Geoffrey Tran <geoffrey.tran@gmail.com>
who has indicated they would like this upstreamed by someone more
familiar with the upstreaming process.

Signed-off-by: Andrew F. Davis <afd@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-27 15:46:07 -04:00
Madalin Bucur
85688d9adf fsl/fman: add dependency on HAS_DMA
A previous commit (5567e98919) inserted a dependency on the DMA API,
which requires HAS_DMA to be added in Kconfig.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-27 15:42:30 -04:00
Vallish Vaidyeshwara
00a0ea33b4 dm thin: do not queue freed thin mapping for next stage processing
process_prepared_discard_passdown_pt1() should cleanup
dm_thin_new_mapping in cases of error.

dm_pool_inc_data_range() can fail trying to get a block reference:

metadata operation 'dm_pool_inc_data_range' failed: error = -61

When dm_pool_inc_data_range() fails, dm thin aborts the current metadata
transaction and marks the pool as PM_READ_ONLY. Memory for the thin
mapping is released as well. However, the current thin mapping will be
queued onto the next stage as part of queue_passdown_pt2() or
passdown_endio(). When this dangling thin mapping memory is processed
and accessed in the next stage, it leads to a device mapper crash.

Code flow without fix:
-> process_prepared_discard_passdown_pt1(m)
   -> dm_thin_remove_range()
   -> discard passdown
      --> passdown_endio(m) queues m onto next stage
   -> dm_pool_inc_data_range() fails, frees memory m
            but does not remove it from next stage queue

-> process_prepared_discard_passdown_pt2(m)
   -> processes freed memory m and crashes

One such stack:

Call Trace:
[<ffffffffa037a46f>] dm_cell_release_no_holder+0x2f/0x70 [dm_bio_prison]
[<ffffffffa039b6dc>] cell_defer_no_holder+0x3c/0x80 [dm_thin_pool]
[<ffffffffa039b88b>] process_prepared_discard_passdown_pt2+0x4b/0x90 [dm_thin_pool]
[<ffffffffa0399611>] process_prepared+0x81/0xa0 [dm_thin_pool]
[<ffffffffa039e735>] do_worker+0xc5/0x820 [dm_thin_pool]
[<ffffffff8152bf54>] ? __schedule+0x244/0x680
[<ffffffff81087e72>] ? pwq_activate_delayed_work+0x42/0xb0
[<ffffffff81089f53>] process_one_work+0x153/0x3f0
[<ffffffff8108a71b>] worker_thread+0x12b/0x4b0
[<ffffffff8108a5f0>] ? rescuer_thread+0x350/0x350
[<ffffffff8108fd6a>] kthread+0xca/0xe0
[<ffffffff8108fca0>] ? kthread_park+0x60/0x60
[<ffffffff81530b45>] ret_from_fork+0x25/0x30

The fix is to first take the block ref count for the discarded block
and then do the passdown discard of this block.  If taking the block ref
count fails, bail out: abort the current metadata transaction, mark the
pool as PM_READ_ONLY and free the current thin mapping memory (existing
error handling code) without queueing this thin mapping onto the next
stage of processing.  If taking the block ref count succeeds, do the
passdown discard of this block; the discard callback passdown_endio()
will queue this thin mapping onto the next stage of processing.

Code flow with fix:
-> process_prepared_discard_passdown_pt1(m)
   -> dm_thin_remove_range()
   -> dm_pool_inc_data_range()
      --> if fails, free memory m and bail out
   -> discard passdown
      --> passdown_endio(m) queues m onto next stage

Cc: stable <stable@vger.kernel.org> # v4.9+
Reviewed-by: Eduardo Valentin <eduval@amazon.com>
Reviewed-by: Cristian Gafton <gafton@amazon.com>
Reviewed-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Vallish Vaidyeshwara <vallish@amazon.com>
Reviewed-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-06-27 15:14:34 -04:00
Eric Dumazet
6f64ec7451 net: prevent sign extension in dev_get_stats()
Similar to the fix provided by Dominik Heidler in commit
9b3dc0a17d ("l2tp: cast l2tp traffic counter to unsigned"),
we need to take care of 32-bit kernels in dev_get_stats().

When using atomic_long_read(), we add a 'long' to a u64 and
might misinterpret the high-order bit, unless we cast to unsigned.

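A minimal userspace sketch of the problem, with int32_t standing in for
the 32-bit 'long' returned by atomic_long_read() on such kernels:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t stats = 0;
          int32_t counter = (int32_t)0x80000000; /* high-order bit set */

          stats += counter;           /* sign-extended: 0xffffffff80000000 */
          printf("bad:  %llx\n", (unsigned long long)stats);

          stats = 0;
          stats += (uint32_t)counter; /* cast first: adds plain 0x80000000 */
          printf("good: %llx\n", (unsigned long long)stats);
          return 0;
  }
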
Fixes: caf586e5f2 ("net: add a core netdev->rx_dropped counter")
Fixes: 015f0688f5 ("net: net: add a core netdev->tx_dropped counter")
Fixes: 6e7333d315 ("net: add rx_nohandler stat counter")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-27 14:45:12 -04:00
Linus Torvalds
3c2bfbaadf Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "Three more fixes:

   - Fix the previous fix merged in the last pull for the Thumb2
     decompressor.

   - A fix from Vladimir to correctly identify the V7M cache type.

   - The optimised 3G vmsplit case does not work with LPAE, so don't
     allow this to be selected for LPAE configurations"

* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8682/1: V7M: Set cacheid iff DminLine or IminLine is nonzero
  ARM: 8681/1: make VMSPLIT_3G_OPT depends on !ARM_LPAE
  ARM: 8680/1: boot/compressed: fix inappropriate Thumb2 mnemonic for __nop
2017-06-27 08:56:52 -07:00
Ingo Molnar
e3c2c4fb52 Merge tag 'perf-urgent-for-mingo-4.12-20170626' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/urgent fix from Arnaldo Carvalho de Melo:

 - Fix segfault for kernel.kptr_restrict=2 (Jiri Olsa)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-06-27 09:17:02 +02:00
Linus Torvalds
da8b14e45c Merge tag 'for-linus' of git://linux-c6x.org/git/projects/linux-c6x-upstreaming
Pull c6x fixlet from Mark Salter:
 "Update maintainer email"

* tag 'for-linus' of git://linux-c6x.org/git/projects/linux-c6x-upstreaming:
  MAINTAINERS: update email address for C6x maintainer
2017-06-26 12:25:59 -07:00
Linus Torvalds
9d646c97e1 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 bugfix from Martin Schwidefsky:
 "One last s390 patch for 4.12

  Revert the re-IPL semantics back to the v4.7 state. It turned out that
  the memory layout may change due to memory hotplug if load-normal is
  used"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/ipl: revert Load Normal semantics for LPAR CCW-type re-IPL
2017-06-26 11:58:21 -07:00
Jiri Olsa
3f938ee2f6 perf machine: Fix segfault for kernel.kptr_restrict=2
Michael reported the segfault when kernel.kptr_restrict=2 is set.

  $ perf record ls
  ...
  perf: Segmentation fault
  Obtained 16 stack frames.
  ./perf(dump_stack+0x2d) [0x5068df]
  ./perf(sighandler_dump_stack+0x2d) [0x5069bf]
  ./perf() [0x43e47b]
  /lib64/libc.so.6(+0x3594f) [0x7f762004794f]
  /lib64/libc.so.6(strlen+0x26) [0x7f762009ef86]
  /lib64/libc.so.6(__strdup+0xd) [0x7f762009ecbd]
  ./perf(maps__set_kallsyms_ref_reloc_sym+0x4d) [0x51590f]
  ./perf(machine__create_kernel_maps+0x136) [0x50a7de]
  ./perf(perf_session__create_kernel_maps+0x2c) [0x510a81]
  ./perf(perf_session__new+0x13d) [0x510e23]
  ./perf() [0x43fd61]
  ./perf(cmd_record+0x704) [0x441823]
  ./perf() [0x4bc1a0]
  ./perf() [0x4bc40d]
  ./perf() [0x4bc55f]
  ./perf(main+0x2d5) [0x4bc939]
  Segmentation fault (core dumped)

The reason is that with kernel.kptr_restrict=2, we don't get
the symbol from machine__get_running_kernel_start, which we
want to use in maps__set_kallsyms_ref_reloc_sym and we crash.

Check the symbol name value before calling
maps__set_kallsyms_ref_reloc_sym() and succeed without ref_reloc_sym
being set. It's safe because we check its existence before we use it.

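The guard amounts to something like this (a sketch; exact variable
names assumed):

  /* with kernel.kptr_restrict=2 the ref_reloc_sym name comes back NULL */
  if (name &&
      maps__set_kallsyms_ref_reloc_sym(machine->vmlinux_maps, name, addr))
          return -1;
  /* if name is NULL, continue without ref_reloc_sym being set */
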
Reported-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170626095153.553-1-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-26 11:52:37 -03:00
Michael Ellerman
d6bd8194e2 powerpc/32: Avoid miscompilation w/GCC 4.6.3 - don't inline copy_to/from_user()
Larry Finger reported that his Powerbook G4 was no longer booting with v4.12-rc,
userspace was up but giving weird errors such as:

  udevd[64]: starting version 175
  udevd[64]: Unable to receive ctrl message: Bad address.
  modprobe: chdir(4.12-rc1): No such file or directory

He bisected the problem to commit 3448890c32 ("powerpc: get rid of zeroing,
switch to RAW_COPY_USER").

Al identified that the problem is actually a miscompilation by GCC 4.6.3, which
is exposed by the above commit.

Al also pointed out that inlining copy_to/from_user() is probably of little or
no benefit, which is correct. Using Anton's copy_to_user benchmark, with a
pathological single byte copy, we see a small increase in performance
by *removing* inlining:

  Before (inlined):
  # time ./copy_to_user -w -l 1 -i 10000000	( x 3 )
  real	0m22.063s
  real	0m22.059s
  real	0m22.076s

  After:
  # time ./copy_to_user -w -l 1 -i 10000000	( x 3 )
  real	0m21.325s
  real	0m21.299s
  real	0m21.364s

So as a small performance improvement and to avoid the miscompilation, drop
inlining copy_to/from_user() on 32-bit.

Fixes: 3448890c32 ("powerpc: get rid of zeroing, switch to RAW_COPY_USER")
Reported-by: Larry Finger <Larry.Finger@lwfinger.net>
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-06-26 23:25:08 +10:00
Deepak Rawat
82fcee526b drm/vmwgfx: Free hash table allocated by cmdbuf managed res mgr
The hash table created during vmw_cmdbuf_res_man_create was never
freed.  This causes a memory leak in context creation.  Add the
corresponding drm_ht_remove in vmw_cmdbuf_res_man_destroy.

Tested for the memory leak by running piglit overnight; kernel memory is
no longer inflated, as it was before.

Cc: <stable@vger.kernel.org>
Signed-off-by: Deepak Rawat <drawat@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
2017-06-26 14:39:08 +02:00
Jérôme Glisse
98fe3633c5 x86/mm/hotplug: Fix BUG_ON() after hot-remove by not freeing PUD
Since commit:

  af2cf278ef ("x86/mm/hotplug: Don't remove PGD entries in remove_pagetable()")

we no longer free PUDs so that we do not have to synchronize
all PGDs on hot-remove/vfree().

But the new 5-level page table patchset reverted that for 4-level
page tables, in the following commit:

  f2a6a70501: ("x86: Convert the rest of the code to support p4d_t")

This patch repairs the damage by disabling free_pud() if we are in the
4-level page table case, thus avoiding the BUG_ON() after hot-remove.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
[ Clarified the changelog and the code comments. ]
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20170624180514.3821-1-jglisse@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-06-26 11:44:19 +02:00
Chris Wilson
611cdf3695 drm/i915: Disable EXEC_OBJECT_ASYNC when doing relocations
If we write a relocation into the buffer, we require our own implicit
synchronisation added after the start of the execbuf, outside of the
user's control. As we may end up clflushing, or doing the patch itself
on the GPU, asynchronously we need to look at the implicit serialisation
on obj->resv and hence need to disable EXEC_OBJECT_ASYNC for this
object.

If the user does trigger a stall for relocations, we make sure the stall
is complete enough so that the batch is not submitted before we complete
those relocations.

Fixes: 77ae995789 ("drm/i915: Enable userspace to opt-out of implicit fencing")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
(cherry picked from commit 071750e550)
[danvet: Resolve conflicts, resolution reviewed by Tvrtko on irc.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2017-06-26 10:43:26 +02:00
Chris Wilson
2c73676267 drm/i915: Hold struct_mutex for per-file stats in debugfs/i915_gem_object
As we walk the obj->vma_list in per_file_stats(), we need to hold
struct_mutex to prevent alteration of that list.

Fixes: 1d2ac403ae ("drm: Protect dev->filelist with its own mutex")
Fixes: c84455b4ba ("drm/i915: Move debug only per-request pid tracking from request to ctx")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101460
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170617115744.4452-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
(cherry picked from commit 0caf81b5c5)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2017-06-26 09:53:47 +02:00
Chris Wilson
d8462d0ad3 drm/i915: Retire the VMA's fence tracker before unbinding
Since we may track unfenced access (GPU access to the vma that
explicitly requires no fence), vma->last_fence may be set without any
attached fence (vma->fence) and so will not be flushed when we call
i915_vma_put_fence(). Since we stopped doing a full retire of the
activity trackers for unbind, we need to explicitly retire each tracker.

Fixes: b0decaf75b ("drm/i915: Track active vma requests")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170620124321.1108-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
(cherry picked from commit 760a898d80)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2017-06-26 09:53:39 +02:00
WANG Cong
d747a7a51b tcp: reset sk_rx_dst in tcp_disconnect()
We have to reset the sk->sk_rx_dst when we disconnect a TCP
connection, because otherwise when we re-connect it this
dst reference is simply overridden in tcp_finish_connect().

This fixes a dst leak which leads to a loopback dev refcnt leak.  It is
a long-standing bug; Kevin reported a very similar (if not the same) bug
before.  Thanks a lot to Andrei for providing such a reliable
reproducer, which greatly narrowed down the problem.

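The fix boils down to the following in tcp_disconnect() (a sketch of
the idea, not the exact hunk):

  dst_release(sk->sk_rx_dst); /* drop the cached input route */
  sk->sk_rx_dst = NULL;       /* so a re-connect installs a fresh one */
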
Fixes: 41063e9dd1 ("ipv4: Early TCP socket demux.")
Reported-by: Andrei Vagin <avagin@gmail.com>
Reported-by: Kevin Xu <kaiwen.xu@hulu.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-25 12:23:07 -04:00
Wei Wang
85cb73ff9b net: ipv6: reset daddr and dport in sk if connect() fails
In __ip6_datagram_connect(), reset sk->sk_v6_daddr and inet->dport if
an error occurs.
In udp_v6_early_demux(), check sk_state to make sure the socket is in
the TCP_ESTABLISHED state.
Together, these make sure an unconnected UDP socket won't be considered
a valid candidate for early demux.

v3: add TCP_ESTABLISHED state check in udp_v6_early_demux()
v2: fix compilation error

Fixes: 5425077d73 ("net: ipv6: Add early demux handler for UDP unicast")
Signed-off-by: Wei Wang <weiwan@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-25 11:46:56 -04:00
Mintz, Yuval
d0c32a1623 bnx2x: Don't log mc removal needlessly
When the mc configuration changes, bnx2x_config_mcast() can return 0
for success, negative for failure and positive for a benign reason
preventing its immediate work, e.g., when the command awaits the
completion of a previously sent command.

When removing all configured macs on a 578xx adapter, a positive return
value would be erroneously logged by the driver as an error.

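In other words, only negative returns should be logged (a sketch;
variable names assumed):

  rc = bnx2x_config_mcast(bp, &rparam, BNX2X_MCAST_CMD_DEL);
  if (rc < 0) /* rc > 0 means the command is merely deferred */
          BNX2X_ERR("Failed to clear multicast configuration %d\n", rc);
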
Fixes: c7b7b483cc ("bnx2x: Don't flush multicast MACs")
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-25 11:36:56 -04:00
David S. Miller
92cc8a5105 Merge branch 'bnxt_en-fixes'
Michael Chan says:

====================
bnxt_en: Error handling and netpoll fixes.

Add missing error handling and fix netpoll handling.  The current code
handles both RX and TX events in netpoll mode, causing lots of warnings
and errors in the RX code path.  The fix is to only handle TX events in
netpoll mode.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 14:48:28 -04:00
Michael Chan
2270bc5da3 bnxt_en: Fix netpoll handling.
To handle netpoll properly, the driver must only handle TX packets
during NAPI.  Handling RX events causes warnings and errors in
netpoll mode. The ndo_poll_controller() method should call
napi_schedule() directly so that a NAPI weight of zero will be used
during netpoll mode.

The bnxt_en driver supports 2 ring modes: combined, and separate rx/tx.
In separate rx/tx mode, the ndo_poll_controller() method will only
process the tx rings.  In combined mode, the rx and tx completion
entries are mixed in the completion ring and we need to drop the rx
entries and recycle the rx buffers.

Add a function bnxt_force_rx_discard() to handle this in netpoll mode
when we see rx entries in combined ring mode.

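A sketch of the resulting poll controller (structure and field names
assumed from the driver):

  static void bnxt_poll_controller(struct net_device *dev)
  {
          struct bnxt *bp = netdev_priv(dev);
          int i;

          /* only TX rings are polled; RX stays untouched in netpoll */
          for (i = 0; i < bp->tx_nr_rings; i++) {
                  struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];

                  napi_schedule(&txr->bnapi->napi);
          }
  }
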
Reported-by: Calvin Owens <calvinowens@fb.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 14:48:27 -04:00
Michael Chan
69c149e2e3 bnxt_en: Add missing logic to handle TPA end error conditions.
When we get a TPA_END completion to handle a completed LRO packet, it
is possible that hardware would indicate errors.  The current code is
not checking for the error condition.  Define the proper error bits and
the macro to check for this error and abort properly.

Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 14:48:27 -04:00
Richard Cochran
db9d8b29d1 net: dp83640: Avoid NULL pointer dereference.
The function, skb_complete_tx_timestamp(), used to allow passing in a
NULL pointer for the time stamps, but that was changed in commit
62bccb8cdb ("net-timestamp: Make the
clone operation stand-alone from phy timestamping"), and the existing
call sites, all of which are in the dp83640 driver, were fixed up.

Even though the kernel-doc was subsequently updated in commit
7a76a021cd ("net-timestamp: Update
skb_complete_tx_timestamp comment"), still a bug fix from Manfred
Rudigier came into the driver using the old semantics.  Probably
Manfred derived that patch from an older kernel version.

This fix should be applied to the stable trees as well.

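The semantic difference, as a sketch:

  /* old, pre-62bccb8cdb semantics -- no longer allowed: */
  skb_complete_tx_timestamp(skb, NULL);

  /* current semantics -- always pass a real timestamp struct: */
  struct skb_shared_hwtstamps shhwtstamps;

  memset(&shhwtstamps, 0, sizeof(shhwtstamps));
  shhwtstamps.hwtstamp = ns_to_ktime(ns); /* 'ns' computed elsewhere */
  skb_complete_tx_timestamp(skb, &shhwtstamps);
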
Fixes: 81e8f2e930 ("net: dp83640: Fix tx timestamp overflow handling.")
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 14:38:16 -04:00
David S. Miller
43b786c676 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec
Steffen Klassert says:

====================
pull request (net): ipsec 2017-06-23

1) Fix xfrm garbage collecting when unregistering a netdevice.
   From Hangbin Liu.

2) Fix NULL pointer dereference when exiting a network namespace.
   From Hangbin Liu.

3) Fix some error codes in pfkey to prevent a NULL pointer dereference.
   From Dan Carpenter.

4) Fix NULL pointer dereference on allocation failure in pfkey.
   From Dan Carpenter.

5) Adjust IPv6 payload_len to include extension headers. Otherwise
   we corrupt the packets when doing ESP GRO on transport mode.
   From Yossi Kuperman.

6) Set nhoff to the proper offset of the IPv6 nexthdr when doing ESP GRO.
   From Yossi Kuperman.

Please pull or let me know if there are problems.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 14:11:26 -04:00
WANG Cong
0ccc22f425 sit: use __GFP_NOWARN for user controlled allocation
The memory allocation size is controlled by user-space; if it is too
large, just fail silently and return NULL, especially since there is a
fallback allocation later.

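The pattern is simply (a sketch; variable names assumed):

  /* 'cmax' derives from a user-supplied count: no warning on failure,
   * the caller retries with a one-entry buffer instead */
  kp = kcalloc(cmax, sizeof(*kp), GFP_KERNEL | __GFP_NOWARN);
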
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 14:08:40 -04:00
Michal Kubeček
a5cb659bbc net: account for current skb length when deciding about UFO
Our customer encountered stuck NFS writes for blocks starting at specific
offsets w.r.t. the page boundary, caused by the networking stack sending
packets with a wrong checksum via a UFO-enabled device. The problem can
be reproduced by composing a long UDP datagram from multiple parts using
the MSG_MORE flag:

  sendto(sd, buff, 1000, MSG_MORE, ...);
  sendto(sd, buff, 1000, MSG_MORE, ...);
  sendto(sd, buff, 3000, 0, ...);

Assume this packet is to be routed via a device with MTU 1500 and
NETIF_F_UFO enabled. When second sendto() gets into __ip_append_data(),
this condition is tested (among others) to decide whether to call
ip_ufo_append_data():

  ((length + fragheaderlen) > mtu) || (skb && skb_is_gso(skb))

At the moment, we already have skb with 1028 bytes of data which is not
marked for GSO so that the test is false (fragheaderlen is usually 20).
Thus we append second 1000 bytes to this skb without invoking UFO. Third
sendto(), however, has sufficient length to trigger the UFO path so that we
end up with non-UFO skb followed by a UFO one. Later on, udp_send_skb()
uses udp_csum() to calculate the checksum but that assumes all fragments
have correct checksum in skb->csum which is not true for UFO fragments.

When checking against MTU, we need to add skb->len to length of new segment
if we already have a partially filled skb and fragheaderlen only if there
isn't one.

In the IPv6 case, skb can only be null if this is the first segment so that
we have to use headersize (length of the first IPv6 header) rather than
fragheaderlen (length of IPv6 header of further fragments) for skb == NULL.

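With the fix, the test becomes roughly (names as in the description
above; a sketch, not the literal hunk):

  /* count what is already queued in skb, if any, instead of the
   * fragment header length */
  if (((length + (skb ? skb->len : fragheaderlen)) > mtu) ||
      (skb && skb_is_gso(skb)))
          /* take the UFO path: ip_ufo_append_data() */;
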
Fixes: e89e9cf539 ("[IPv4/IPv6]: UFO Scatter-gather approach")
Fixes: e4c5e13aa4 ("ipv6: Should use consistent conditional judgement for
	ip6 fragment between __ip6_append_data and ip6_finish_output")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Vlad Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-23 13:29:38 -04:00
Heinz Mauelshagen
c4d097d130 dm raid: fix oops on upgrading to extended superblock format
When a RAID set was created on dm-raid version < 1.9.0 (old RAID
superblock format), all of the new 1.9.0 members of the superblock are
uninitialized (zero) -- including the device sectors member needed to
support shrinking.

All the other accesses to superblock fields new in 1.9.0 were reviewed
and verified to be properly guarded against invalid use.  The 'sectors'
member was the only one used when the superblock version is < 1.9.

Don't access the superblock's >= 1.9.0 'sectors' member unconditionally.
Also add respective comments.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-06-23 12:16:15 -04:00
Christophe Lombard
797625deae cxl: Fixes for Coherent Accelerator Interface Architecture 2.0
A previous set of patches, "cxl: Add support for Coherent Accelerator
Interface Architecture 2.0", introduced new support for CAPI cards.
These patches have been tested in a simulation environment, and quite a
few of them have been tested on real hardware.

This patch brings new fixes after a series of tests carried out on new
equipment:
  - Add POWER9 definition.
  - Re-enable any masked interrupts when the AFU is not activated
    after resetting the AFU.
  - Remove the api cxl_is_psl8/9 which is no longer useful.
  - Do not dump CAPI1 registers.
  - Rewrite cxl_is_page_fault() function.
  - Do not register the slb callback on P9.

Fixes: f24be42aab ("cxl: Add psl9 specific code")
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-06-23 16:26:23 +10:00
Martin Habets
bb53f4d4f5 sfc: Fix MCDI command size for filter operations
The 8000 series adapters use catch-all filters for encapsulated traffic
to support filtering VXLAN, NVGRE and GENEVE traffic.
This new filter functionality requires a longer MCDI command.
This patch increases the size of buffers on stack that were missed, which
fixes a kernel panic from the stack protector.

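The change is of this shape (a sketch; the exact MCDI length macro is
assumed):

  /* size the on-stack MCDI buffer for the extended filter op, so the
   * longer encap-filter commands cannot overrun the stack */
  MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
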
Fixes: 9b41080125 ("sfc: insert catch-all filters for encapsulated traffic")
Signed-off-by: Martin Habets <mhabets@solarflare.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Acked-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 13:41:09 -04:00
Arnd Bergmann
b92b7d3312 netvsc: don't access netdev->num_rx_queues directly
This structure member is hidden behind CONFIG_SYSFS, and we
get a build error when that is disabled:

drivers/net/hyperv/netvsc_drv.c: In function 'netvsc_set_channels':
drivers/net/hyperv/netvsc_drv.c:754:49: error: 'struct net_device' has no member named 'num_rx_queues'; did you mean 'num_tx_queues'?
drivers/net/hyperv/netvsc_drv.c: In function 'netvsc_set_rxfh':
drivers/net/hyperv/netvsc_drv.c:1181:25: error: 'struct net_device' has no member named 'num_rx_queues'; did you mean 'num_tx_queues'?

As the value is only set once to the argument of alloc_netdev_mq(),
we can compare against that constant directly.

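That is (a sketch):

  /* VRSS_CHANNEL_MAX is what was passed to alloc_netdev_mq(), so it
   * can stand in for the sysfs-only net->num_rx_queues */
  if (count > VRSS_CHANNEL_MAX)
          return -EINVAL;
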
Fixes: ff4a441990 ("netvsc: allow get/set of RSS indirection table")
Fixes: 2b01888d1b ("netvsc: allow more flexible setting of number of channels")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 13:27:28 -04:00
WANG Cong
60abc0be96 ipv6: avoid unregistering inet6_dev for loopback
The per netns loopback_dev->ip6_ptr is unregistered and set to
NULL when its mtu is set to a value smaller than IPV6_MIN_MTU. This
means we can set rt->rt6i_idev to NULL after a
rt6_uncached_list_flush_dev() and then crash on another call.

In this case we should just bring its inet6_dev down rather than
unregistering it; at least prior to commit 176c39af29
("netns: fix addrconf_ifdown kernel panic") we always
overrode the case for loopback.

Thanks a lot to Andrey for finding a reliable reproducer.

Fixes: 176c39af29 ("netns: fix addrconf_ifdown kernel panic")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Daniel Lezcano <dlezcano@fr.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: David Ahern <dsahern@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 13:21:44 -04:00
David S. Miller
8c4354ef59 Merge branch 'macvlan-Fix-some-issues-with-changing-mac-addresses'
Vladislav Yasevich says:

====================
macvlan: Fix some issues with changing mac addresses

There are some issues in macvlan wrt to changing it's mac address.
* An error is returned in the specified address is the same as an already
  assigned address.
* In passthru mode, the mac address of the macvlan device doesn't change.
* After changing the mac address of a passthru macvlan and then removing it,
  the mac address of the physical device remains changed.

This patch series attempts to resolve these issues.

V2: Address a small issue in p4 where we save the address from the lowerdev
    (from girish.moodalbail@oracle.com)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:17:43 -04:00
Vlad Yasevich
18c8c54de9 macvlan: Let passthru macvlan correctly restore lower mac address
Passthru macvlans directly change the mac address of the lower-level
device.  That's OK, but after the macvlan is deleted, the lower device
is left with the changed address, and one needs to reboot to bring back
the original HW address.

This scenario is actually quite common with passthru macvtap devices.

This patch attempts to solve this, by storing the mac address
of the lower device in macvlan_port structure and keeping track of
it through the changes.

After this patch, any changes to the lower device mac address done
through the macvlan device will be reverted.  Any changes done directly
to the lower device mac address will be kept.

Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:17:42 -04:00
Vlad Yasevich
43c2d578a0 macvlan: convert port passthru to flags.
Convert the port passthru boolean into flags with accessor functions.

Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:17:42 -04:00
Vlad Yasevich
e696cda7bd macvlan: Fix passthru macvlan mac address inheritance
When a lower device of the passthru macvlan changes its address, the
passthru macvlan is supposed to change its own address as well.
However, that doesn't happen correctly, because the check in
macvlan_addr_busy() catches the fact that the lower-level (port) mac
address is the same as the address we are trying to assign to the
macvlan, and returns an error.  As a result, the address of the
passthru macvlan device is never changed.

The same thing happens when the user attempts to change the mac address
of the passthru macvlan.

The simple solution appears to be to not check against the lower device
in the case of a passthru macvlan device, since the two addresses are
_supposed_ to be the same.
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:17:41 -04:00
Vlad Yasevich
e26f43faa0 macvlan: Do not return error when setting the same mac address
The user currently gets an EBUSY error when attempting to set
the mac address on a macvlan device to the same value.

This should really be a no-op as nothing changes.  Catch
the condition and return early.

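A sketch of the early return (helpers are the generic etherdevice ones):

  static int macvlan_set_mac_address(struct net_device *dev, void *p)
  {
          struct sockaddr *addr = p;

          if (!is_valid_ether_addr(addr->sa_data))
                  return -EADDRNOTAVAIL;

          /* same address: nothing changes, succeed without any work */
          if (ether_addr_equal(dev->dev_addr, addr->sa_data))
                  return 0;

          /* ... the actual address change follows here ... */
          return 0;
  }
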
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:17:41 -04:00
Wei Liu
dfa523ae9f xen-netback: correctly schedule rate-limited queues
Add a flag to indicate if a queue is rate-limited. Test the flag in
NAPI poll handler and avoid rescheduling the queue if true, otherwise
we risk locking up the host. The rescheduling will be done in the
timer callback function.

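The NAPI poll handler then ends roughly like this (field and helper
names assumed):

  if (work_done < budget) {
          napi_complete(napi);
          /* rate-limited queues are rescheduled from the timer
           * callback instead, to avoid locking up the host */
          if (likely(!queue->rate_limited))
                  xenvif_napi_schedule_or_enable_events(queue);
  }
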
Reported-by: Jean-Louis Dupond <jean-louis@dupond.be>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Tested-by: Jean-Louis Dupond <jean-louis@dupond.be>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:15:42 -04:00
Serhey Popovych
191cdb3822 veth: Be more robust on network device creation when no attributes
There are a number of problems with configuring the peer network device
in the absence of IFLA_VETH_PEER attributes, where the attributes for
the main network device are shared with the peer.

First, it is not feasible to configure both network devices with the
same MAC address, since this makes communication in such a configuration
problematic.

This case can be reproduced with the following sequence:

  # ip link add address 02:11:22:33:44:55 type veth
  # ip li sh
  ...
  26: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc \
  noop state DOWN mode DEFAULT qlen 1000
      link/ether 02:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
  27: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc \
  noop state DOWN mode DEFAULT qlen 1000
      link/ether 02:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff

Second, it is not possible to register both the main and peer network
devices with the same name; that happens when the name for the main
interface is given with IFLA_IFNAME and the same attribute is reused for
the peer.

This case can be reproduced with the following sequence:

  # ip link add dev veth1a type veth
  RTNETLINK answers: File exists

To fix both cases, take the corresponding netlink attributes from
peer_tb when they are valid; otherwise fall back to a name based on the
rtnl ops kind and a random address, as sketched below.

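A sketch of that fallback (attribute tables named as in rtnl code):

  /* peer attributes (tbp) fall back to generated values, never to
   * the main device's address or name */
  if (!tbp[IFLA_ADDRESS])
          eth_hw_addr_random(peer);

  if (ifmp && tbp[IFLA_IFNAME])
          nla_strlcpy(ifname, tbp[IFLA_IFNAME], IFNAMSIZ);
  else
          snprintf(ifname, IFNAMSIZ, DRV_NAME "%%d");
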
Signed-off-by: Serhey Popovych <serhe.popovych@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:15:15 -04:00
Lokesh Vutla
b88ff4f8c9 drivers: net: cpsw-common: Fix reading of mac address for am43 SoCs
The cpsw driver tries to get the macid for am43xx SoCs using the
compatible ti,am4372.  But not all am43x variants use this compatible;
the epos evm, for example, uses ti,am438x.  So use the generic
compatible ti,am43 to get the macid for all am43-based platforms.

Reviewed-by: Dave Gerlach <d-gerlach@ti.com>
Signed-off-by: Lokesh Vutla <lokeshvutla@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:14:47 -04:00
WANG Cong
76da070450 ipv6: only call ip6_route_dev_notify() once for NETDEV_UNREGISTER
In commit 242d3a49a2 ("ipv6: reorder ip6_route_dev_notifier after ipv6_dev_notf")
I assumed NETDEV_REGISTER and NETDEV_UNREGISTER are paired,
unfortunately, as reported by jeffy, netdev_wait_allrefs()
could rebroadcast NETDEV_UNREGISTER event until all refs are
gone.

We have to add an additional check to avoid this corner case.
For netdev_wait_allrefs() dev->reg_state is NETREG_UNREGISTERED,
for dev_change_net_namespace(), dev->reg_state is
NETREG_REGISTERED. So check for dev->reg_state != NETREG_UNREGISTERED.

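The extra check is of this form (a sketch):

  /* netdev_wait_allrefs() rebroadcasts NETDEV_UNREGISTER with
   * reg_state already NETREG_UNREGISTERED; act only on the first one */
  if (event == NETDEV_UNREGISTER &&
      dev->reg_state != NETREG_UNREGISTERED) {
          /* ... release the per-device routes exactly once ... */
  }
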
Fixes: 242d3a49a2 ("ipv6: reorder ip6_route_dev_notifier after ipv6_dev_notf")
Reported-by: jeffy <jeffy.chen@rock-chips.com>
Cc: David Ahern <dsahern@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:06:06 -04:00
Zach Brown
b866203d87 net/phy: micrel: configure intterupts after autoneg workaround
The commit ("net/phy: micrel: Add workaround for bad autoneg") fixes an
autoneg failure case by resetting the hardware. This turns off
intterupts. Things will work themselves out if the phy polls, as it will
figure out it's state during a poll. However if the phy uses only
intterupts, the phy will stall, since interrupts are off. This patch
fixes the issue by calling config_intr after resetting the phy.

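A sketch of the fix, using the generic phylib hooks:

  /* the workaround reset cleared the interrupt enable bits; restore
   * them for interrupt-driven PHYs */
  if (phy_interrupt_is_valid(phydev) && phydev->drv->config_intr)
          phydev->drv->config_intr(phydev);
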
Fixes: d2fd719bcb ("net/phy: micrel: Add workaround for bad autoneg ")
Signed-off-by: Zach Brown <zach.brown@ni.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-22 11:05:16 -04:00
Yossi Kuperman
ca3a1b8566 esp6_offload: Fix IP6CB(skb)->nhoff for ESP GRO
IP6CB(skb)->nhoff is the offset of the nexthdr field in an IPv6
header, unless there are extension headers present, in which case
nhoff points to the nexthdr field of the last extension header.

In non-GRO code path, nhoff is set by ipv6_rcv before any XFRM code
is executed. Conversely, in GRO code path (when esp6_offload is loaded),
nhoff is not set. The following functions fail to read the correct value
and eventually the packet is dropped:

    xfrm6_transport_finish
    xfrm6_tunnel_input
    xfrm6_rcv_tnl

Set nhoff to the proper offset of nexthdr in esp6_gro_receive.

Fixes: 7785bba299 ("esp: Add a software GRO codepath")
Signed-off-by: Yossi Kuperman <yossiku@mellanox.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2017-06-22 10:49:14 +02:00
Yossi Kuperman
7c88e21aef xfrm6: Fix IPv6 payload_len in xfrm6_transport_finish
IPv6 payload length indicates the size of the payload, including any
extension headers.

In xfrm6_transport_finish, ipv6_hdr(skb)->payload_len is set to the
payload size only, regardless of the presence of any extension headers.
After ESP GRO transport mode decapsulation, ipv6_rcv trims the packet
according to the wrong payload_len, thus corrupting the packet.

Set payload_len to account for extension headers as well.

Fixes: 7785bba299 ("esp: Add a software GRO codepath")
Signed-off-by: Yossi Kuperman <yossiku@mellanox.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2017-06-22 10:49:14 +02:00
Aurelien Jacquiot
91ebcd1b97 MAINTAINERS: update email address for C6x maintainer
Aurelien has moved.

Signed-off-by: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
Signed-off-by: Mark Salter <msalter@redhat.com>
2017-06-15 17:04:15 -04:00
Shaohua Li
7304e8f28b iommu/vt-d: Correctly disable Intel IOMMU force on
I made a mistake in commit bfd20f1. We should skip the force on with the
option enabled instead of vice versa. Not sure why this passed our
performance test, sorry.

Fixes: bfd20f1cc8 ('x86, iommu/vt-d: Add an option to disable Intel IOMMU force on')
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-06-15 16:41:10 +02:00
Heiko Carstens
4130b28f56 s390/ipl: revert Load Normal semantics for LPAR CCW-type re-IPL
This reverts the two commits

7afbeb6df2 ("s390/ipl: always use load normal for CCW-type re-IPL")
0f7451ff3a ("s390/ipl: use load normal for LPAR re-ipl")

The two commits did not take into account that the behavior of standby
memory changes fundamentally if the re-IPL method is changed from
Load Clear to Load Normal.

In case of the old re-IPL clear method all memory that was initially
in standby state will be put into standby state again within the
re-IPL process. Or in other words: memory that was brought online
before a re-IPL will be offline again after a reboot.

Given that we use different re-IPL methods depending on the hypervisor
and CCW-type vs SCSI re-IPL it is not easy to tell in advance when and
why memory will stay online or will be offline after a re-IPL.
This does also have other side effects, since memory that is online
from the beginning will be in ZONE_NORMAL by default vs ZONE_MOVABLE
for memory that is offline.

Therefore, before the change, a user could online and offline memory
easily since standby memory was always in ZONE_NORMAL.  After the
change, and a re-IPL, this depended on which memory parts were online
before the re-IPL.

From a usability point of view the current behavior is more than
suboptimal. Therefore revert these changes until we have a better
solution and get back to a consistent behavior. The bad thing about
this is that the time required for a re-IPL will be significantly
increased for configurations with several 100GB or 1TB of memory.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-06-14 15:35:31 +02:00
Dan Carpenter
e747f64336 xfrm: NULL dereference on allocation failure
The default error code in pfkey_msg2xfrm_state() is -ENOBUFS.  We
added a new call to security_xfrm_state_alloc() which sets "err" to
zero, so there are several places where we can return ERR_PTR(0) if
kmalloc() fails.  The caller is expecting error pointers, so it leads
to a NULL dereference.

Fixes: df71837d50 ("[LSM-IPSec]: Security association restriction.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2017-06-14 12:40:49 +02:00
Dan Carpenter
1e3d0c2c70 xfrm: Oops on error in pfkey_msg2xfrm_state()
There are some missing error codes here, so we accidentally return NULL
instead of an error pointer.  It results in a NULL pointer dereference.

Fixes: df71837d50 ("[LSM-IPSec]: Security association restriction.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2017-06-14 12:40:49 +02:00
Vladimir Murzin
d360a687d9 ARM: 8682/1: V7M: Set cacheid iff DminLine or IminLine is nonzero
Cache support is an optional feature in M-class cores, thus DminLine or
IminLine of the Cache Type Register is zero if caches are not
implemented, but we check the whole CTR, which has other features
encoded there.  Let's be more precise and check DminLine and IminLine of
the CTR before we set cacheid.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-06-12 15:47:29 +01:00
Yisheng Xie
bbeedfda8e ARM: 8681/1: make VMSPLIT_3G_OPT depends on !ARM_LPAE
When both CONFIG_ARM_LPAE=y and CONFIG_VMSPLIT_3G_OPT=y are enabled,
which means using PAGE_OFFSET=0xB0000000 with ARM_LPAE, the kernel fails
to boot and stops after being uncompressed:

   Starting kernel ...

   Uart base = 0x20001000
   watchdog reg = 0x20013000
   dtb addr = 0x80840308
   Uncompressing Linux... done, booting the kernel.

ARM_LPAE only supports the 3:1, 2:2 and 1:3 splits of TTBR1, as
mentioned in:
   http://elinux.org/images/6/6a/Elce11_marinas.pdf - p16

So make VMSPLIT_3G_OPT depend on !ARM_LPAE to avoid triggering this bug.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-06-12 15:47:28 +01:00
Ard Biesheuvel
60ce285851 ARM: 8680/1: boot/compressed: fix inappropriate Thumb2 mnemonic for __nop
Commit 06a4b6d009 ("ARM: 8677/1: boot/compressed: fix decompressor
header layout for v7-M") fixed an issue in the layout of the header
of the compressed kernel image that was caused by the assembler
emitting narrow opcodes for 'mov r0, r0', and for this reason, the
mnemonic was updated to use the W() macro, which will append the .w
suffix (which forces a wide encoding) if required, i.e., when building
the kernel in Thumb2 mode.

However, this failed to take into account that on Thumb2 kernels built
for CPUs that are also ARM capable, the entry point is entered in ARM
mode, and so the instructions emitted here will be ARM instructions
that only exist in a wide encoding to begin with, which is why the
assembler rejects the .w suffix here and aborts the build with the
following message:

  head.S: Assembler messages:
  head.S:132: Error: width suffixes are invalid in ARM mode -- `mov.w r0,r0'

So replace the W(mov) with separate ARM and Thumb2 instructions, where
the latter will only be used for THUMB2_ONLY builds.

Fixes: 06a4b6d009 ("ARM: 8677/1: boot/compressed: fix decompressor ...")
Reported-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-06-12 15:47:27 +01:00
Hangbin Liu
138437f591 xfrm: move xfrm_garbage_collect out of xfrm_policy_flush
Currently we force garbage collection if any policy is removed in
xfrm_policy_flush(). But during xfrm_net_exit() we call flow_cache_fini()
first and set fc->percpu to NULL. Then, after we call xfrm_policy_fini()
-> xfrm_policy_flush() -> flow_cache_flush(), we get a NULL pointer
dereference when checking percpu_empty. The code path looks like:

flow_cache_fini()
  - fc->percpu = NULL
xfrm_policy_fini()
  - xfrm_policy_flush()
    - xfrm_garbage_collect()
      - flow_cache_flush()
        - flow_cache_percpu_empty()
	  - fcp = per_cpu_ptr(fc->percpu, cpu)

To reproduce, just add ipsec in netns and then remove the netns.

v2:
As Xin Long suggested, since only two other places need to call it. move
xfrm_garbage_collect() outside xfrm_policy_flush().

v3:
Fix subject mismatch after v2 fix.

Fixes: 35db069121 ("xfrm: do the garbage collection after flushing policy")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2017-06-12 11:51:21 +02:00
Hangbin Liu
b81f884a54 xfrm: fix xfrm_dev_event() missing when compile without CONFIG_XFRM_OFFLOAD
In commit d77e38e612 ("xfrm: Add an IPsec hardware offloading API") we
made xfrm_device.o compiled only when the option CONFIG_XFRM_OFFLOAD is
enabled.  But this leaves xfrm_dev_event() missing if we only enable the
default XFRM options.

Then, if we set down and unregister an interface with IPsec on it, there
will be no xfrm_garbage_collect(), which holds the dev usage count and
produces errors like:

unregister_netdevice: waiting for <dev> to become free. Usage count = 4

Fixes: d77e38e612 ("xfrm: Add an IPsec hardware offloading API")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2017-06-07 08:16:27 +02:00
Benjamin Coddington
501e7a4689 NFSv4.2: Don't send mode again in post-EXCLUSIVE4_1 SETATTR with umask
Now that we have umask support, we shouldn't re-send the mode in a SETATTR
following an exclusive CREATE, or we risk having the same problem fixed in
commit 5334c5bdac ("NFS: Send attributes in OPEN request for
NFS4_CREATE_EXCLUSIVE4_1"), which is that files with S_ISGID will have that
bit stripped away.

Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Fixes: dff25ddb48 ("nfs: add support for the umask attribute")
Cc: stable@vger.kernel.org # v4.10+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-06-05 12:23:15 -04:00
144 changed files with 638 additions and 614 deletions

@@ -2964,7 +2964,7 @@ F: sound/pci/oxygen/
 C6X ARCHITECTURE
 M: Mark Salter <msalter@redhat.com>
-M: Aurelien Jacquiot <a-jacquiot@ti.com>
+M: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
 L: linux-c6x-dev@linux-c6x.org
 W: http://www.linux-c6x.org/wiki/index.php/Main_Page
 S: Maintained

@@ -1,7 +1,7 @@
 VERSION = 4
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION =
 NAME = Fearless Coyote
 
 # *DOCUMENTATION*

@@ -86,8 +86,6 @@ struct task_struct;
 #define TSK_K_BLINK(tsk)	TSK_K_REG(tsk, 4)
 #define TSK_K_FP(tsk)		TSK_K_REG(tsk, 0)
 
-#define thread_saved_pc(tsk)	TSK_K_BLINK(tsk)
-
 extern void start_thread(struct pt_regs * regs, unsigned long pc,
 			 unsigned long usp);

@@ -1416,6 +1416,7 @@ choice
 	config VMSPLIT_3G
 		bool "3G/1G user/kernel split"
 	config VMSPLIT_3G_OPT
+		depends on !ARM_LPAE
 		bool "3G/1G user/kernel split (for full 1G low memory)"
 	config VMSPLIT_2G
 		bool "2G/2G user/kernel split"

@@ -17,7 +17,8 @@
 		@ there.
 		.inst	'M' | ('Z' << 8) | (0x1310 << 16)	@ tstne r0, #0x4d000
 #else
-		W(mov)	r0, r0
+		AR_CLASS( mov	r0, r0 )
+		M_CLASS( nop.w )
 #endif
 		.endm

@@ -315,7 +315,7 @@ static void __init cacheid_init(void)
 	if (arch >= CPU_ARCH_ARMv6) {
 		unsigned int cachetype = read_cpuid_cachetype();
 
-		if ((arch == CPU_ARCH_ARMv7M) && !cachetype) {
+		if ((arch == CPU_ARCH_ARMv7M) && !(cachetype & 0xf000f)) {
 			cacheid = 0;
 		} else if ((cachetype & (7 << 29)) == 4 << 29) {
 			/* ARMv7 register format */

@@ -1218,15 +1218,15 @@ void __init adjust_lowmem_bounds(void)
 	high_memory = __va(arm_lowmem_limit - 1) + 1;
 
+	if (!memblock_limit)
+		memblock_limit = arm_lowmem_limit;
+
 	/*
 	 * Round the memblock limit down to a pmd size. This
 	 * helps to ensure that we will allocate memory from the
 	 * last full pmd, which should be mapped.
 	 */
-	if (memblock_limit)
-		memblock_limit = round_down(memblock_limit, PMD_SIZE);
-	if (!memblock_limit)
-		memblock_limit = arm_lowmem_limit;
+	memblock_limit = round_down(memblock_limit, PMD_SIZE);
 
 	if (!IS_ENABLED(CONFIG_HIGHMEM) || cache_is_vipt_aliasing()) {
 		if (memblock_end_of_DRAM() > arm_lowmem_limit) {

@@ -75,11 +75,6 @@ static inline void release_thread(struct task_struct *dead_task)
 {
 }
 
-/*
- * Return saved PC of a blocked thread.
- */
-#define thread_saved_pc(tsk)	(tsk->thread.pc)
-
 unsigned long get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk) \

@@ -95,11 +95,6 @@ static inline void release_thread(struct task_struct *dead_task)
 #define copy_segments(tsk, mm)		do { } while (0)
 #define release_segments(mm)		do { } while (0)
 
-/*
- * saved PC of a blocked thread.
- */
-#define thread_saved_pc(tsk)	(task_pt_regs(tsk)->pc)
-
 /*
  * saved kernel SP and DP of a blocked thread.
  */

@@ -69,14 +69,6 @@ void hard_reset_now (void)
 	while(1) /* waiting for RETRIBUTION! */ ;
 }
 
-/*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *t)
-{
-	return task_pt_regs(t)->irp;
-}
-
 /* setup the child's kernel stack with a pt_regs and switch_stack on it.
  * it will be un-nested during _resume and _ret_from_sys_call when the
  * new thread is scheduled.

@@ -84,14 +84,6 @@ hard_reset_now(void)
 		; /* Wait for reset. */
 }
 
-/*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *t)
-{
-	return task_pt_regs(t)->erp;
-}
-
 /*
  * Setup the child's kernel stack with a pt_regs and call switch_stack() on it.
  * It will be unnested during _resume and _ret_from_sys_call when the new thread

View File

@@ -52,8 +52,6 @@ unsigned long get_wchan(struct task_struct *p);
#define KSTK_ESP(tsk) ((tsk) == current ? rdusp() : (tsk)->thread.usp)
extern unsigned long thread_saved_pc(struct task_struct *tsk);
/* Free all resources held by a thread. */
static inline void release_thread(struct task_struct *dead_task)
{

View File

@@ -96,11 +96,6 @@ extern asmlinkage void *restore_user_regs(const struct user_context *target, ...
#define release_segments(mm) do { } while (0)
#define forget_segments() do { } while (0)
/*
* Return saved PC of a blocked thread.
*/
extern unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) ((tsk)->thread.frame0->pc)

View File

@@ -198,15 +198,6 @@ unsigned long get_wchan(struct task_struct *p)
return 0;
}
unsigned long thread_saved_pc(struct task_struct *tsk)
{
/* Check whether the thread is blocked in resume() */
if (in_sched_functions(tsk->thread.pc))
return ((unsigned long *)tsk->thread.fp)[2];
else
return tsk->thread.pc;
}
int elf_check_arch(const struct elf32_hdr *hdr)
{
unsigned long hsr0 = __get_HSR(0);

View File

@@ -110,10 +110,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/*
* Return saved PC of a blocked thread.
*/
unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) \

View File

@@ -129,11 +129,6 @@ int copy_thread(unsigned long clone_flags,
return 0;
}
unsigned long thread_saved_pc(struct task_struct *tsk)
{
return ((struct pt_regs *)tsk->thread.esp0)->pc;
}
unsigned long get_wchan(struct task_struct *p)
{
unsigned long fp, pc;

View File

@@ -33,9 +33,6 @@
/* task_struct, defined elsewhere, is the "process descriptor" */
struct task_struct;
/* this is defined in arch/process.c */
extern unsigned long thread_saved_pc(struct task_struct *tsk);
extern void start_thread(struct pt_regs *, unsigned long, unsigned long);
/*

View File

@@ -60,14 +60,6 @@ void arch_cpu_idle(void)
local_irq_enable();
}
/*
* Return saved PC of a blocked thread
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
return 0;
}
/*
* Copy architecture-specific thread state
*/

View File

@@ -601,23 +601,6 @@ ia64_set_unat (__u64 *unat, void *spill_addr, unsigned long nat)
*unat = (*unat & ~mask) | (nat << bit);
}
/*
* Return saved PC of a blocked thread.
* Note that the only way T can block is through a call to schedule() -> switch_to().
*/
static inline unsigned long
thread_saved_pc (struct task_struct *t)
{
struct unw_frame_info info;
unsigned long ip;
unw_init_from_blocked_task(&info, t);
if (unw_unwind(&info) < 0)
return 0;
unw_get_ip(&info, &ip);
return ip;
}
/*
* Get the current instruction/program counter value.
*/

View File

@@ -122,8 +122,6 @@ extern void release_thread(struct task_struct *);
extern void copy_segments(struct task_struct *p, struct mm_struct * mm);
extern void release_segments(struct mm_struct * mm);
extern unsigned long thread_saved_pc(struct task_struct *);
/* Copy and release all segment info associated with a VM */
#define copy_segments(p, mm) do { } while (0)
#define release_segments(mm) do { } while (0)

View File

@@ -39,14 +39,6 @@
#include <linux/err.h>
/*
* Return saved PC of a blocked thread.
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
return tsk->thread.lr;
}
void (*pm_power_off)(void) = NULL;
EXPORT_SYMBOL(pm_power_off);

View File

@@ -130,8 +130,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
extern unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) \

View File

@@ -40,20 +40,6 @@
asmlinkage void ret_from_fork(void);
asmlinkage void ret_from_kernel_thread(void);
/*
* Return saved PC from a blocked thread
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
struct switch_stack *sw = (struct switch_stack *)tsk->thread.ksp;
/* Check whether the thread is blocked in resume() */
if (in_sched_functions(sw->retpc))
return ((unsigned long *)sw->a6)[1];
else
return sw->retpc;
}
void arch_cpu_idle(void)
{
#if defined(MACH_ATARI_ONLY)

View File

@@ -69,8 +69,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
extern unsigned long thread_saved_pc(struct task_struct *t);
extern unsigned long get_wchan(struct task_struct *p);
# define KSTK_EIP(tsk) (0)
@@ -121,10 +119,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/* Return saved (kernel) PC of a blocked thread. */
# define thread_saved_pc(tsk) \
((tsk)->thread.regs ? (tsk)->thread.regs->r15 : 0)
unsigned long get_wchan(struct task_struct *p);
/* The size allocated for kernel stacks. This _must_ be a power of two! */

View File

@@ -119,23 +119,6 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
return 0;
}
#ifndef CONFIG_MMU
/*
* Return saved PC of a blocked thread.
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
struct cpu_context *ctx =
&(((struct thread_info *)(tsk->stack))->cpu_context);
/* Check whether the thread is blocked in resume() */
if (in_sched_functions(ctx->r15))
return (unsigned long)ctx->r15;
else
return ctx->r14;
}
#endif
unsigned long get_wchan(struct task_struct *p)
{
/* TBD (used by procfs) */

View File

@@ -11,6 +11,7 @@
#include <asm/asm.h>
#include <asm/asmmacro.h>
#include <asm/compiler.h>
#include <asm/irqflags.h>
#include <asm/regdef.h>
#include <asm/mipsregs.h>
#include <asm/stackframe.h>
@@ -119,6 +120,7 @@ work_pending:
andi t0, a2, _TIF_NEED_RESCHED # a2 is preloaded with TI_FLAGS
beqz t0, work_notifysig
work_resched:
TRACE_IRQS_OFF
jal schedule
local_irq_disable # make sure need_resched and
@@ -155,6 +157,7 @@ syscall_exit_work:
beqz t0, work_pending # trace bit set?
local_irq_enable # could let syscall_trace_leave()
# call schedule() instead
TRACE_IRQS_ON
move a0, sp
jal syscall_trace_leave
b resume_userspace

View File

@@ -106,8 +106,8 @@ NESTED(kernel_entry, 16, sp) # kernel entry point
beq t0, t1, dtb_found
#endif
li t1, -2
beq a0, t1, dtb_found
move t2, a1
beq a0, t1, dtb_found
li t2, 0
dtb_found:

View File

@@ -56,7 +56,6 @@ DECLARE_BITMAP(state_support, CPS_PM_STATE_COUNT);
* state. Actually per-core rather than per-CPU.
*/
static DEFINE_PER_CPU_ALIGNED(u32*, ready_count);
static DEFINE_PER_CPU_ALIGNED(void*, ready_count_alloc);
/* Indicates online CPUs coupled with the current CPU */
static DEFINE_PER_CPU_ALIGNED(cpumask_t, online_coupled);
@@ -642,7 +641,6 @@ static int cps_pm_online_cpu(unsigned int cpu)
{
enum cps_pm_state state;
unsigned core = cpu_data[cpu].core;
unsigned dlinesz = cpu_data[cpu].dcache.linesz;
void *entry_fn, *core_rc;
for (state = CPS_PM_NC_WAIT; state < CPS_PM_STATE_COUNT; state++) {
@@ -662,16 +660,11 @@ static int cps_pm_online_cpu(unsigned int cpu)
}
if (!per_cpu(ready_count, core)) {
core_rc = kmalloc(dlinesz * 2, GFP_KERNEL);
core_rc = kmalloc(sizeof(u32), GFP_KERNEL);
if (!core_rc) {
pr_err("Failed allocate core %u ready_count\n", core);
return -ENOMEM;
}
per_cpu(ready_count_alloc, core) = core_rc;
/* Ensure ready_count is aligned to a cacheline boundary */
core_rc += dlinesz - 1;
core_rc = (void *)((unsigned long)core_rc & ~(dlinesz - 1));
per_cpu(ready_count, core) = core_rc;
}

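The deleted lines above are the classic over-allocate-and-round idiom, together with the ready_count_alloc bookkeeping needed to keep the original pointer for kfree(). The replacement relies on the allocator's own minimum alignment instead. A hedged sketch of the idiom being removed, using malloc()/free() in place of kmalloc()/kfree():

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Old idiom: over-allocate by linesz-1, round the pointer up to the
     * next cache-line boundary, and remember the raw pointer because the
     * aligned one cannot be passed back to free(). */
    static void *alloc_cacheline_aligned(size_t size, size_t linesz,
                                         void **raw)
    {
        uintptr_t p;

        *raw = malloc(size + linesz - 1);
        if (!*raw)
            return NULL;
        p = ((uintptr_t)*raw + linesz - 1) & ~((uintptr_t)linesz - 1);
        return (void *)p;
    }

    int main(void)
    {
        void *raw;
        void *rc = alloc_cacheline_aligned(sizeof(uint32_t), 64, &raw);

        printf("raw=%p aligned=%p\n", raw, rc);
        free(raw);  /* free the original pointer, not the aligned one */
        return 0;
    }
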
View File

@@ -201,6 +201,8 @@ void show_stack(struct task_struct *task, unsigned long *sp)
{
struct pt_regs regs;
mm_segment_t old_fs = get_fs();
regs.cp0_status = KSU_KERNEL;
if (sp) {
regs.regs[29] = (unsigned long)sp;
regs.regs[31] = 0;

View File

@@ -54,7 +54,7 @@ static union ieee754dp _dp_maddf(union ieee754dp z, union ieee754dp x,
return ieee754dp_nanxcpt(z);
case IEEE754_CLASS_DNORM:
DPDNORMZ;
/* QNAN is handled separately below */
/* QNAN and ZERO cases are handled separately below */
}
switch (CLPAIR(xc, yc)) {
@@ -210,6 +210,9 @@ static union ieee754dp _dp_maddf(union ieee754dp z, union ieee754dp x,
}
assert(rm & (DP_HIDDEN_BIT << 3));
if (zc == IEEE754_CLASS_ZERO)
return ieee754dp_format(rs, re, rm);
/* And now the addition */
assert(zm & DP_HIDDEN_BIT);

View File

@@ -54,7 +54,7 @@ static union ieee754sp _sp_maddf(union ieee754sp z, union ieee754sp x,
return ieee754sp_nanxcpt(z);
case IEEE754_CLASS_DNORM:
SPDNORMZ;
/* QNAN is handled separately below */
/* QNAN and ZERO cases are handled separately below */
}
switch (CLPAIR(xc, yc)) {
@@ -203,6 +203,9 @@ static union ieee754sp _sp_maddf(union ieee754sp z, union ieee754sp x,
}
assert(rm & (SP_HIDDEN_BIT << 3));
if (zc == IEEE754_CLASS_ZERO)
return ieee754sp_format(rs, re, rm);
/* And now the addition */
assert(zm & SP_HIDDEN_BIT);

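Both hunks above insert an early return once the product is rounded: a zero accumulator carries no hidden bit, so falling through to the addition path would trip the assert on zm. A toy sketch of that control flow only, using real hardware doubles rather than the soft-float internals:

    #include <stdio.h>

    /* Shape of the fix: once the product x*y is rounded, a zero z means
     * there is nothing to add, so return early instead of entering the
     * addition path (which asserts a hidden bit that zero lacks). */
    static double maddf_sketch(double z, double x, double y)
    {
        double product = x * y;   /* "rounded product" stand-in */

        if (z == 0.0)
            return product;       /* early return added by the patch */
        return z + product;       /* normal accumulate path */
    }

    int main(void)
    {
        printf("%g\n", maddf_sketch(0.0, 1.5, 2.0)); /* 3 */
        printf("%g\n", maddf_sketch(1.0, 1.5, 2.0)); /* 4 */
        return 0;
    }
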
View File

@@ -68,12 +68,25 @@ static inline struct page *dma_addr_to_page(struct device *dev,
* systems and only the R10000 and R12000 are used in such systems, the
* SGI IP28 Indigo² rsp. SGI IP32 aka O2.
*/
static inline int cpu_needs_post_dma_flush(struct device *dev)
static inline bool cpu_needs_post_dma_flush(struct device *dev)
{
return !plat_device_is_coherent(dev) &&
(boot_cpu_type() == CPU_R10000 ||
boot_cpu_type() == CPU_R12000 ||
boot_cpu_type() == CPU_BMIPS5000);
if (plat_device_is_coherent(dev))
return false;
switch (boot_cpu_type()) {
case CPU_R10000:
case CPU_R12000:
case CPU_BMIPS5000:
return true;
default:
/*
* Presence of MAARs suggests that the CPU supports
* speculatively prefetching data, and therefore requires
* the post-DMA flush/invalidate.
*/
return cpu_has_maar;
}
}
static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)

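The rewrite above turns a chained boolean return into a switch with a capability-based default: known speculative CPUs always flush, everything else defers to cpu_has_maar. A compilable sketch of the same shape, where device_is_coherent, cpu_has_maar and boot_cpu are hypothetical stand-ins for the kernel's per-platform hooks:

    #include <stdbool.h>
    #include <stdio.h>

    enum cpu_type { CPU_R10000, CPU_R12000, CPU_BMIPS5000, CPU_OTHER };

    /* Hypothetical stand-ins for the kernel's platform hooks. */
    static bool device_is_coherent;
    static bool cpu_has_maar = true;
    static enum cpu_type boot_cpu = CPU_OTHER;

    static bool needs_post_dma_flush(void)
    {
        if (device_is_coherent)
            return false;        /* coherent devices never need it */

        switch (boot_cpu) {
        case CPU_R10000:
        case CPU_R12000:
        case CPU_BMIPS5000:
            return true;         /* known speculative CPUs */
        default:
            return cpu_has_maar; /* MAARs imply speculative prefetch */
        }
    }

    int main(void)
    {
        printf("flush needed: %d\n", needs_post_dma_flush());
        return 0;
    }
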
View File

@@ -132,11 +132,6 @@ static inline void start_thread(struct pt_regs *regs,
/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);
/*
* Return saved PC of a blocked thread.
*/
extern unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
#define task_pt_regs(task) ((task)->thread.uregs)

View File

@@ -39,14 +39,6 @@
#include <asm/gdb-stub.h>
#include "internal.h"
/*
* return saved PC of a blocked thread.
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
return ((unsigned long *) tsk->thread.sp)[3];
}
/*
* power off function, if any
*/

View File

@@ -75,9 +75,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/* Return saved PC of a blocked thread. */
#define thread_saved_pc(tsk) ((tsk)->thread.kregs->ea)
extern unsigned long get_wchan(struct task_struct *p);
#define task_pt_regs(p) \

View File

@@ -84,11 +84,6 @@ void start_thread(struct pt_regs *regs, unsigned long nip, unsigned long sp);
void release_thread(struct task_struct *);
unsigned long get_wchan(struct task_struct *p);
/*
* Return saved PC of a blocked thread. For now, this is the "user" PC
*/
extern unsigned long thread_saved_pc(struct task_struct *t);
#define init_stack (init_thread_union.stack)
#define cpu_relax() barrier()

View File

@@ -110,11 +110,6 @@ void show_regs(struct pt_regs *regs)
show_registers(regs);
}
unsigned long thread_saved_pc(struct task_struct *t)
{
return (unsigned long)user_regs(t->stack)->pc;
}
void release_thread(struct task_struct *dead_task)
{
}

View File

@@ -163,12 +163,7 @@ struct thread_struct {
.flags = 0 \
}
/*
* Return saved PC of a blocked thread. This is used by ps mostly.
*/
struct task_struct;
unsigned long thread_saved_pc(struct task_struct *t);
void show_trace(struct task_struct *task, unsigned long *stack);
/*

View File

@@ -239,11 +239,6 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
return 0;
}
unsigned long thread_saved_pc(struct task_struct *t)
{
return t->thread.regs.kpc;
}
unsigned long
get_wchan(struct task_struct *p)
{

View File

@@ -378,12 +378,6 @@ struct thread_struct {
}
#endif
/*
* Return saved PC of a blocked thread. For now, this is the "user" PC
*/
#define thread_saved_pc(tsk) \
((tsk)->thread.regs? (tsk)->thread.regs->nip: 0)
#define task_pt_regs(tsk) ((struct pt_regs *)(tsk)->thread.regs)
unsigned long get_wchan(struct task_struct *p);

View File

@@ -267,13 +267,7 @@ do { \
extern unsigned long __copy_tofrom_user(void __user *to,
const void __user *from, unsigned long size);
#ifndef __powerpc64__
#define INLINE_COPY_FROM_USER
#define INLINE_COPY_TO_USER
#else /* __powerpc64__ */
#ifdef __powerpc64__
static inline unsigned long
raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
{

View File

@@ -221,11 +221,6 @@ extern void release_thread(struct task_struct *);
/* Free guarded storage control block for current */
void exit_thread_gs(void);
/*
* Return saved PC of a blocked thread.
*/
extern unsigned long thread_saved_pc(struct task_struct *t);
unsigned long get_wchan(struct task_struct *p);
#define task_pt_regs(tsk) ((struct pt_regs *) \
(task_stack_page(tsk) + THREAD_SIZE) - 1)

View File

@@ -564,8 +564,6 @@ static struct kset *ipl_kset;
static void __ipl_run(void *unused)
{
if (MACHINE_IS_LPAR && ipl_info.type == IPL_TYPE_CCW)
diag308(DIAG308_LOAD_NORMAL_DUMP, NULL);
diag308(DIAG308_LOAD_CLEAR, NULL);
if (MACHINE_IS_VM)
__cpcmd("IPL", NULL, 0, NULL);
@@ -1088,10 +1086,7 @@ static void __reipl_run(void *unused)
break;
case REIPL_METHOD_CCW_DIAG:
diag308(DIAG308_SET, reipl_block_ccw);
if (MACHINE_IS_LPAR)
diag308(DIAG308_LOAD_NORMAL_DUMP, NULL);
else
diag308(DIAG308_LOAD_CLEAR, NULL);
diag308(DIAG308_LOAD_CLEAR, NULL);
break;
case REIPL_METHOD_FCP_RW_DIAG:
diag308(DIAG308_SET, reipl_block_fcp);

View File

@@ -41,31 +41,6 @@
asmlinkage void ret_from_fork(void) asm ("ret_from_fork");
/*
* Return saved PC of a blocked thread. used in kernel/sched.
* resume in entry.S does not create a new stack frame, it
* just stores the registers %r6-%r15 to the frame given by
* schedule. We want to return the address of the caller of
* schedule, so we have to walk the backchain one time to
* find the frame schedule() store its return address.
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
struct stack_frame *sf, *low, *high;
if (!tsk || !task_stack_page(tsk))
return 0;
low = task_stack_page(tsk);
high = (struct stack_frame *) task_pt_regs(tsk);
sf = (struct stack_frame *) tsk->thread.ksp;
if (sf <= low || sf > high)
return 0;
sf = (struct stack_frame *) sf->back_chain;
if (sf <= low || sf > high)
return 0;
return sf->gprs[8];
}
extern void kernel_thread_starter(void);
/*

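The removed s390 helper walks one step of the stack back-chain, range-checking each frame pointer against the stack bounds before dereferencing it. A self-contained sketch of that bounded walk, with gprs[8] as the slot holding the saved return address per the deleted code:

    #include <stdio.h>

    struct stack_frame {
        struct stack_frame *back_chain;
        unsigned long gprs[10];
    };

    /* One bounded step along the back-chain: reject any frame pointer
     * outside (low, high] before dereferencing it. */
    static unsigned long saved_pc_sketch(struct stack_frame *sf,
                                         void *low, void *high)
    {
        if ((char *)sf <= (char *)low || (char *)sf > (char *)high)
            return 0;
        sf = sf->back_chain;
        if ((char *)sf <= (char *)low || (char *)sf > (char *)high)
            return 0;
        return sf->gprs[8];   /* saved return address slot */
    }

    int main(void)
    {
        struct stack_frame frames[3] = {
            { 0 },
            { .back_chain = &frames[2] },
            { .gprs = { [8] = 0x1234 } },
        };

        printf("%#lx\n", saved_pc_sketch(&frames[1], frames, frames + 3));
        return 0;
    }
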
View File

@@ -13,7 +13,6 @@ struct task_struct;
*/
extern void (*cpu_wait)(void);
extern unsigned long thread_saved_pc(struct task_struct *tsk);
extern void start_thread(struct pt_regs *regs,
unsigned long pc, unsigned long sp);
extern unsigned long get_wchan(struct task_struct *p);

View File

@@ -101,11 +101,6 @@ int dump_fpu(struct pt_regs *regs, elf_fpregset_t *r)
return 1;
}
unsigned long thread_saved_pc(struct task_struct *tsk)
{
return task_pt_regs(tsk)->cp0_epc;
}
unsigned long get_wchan(struct task_struct *task)
{
if (!task || task == current || task->state == TASK_RUNNING)

View File

@@ -67,9 +67,6 @@ struct thread_struct {
.current_ds = KERNEL_DS, \
}
/* Return saved PC of a blocked thread. */
unsigned long thread_saved_pc(struct task_struct *t);
/* Do necessary setup to start up a newly executed thread. */
static inline void start_thread(struct pt_regs * regs, unsigned long pc,
unsigned long sp)

View File

@@ -89,9 +89,7 @@ struct thread_struct {
#include <linux/types.h>
#include <asm/fpumacro.h>
/* Return saved PC of a blocked thread. */
struct task_struct;
unsigned long thread_saved_pc(struct task_struct *);
/* On Uniprocessor, even in RMO processes see TSO semantics */
#ifdef CONFIG_SMP

View File

@@ -176,14 +176,6 @@ void show_stack(struct task_struct *tsk, unsigned long *_ksp)
printk("\n");
}
/*
* Note: sparc64 has a pretty intricated thread_saved_pc, check it out.
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
return task_thread_info(tsk)->kpc;
}
/*
* Free current thread data structures etc..
*/

View File

@@ -400,25 +400,6 @@ core_initcall(sparc_sysrq_init);
#endif
unsigned long thread_saved_pc(struct task_struct *tsk)
{
struct thread_info *ti = task_thread_info(tsk);
unsigned long ret = 0xdeadbeefUL;
if (ti && ti->ksp) {
unsigned long *sp;
sp = (unsigned long *)(ti->ksp + STACK_BIAS);
if (((unsigned long)sp & (sizeof(long) - 1)) == 0UL &&
sp[14]) {
unsigned long *fp;
fp = (unsigned long *)(sp[14] + STACK_BIAS);
if (((unsigned long)fp & (sizeof(long) - 1)) == 0UL)
ret = fp[15];
}
}
return ret;
}
/* Free current thread data structures etc.. */
void exit_thread(struct task_struct *tsk)
{

View File

@@ -214,13 +214,6 @@ static inline void release_thread(struct task_struct *dead_task)
extern void prepare_exit_to_usermode(struct pt_regs *regs, u32 flags);
/*
* Return saved (kernel) PC of a blocked thread.
* Only used in a printk() in kernel/sched/core.c, so don't work too hard.
*/
#define thread_saved_pc(t) ((t)->thread.pc)
unsigned long get_wchan(struct task_struct *p);
/* Return initial ksp value for given task. */

View File

@@ -58,8 +58,6 @@ static inline void release_thread(struct task_struct *task)
{
}
extern unsigned long thread_saved_pc(struct task_struct *t);
static inline void mm_copy_segments(struct mm_struct *from_mm,
struct mm_struct *new_mm)
{

View File

@@ -56,12 +56,6 @@ union thread_union cpu0_irqstack
__attribute__((__section__(".data..init_irqstack"))) =
{ INIT_THREAD_INFO(init_task) };
unsigned long thread_saved_pc(struct task_struct *task)
{
/* FIXME: Need to look up userspace_pid by cpu */
return os_process_pc(userspace_pid[0]);
}
/* Changed in setup_arch, which is called in early boot */
static char host_info[(__NEW_UTS_LEN + 1) * 5];

View File

@@ -564,9 +564,6 @@ void choose_random_location(unsigned long input,
{
unsigned long random_addr, min_addr;
/* By default, keep output position unchanged. */
*virt_addr = *output;
if (cmdline_find_option_bool("nokaslr")) {
warn("KASLR disabled: 'nokaslr' on cmdline.");
return;

View File

@@ -338,7 +338,7 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
unsigned long output_len)
{
const unsigned long kernel_total_size = VO__end - VO__text;
unsigned long virt_addr = (unsigned long)output;
unsigned long virt_addr = LOAD_PHYSICAL_ADDR;
/* Retain x86 boot parameters pointer passed from startup_32/64. */
boot_params = rmode;
@@ -390,6 +390,8 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
#ifdef CONFIG_X86_64
if (heap > 0x3fffffffffffUL)
error("Destination address too large");
if (virt_addr + max(output_len, kernel_total_size) > KERNEL_IMAGE_SIZE)
error("Destination virtual address is beyond the kernel mapping area");
#else
if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
error("Destination address too large");
@@ -397,7 +399,7 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
#ifndef CONFIG_RELOCATABLE
if ((unsigned long)output != LOAD_PHYSICAL_ADDR)
error("Destination address does not match LOAD_PHYSICAL_ADDR");
if ((unsigned long)output != virt_addr)
if (virt_addr != LOAD_PHYSICAL_ADDR)
error("Destination virtual address changed when not relocatable");
#endif

View File

@@ -81,8 +81,6 @@ static inline void choose_random_location(unsigned long input,
unsigned long output_size,
unsigned long *virt_addr)
{
/* No change from existing output location. */
*virt_addr = *output;
}
#endif

View File

@@ -1170,7 +1170,7 @@ static int uncore_event_cpu_online(unsigned int cpu)
pmu = type->pmus;
for (i = 0; i < type->num_boxes; i++, pmu++) {
box = pmu->boxes[pkg];
if (!box && atomic_inc_return(&box->refcnt) == 1)
if (box && atomic_inc_return(&box->refcnt) == 1)
uncore_box_init(box);
}
}

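The one-character fix above is worth dwelling on: with `!box &&`, the short-circuit fires only when box is NULL, which is exactly when the refcount dereference that follows must not run. A minimal demonstration of the corrected guard:

    #include <stdio.h>

    struct box { int refcnt; };

    /* The buggy guard was "!b && ++b->refcnt == 1": the dereference runs
     * precisely when b is NULL.  Testing "b &&" makes the short-circuit
     * protect the access it is meant to guard. */
    static int init_if_first_user(struct box *b)
    {
        if (b && ++b->refcnt == 1)
            return 1;   /* first user: do one-time init */
        return 0;
    }

    int main(void)
    {
        struct box b = { 0 };
        int skipped = init_if_first_user(NULL);  /* 0, no crash */
        int first   = init_if_first_user(&b);    /* 1 */
        int second  = init_if_first_user(&b);    /* 0 */

        printf("%d %d %d\n", skipped, first, second);
        return 0;
    }
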
View File

@@ -860,8 +860,6 @@ extern unsigned long KSTK_ESP(struct task_struct *task);
#endif /* CONFIG_X86_64 */
extern unsigned long thread_saved_pc(struct task_struct *tsk);
extern void start_thread(struct pt_regs *regs, unsigned long new_ip,
unsigned long new_sp);

View File

@@ -856,11 +856,13 @@ static struct dentry *rdt_mount(struct file_system_type *fs_type,
dentry = kernfs_mount(fs_type, flags, rdt_root,
RDTGROUP_SUPER_MAGIC, NULL);
if (IS_ERR(dentry))
goto out_cdp;
goto out_destroy;
static_branch_enable(&rdt_enable_key);
goto out;
out_destroy:
kernfs_remove(kn_info);
out_cdp:
cdp_disable();
out:

View File

@@ -544,17 +544,6 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
return randomize_page(mm->brk, 0x02000000);
}
/*
* Return saved PC of a blocked thread.
* What is this good for? it will be always the scheduler or ret_from_fork.
*/
unsigned long thread_saved_pc(struct task_struct *tsk)
{
struct inactive_task_frame *frame =
(struct inactive_task_frame *) READ_ONCE(tsk->thread.sp);
return READ_ONCE_NOCHECK(frame->ret_addr);
}
/*
* Called from fs/proc with a reference on @p to find the function
* which called into schedule(). This needs to be done carefully

View File

@@ -514,7 +514,7 @@ int tboot_force_iommu(void)
if (!tboot_enabled())
return 0;
if (!intel_iommu_tboot_noforce)
if (intel_iommu_tboot_noforce)
return 1;
if (no_iommu || swiotlb || dmar_disabled)

View File

@@ -990,7 +990,13 @@ remove_p4d_table(p4d_t *p4d_start, unsigned long addr, unsigned long end,
pud_base = pud_offset(p4d, 0);
remove_pud_table(pud_base, addr, next, direct);
free_pud_table(pud_base, p4d);
/*
* For 4-level page tables we do not want to free PUDs, but in the
* 5-level case we should free them. This code will have to change
* to adapt for boot-time switching between 4 and 5 level page tables.
*/
if (CONFIG_PGTABLE_LEVELS == 5)
free_pud_table(pud_base, p4d);
}
if (direct)

View File

@@ -213,8 +213,6 @@ struct mm_struct;
#define release_segments(mm) do { } while(0)
#define forget_segments() do { } while (0)
#define thread_saved_pc(tsk) (task_pt_regs(tsk)->pc)
extern unsigned long get_wchan(struct task_struct *p);
#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)

View File

@@ -240,20 +240,21 @@ fallback:
return bvl;
}
static void __bio_free(struct bio *bio)
void bio_uninit(struct bio *bio)
{
bio_disassociate_task(bio);
if (bio_integrity(bio))
bio_integrity_free(bio);
}
EXPORT_SYMBOL(bio_uninit);
static void bio_free(struct bio *bio)
{
struct bio_set *bs = bio->bi_pool;
void *p;
__bio_free(bio);
bio_uninit(bio);
if (bs) {
bvec_free(bs->bvec_pool, bio->bi_io_vec, BVEC_POOL_IDX(bio));
@@ -271,6 +272,11 @@ static void bio_free(struct bio *bio)
}
}
/*
* Users of this function have their own bio allocation. Subsequently,
* they must remember to pair any call to bio_init() with bio_uninit()
* when IO has completed, or when the bio is released.
*/
void bio_init(struct bio *bio, struct bio_vec *table,
unsigned short max_vecs)
{
@@ -297,7 +303,7 @@ void bio_reset(struct bio *bio)
{
unsigned long flags = bio->bi_flags & (~0UL << BIO_RESET_BITS);
__bio_free(bio);
bio_uninit(bio);
memset(bio, 0, BIO_RESET_BYTES);
bio->bi_flags = flags;

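The hunk above promotes the private __bio_free() to an exported bio_uninit(), giving callers that embed a bio in their own storage a teardown to pair with bio_init(). A sketch of the pairing contract with placeholder types; bio_init_sketch/bio_uninit_sketch are illustrative names, not the kernel API:

    #include <string.h>

    struct bio_sketch { void *table; unsigned short max_vecs; };

    /* Callers that embed the object in their own storage initialise it
     * directly and must call the matching uninit when I/O completes or
     * the object is released. */
    static void bio_init_sketch(struct bio_sketch *b, void *table,
                                unsigned short max_vecs)
    {
        memset(b, 0, sizeof(*b));
        b->table = table;
        b->max_vecs = max_vecs;
    }

    static void bio_uninit_sketch(struct bio_sketch *b)
    {
        (void)b;   /* drop references/state taken since init */
    }

    int main(void)
    {
        struct bio_sketch on_stack;   /* caller-owned, never kfree()d */

        bio_init_sketch(&on_stack, NULL, 0);
        /* ... submit and complete I/O ... */
        bio_uninit_sketch(&on_stack); /* pairs with the init above */
        return 0;
    }
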
View File

@@ -201,7 +201,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares,
handler = acpi_gpio_irq_handler_evt;
}
if (!handler)
return AE_BAD_PARAMETER;
return AE_OK;
pin = acpi_gpiochip_pin_to_gpio_offset(chip->gpiodev, pin);
if (pin < 0)

View File

@@ -708,7 +708,8 @@ static irqreturn_t lineevent_irq_thread(int irq, void *p)
ge.timestamp = ktime_get_real_ns();
if (le->eflags & GPIOEVENT_REQUEST_BOTH_EDGES) {
if (le->eflags & GPIOEVENT_REQUEST_RISING_EDGE
&& le->eflags & GPIOEVENT_REQUEST_FALLING_EDGE) {
int level = gpiod_get_value_cansleep(le->desc);
if (level)

View File

@@ -106,9 +106,10 @@ struct etnaviv_gem_submit {
struct etnaviv_gpu *gpu;
struct ww_acquire_ctx ticket;
struct dma_fence *fence;
u32 flags;
unsigned int nr_bos;
struct etnaviv_gem_submit_bo bos[0];
u32 flags;
/* No new members here, the previous one is variable-length! */
};
int etnaviv_gem_wait_bo(struct etnaviv_gpu *gpu, struct drm_gem_object *obj,

View File

@@ -172,7 +172,7 @@ static int submit_fence_sync(const struct etnaviv_gem_submit *submit)
for (i = 0; i < submit->nr_bos; i++) {
struct etnaviv_gem_object *etnaviv_obj = submit->bos[i].obj;
bool write = submit->bos[i].flags & ETNA_SUBMIT_BO_WRITE;
bool explicit = !(submit->flags & ETNA_SUBMIT_NO_IMPLICIT);
bool explicit = !!(submit->flags & ETNA_SUBMIT_NO_IMPLICIT);
ret = etnaviv_gpu_fence_sync_obj(etnaviv_obj, context, write,
explicit);

View File

@@ -292,6 +292,8 @@ static int per_file_stats(int id, void *ptr, void *data)
struct file_stats *stats = data;
struct i915_vma *vma;
lockdep_assert_held(&obj->base.dev->struct_mutex);
stats->count++;
stats->total += obj->base.size;
if (!obj->bind_count)
@@ -476,6 +478,8 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
struct drm_i915_gem_request *request;
struct task_struct *task;
mutex_lock(&dev->struct_mutex);
memset(&stats, 0, sizeof(stats));
stats.file_priv = file->driver_priv;
spin_lock(&file->table_lock);
@@ -487,7 +491,6 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
* still alive (e.g. get_pid(current) => fork() => exit()).
* Therefore, we need to protect this ->comm access using RCU.
*/
mutex_lock(&dev->struct_mutex);
request = list_first_entry_or_null(&file_priv->mm.request_list,
struct drm_i915_gem_request,
client_link);
@@ -497,6 +500,7 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
PIDTYPE_PID);
print_file_stats(m, task ? task->comm : "<unknown>", stats);
rcu_read_unlock();
mutex_unlock(&dev->struct_mutex);
}
mutex_unlock(&dev->filelist_mutex);

View File

@@ -546,11 +546,12 @@ repeat:
}
static int
i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
i915_gem_execbuffer_relocate_entry(struct i915_vma *vma,
struct eb_vmas *eb,
struct drm_i915_gem_relocation_entry *reloc,
struct reloc_cache *cache)
{
struct drm_i915_gem_object *obj = vma->obj;
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct drm_gem_object *target_obj;
struct drm_i915_gem_object *target_i915_obj;
@@ -628,6 +629,16 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
return -EINVAL;
}
/*
* If we write into the object, we need to force the synchronisation
* barrier, either with an asynchronous clflush or if we executed the
* patching using the GPU (though that should be serialised by the
* timeline). To be completely sure, and since we are required to
* do relocations we are already stalling, disable the user's opt
* of our synchronisation.
*/
vma->exec_entry->flags &= ~EXEC_OBJECT_ASYNC;
ret = relocate_entry(obj, reloc, cache, target_offset);
if (ret)
return ret;
@@ -678,7 +689,7 @@ i915_gem_execbuffer_relocate_vma(struct i915_vma *vma,
do {
u64 offset = r->presumed_offset;
ret = i915_gem_execbuffer_relocate_entry(vma->obj, eb, r, &cache);
ret = i915_gem_execbuffer_relocate_entry(vma, eb, r, &cache);
if (ret)
goto out;
@@ -726,7 +737,7 @@ i915_gem_execbuffer_relocate_vma_slow(struct i915_vma *vma,
reloc_cache_init(&cache, eb->i915);
for (i = 0; i < entry->relocation_count; i++) {
ret = i915_gem_execbuffer_relocate_entry(vma->obj, eb, &relocs[i], &cache);
ret = i915_gem_execbuffer_relocate_entry(vma, eb, &relocs[i], &cache);
if (ret)
break;
}

View File

@@ -650,6 +650,11 @@ int i915_vma_unbind(struct i915_vma *vma)
break;
}
if (!ret) {
ret = i915_gem_active_retire(&vma->last_fence,
&vma->vm->i915->drm.struct_mutex);
}
__i915_vma_unpin(vma);
if (ret)
return ret;

View File

@@ -321,6 +321,7 @@ void vmw_cmdbuf_res_man_destroy(struct vmw_cmdbuf_res_manager *man)
list_for_each_entry_safe(entry, next, &man->list, head)
vmw_cmdbuf_res_free(man, entry);
drm_ht_remove(&man->resources);
kfree(man);
}

View File

@@ -3879,11 +3879,9 @@ static void irte_ga_prepare(void *entry,
u8 vector, u32 dest_apicid, int devid)
{
struct irte_ga *irte = (struct irte_ga *) entry;
struct iommu_dev_data *dev_data = search_dev_data(devid);
irte->lo.val = 0;
irte->hi.val = 0;
irte->lo.fields_remap.guest_mode = dev_data ? dev_data->use_vapic : 0;
irte->lo.fields_remap.int_type = delivery_mode;
irte->lo.fields_remap.dm = dest_mode;
irte->hi.fields.vector = vector;
@@ -3939,10 +3937,10 @@ static void irte_ga_set_affinity(void *entry, u16 devid, u16 index,
struct irte_ga *irte = (struct irte_ga *) entry;
struct iommu_dev_data *dev_data = search_dev_data(devid);
if (!dev_data || !dev_data->use_vapic) {
if (!dev_data || !dev_data->use_vapic ||
!irte->lo.fields_remap.guest_mode) {
irte->hi.fields.vector = vector;
irte->lo.fields_remap.destination = dest_apicid;
irte->lo.fields_remap.guest_mode = 0;
modify_irte_ga(devid, index, irte, NULL);
}
}

View File

@@ -1927,7 +1927,7 @@ struct dm_raid_superblock {
/********************************************************************
* BELOW FOLLOW V1.9.0 EXTENSIONS TO THE PRISTINE SUPERBLOCK FORMAT!!!
*
* FEATURE_FLAG_SUPPORTS_V190 in the features member indicates that those exist
* FEATURE_FLAG_SUPPORTS_V190 in the compat_features member indicates that those exist
*/
__le32 flags; /* Flags defining array states for reshaping */
@@ -2092,6 +2092,11 @@ static void super_sync(struct mddev *mddev, struct md_rdev *rdev)
sb->layout = cpu_to_le32(mddev->layout);
sb->stripe_sectors = cpu_to_le32(mddev->chunk_sectors);
/********************************************************************
* BELOW FOLLOW V1.9.0 EXTENSIONS TO THE PRISTINE SUPERBLOCK FORMAT!!!
*
* FEATURE_FLAG_SUPPORTS_V190 in the compat_features member indicates that those exist
*/
sb->new_level = cpu_to_le32(mddev->new_level);
sb->new_layout = cpu_to_le32(mddev->new_layout);
sb->new_stripe_sectors = cpu_to_le32(mddev->new_chunk_sectors);
@@ -2438,8 +2443,14 @@ static int super_validate(struct raid_set *rs, struct md_rdev *rdev)
mddev->bitmap_info.default_offset = mddev->bitmap_info.offset;
if (!test_and_clear_bit(FirstUse, &rdev->flags)) {
/* Retrieve device size stored in superblock to be prepared for shrink */
rdev->sectors = le64_to_cpu(sb->sectors);
/*
* Retrieve rdev size stored in superblock to be prepared for shrink.
* Check extended superblock members are present otherwise the size
* will not be set!
*/
if (le32_to_cpu(sb->compat_features) & FEATURE_FLAG_SUPPORTS_V190)
rdev->sectors = le64_to_cpu(sb->sectors);
rdev->recovery_offset = le64_to_cpu(sb->disk_recovery_offset);
if (rdev->recovery_offset == MaxSector)
set_bit(In_sync, &rdev->flags);

View File

@@ -1094,6 +1094,19 @@ static void process_prepared_discard_passdown_pt1(struct dm_thin_new_mapping *m)
return;
}
/*
* Increment the unmapped blocks. This prevents a race between the
* passdown io and reallocation of freed blocks.
*/
r = dm_pool_inc_data_range(pool->pmd, m->data_block, data_end);
if (r) {
metadata_operation_failed(pool, "dm_pool_inc_data_range", r);
bio_io_error(m->bio);
cell_defer_no_holder(tc, m->cell);
mempool_free(m, pool->mapping_pool);
return;
}
discard_parent = bio_alloc(GFP_NOIO, 1);
if (!discard_parent) {
DMWARN("%s: unable to allocate top level discard bio for passdown. Skipping passdown.",
@@ -1114,19 +1127,6 @@ static void process_prepared_discard_passdown_pt1(struct dm_thin_new_mapping *m)
end_discard(&op, r);
}
}
/*
* Increment the unmapped blocks. This prevents a race between the
* passdown io and reallocation of freed blocks.
*/
r = dm_pool_inc_data_range(pool->pmd, m->data_block, data_end);
if (r) {
metadata_operation_failed(pool, "dm_pool_inc_data_range", r);
bio_io_error(m->bio);
cell_defer_no_holder(tc, m->cell);
mempool_free(m, pool->mapping_pool);
return;
}
}
static void process_prepared_discard_passdown_pt2(struct dm_thin_new_mapping *m)

View File

@@ -45,7 +45,7 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
mutex_init(&ctx->mapping_lock);
ctx->mapping = NULL;
if (cxl_is_psl8(afu)) {
if (cxl_is_power8()) {
spin_lock_init(&ctx->sste_lock);
/*
@@ -189,7 +189,7 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
if (start + len > ctx->afu->adapter->ps_size)
return -EINVAL;
if (cxl_is_psl9(ctx->afu)) {
if (cxl_is_power9()) {
/*
* Make sure there is a valid problem state
* area space for this AFU.
@@ -324,7 +324,7 @@ static void reclaim_ctx(struct rcu_head *rcu)
{
struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
if (cxl_is_psl8(ctx->afu))
if (cxl_is_power8())
free_page((u64)ctx->sstp);
if (ctx->ff_page)
__free_page(ctx->ff_page);

View File

@@ -357,6 +357,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_PSL9_DSISR_An_PF_RGP 0x0000000000000090ULL /* PTE not found (Radix Guest (parent)) 0b10010000 */
#define CXL_PSL9_DSISR_An_PF_HRH 0x0000000000000094ULL /* PTE not found (HPT/Radix Host) 0b10010100 */
#define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL /* PTE not found (STEG VA) 0b10011100 */
#define CXL_PSL9_DSISR_An_URTCH 0x00000000000000B4ULL /* Unsupported Radix Tree Configuration 0b10110100 */
/****** CXL_PSL_TFC_An ******************************************************/
#define CXL_PSL_TFC_An_A (1ull << (63-28)) /* Acknowledge non-translation fault */
@@ -844,24 +845,15 @@ static inline bool cxl_is_power8(void)
static inline bool cxl_is_power9(void)
{
/* intermediate solution */
if (!cxl_is_power8() &&
(cpu_has_feature(CPU_FTRS_POWER9) ||
cpu_has_feature(CPU_FTR_POWER9_DD1)))
if (pvr_version_is(PVR_POWER9))
return true;
return false;
}
static inline bool cxl_is_psl8(struct cxl_afu *afu)
static inline bool cxl_is_power9_dd1(void)
{
if (afu->adapter->caia_major == 1)
return true;
return false;
}
static inline bool cxl_is_psl9(struct cxl_afu *afu)
{
if (afu->adapter->caia_major == 2)
if ((pvr_version_is(PVR_POWER9)) &&
cpu_has_feature(CPU_FTR_POWER9_DD1))
return true;
return false;
}

View File

@@ -187,7 +187,7 @@ static struct mm_struct *get_mem_context(struct cxl_context *ctx)
static bool cxl_is_segment_miss(struct cxl_context *ctx, u64 dsisr)
{
if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DS))
if ((cxl_is_power8() && (dsisr & CXL_PSL_DSISR_An_DS)))
return true;
return false;
@@ -195,15 +195,22 @@ static bool cxl_is_segment_miss(struct cxl_context *ctx, u64 dsisr)
static bool cxl_is_page_fault(struct cxl_context *ctx, u64 dsisr)
{
if ((cxl_is_psl8(ctx->afu)) && (dsisr & CXL_PSL_DSISR_An_DM))
u64 crs; /* Translation Checkout Response Status */
if ((cxl_is_power8()) && (dsisr & CXL_PSL_DSISR_An_DM))
return true;
if ((cxl_is_psl9(ctx->afu)) &&
((dsisr & CXL_PSL9_DSISR_An_CO_MASK) &
(CXL_PSL9_DSISR_An_PF_SLR | CXL_PSL9_DSISR_An_PF_RGC |
CXL_PSL9_DSISR_An_PF_RGP | CXL_PSL9_DSISR_An_PF_HRH |
CXL_PSL9_DSISR_An_PF_STEG)))
return true;
if (cxl_is_power9()) {
crs = (dsisr & CXL_PSL9_DSISR_An_CO_MASK);
if ((crs == CXL_PSL9_DSISR_An_PF_SLR) ||
(crs == CXL_PSL9_DSISR_An_PF_RGC) ||
(crs == CXL_PSL9_DSISR_An_PF_RGP) ||
(crs == CXL_PSL9_DSISR_An_PF_HRH) ||
(crs == CXL_PSL9_DSISR_An_PF_STEG) ||
(crs == CXL_PSL9_DSISR_An_URTCH)) {
return true;
}
}
return false;
}

View File

@@ -329,8 +329,15 @@ static int __init init_cxl(void)
cxl_debugfs_init();
if ((rc = register_cxl_calls(&cxl_calls)))
goto err;
/*
* we don't register the callback on P9. slb callack is only
* used for the PSL8 MMU and CX4.
*/
if (cxl_is_power8()) {
rc = register_cxl_calls(&cxl_calls);
if (rc)
goto err;
}
if (cpu_has_feature(CPU_FTR_HVMODE)) {
cxl_ops = &cxl_native_ops;
@@ -347,7 +354,8 @@ static int __init init_cxl(void)
return 0;
err1:
unregister_cxl_calls(&cxl_calls);
if (cxl_is_power8())
unregister_cxl_calls(&cxl_calls);
err:
cxl_debugfs_exit();
cxl_file_exit();
@@ -366,7 +374,8 @@ static void exit_cxl(void)
cxl_debugfs_exit();
cxl_file_exit();
unregister_cxl_calls(&cxl_calls);
if (cxl_is_power8())
unregister_cxl_calls(&cxl_calls);
idr_destroy(&cxl_adapter_idr);
}

View File

@@ -105,11 +105,16 @@ static int native_afu_reset(struct cxl_afu *afu)
CXL_AFU_Cntl_An_RS_MASK | CXL_AFU_Cntl_An_ES_MASK,
false);
/* Re-enable any masked interrupts */
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
serr &= ~CXL_PSL_SERR_An_IRQ_MASKS;
cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
/*
* Re-enable any masked interrupts when the AFU is not
* activated to avoid side effects after attaching a process
* in dedicated mode.
*/
if (afu->current_mode == 0) {
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
serr &= ~CXL_PSL_SERR_An_IRQ_MASKS;
cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
}
return rc;
}
@@ -139,9 +144,9 @@ int cxl_psl_purge(struct cxl_afu *afu)
pr_devel("PSL purge request\n");
if (cxl_is_psl8(afu))
if (cxl_is_power8())
trans_fault = CXL_PSL_DSISR_TRANS;
if (cxl_is_psl9(afu))
if (cxl_is_power9())
trans_fault = CXL_PSL9_DSISR_An_TF;
if (!cxl_ops->link_ok(afu->adapter, afu)) {
@@ -603,7 +608,7 @@ static u64 calculate_sr(struct cxl_context *ctx)
if (!test_tsk_thread_flag(current, TIF_32BIT))
sr |= CXL_PSL_SR_An_SF;
}
if (cxl_is_psl9(ctx->afu)) {
if (cxl_is_power9()) {
if (radix_enabled())
sr |= CXL_PSL_SR_An_XLAT_ror;
else
@@ -1117,10 +1122,10 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
static bool cxl_is_translation_fault(struct cxl_afu *afu, u64 dsisr)
{
if ((cxl_is_psl8(afu)) && (dsisr & CXL_PSL_DSISR_TRANS))
if ((cxl_is_power8()) && (dsisr & CXL_PSL_DSISR_TRANS))
return true;
if ((cxl_is_psl9(afu)) && (dsisr & CXL_PSL9_DSISR_An_TF))
if ((cxl_is_power9()) && (dsisr & CXL_PSL9_DSISR_An_TF))
return true;
return false;
@@ -1194,10 +1199,10 @@ static void native_irq_wait(struct cxl_context *ctx)
if (ph != ctx->pe)
return;
dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
if (cxl_is_psl8(ctx->afu) &&
if (cxl_is_power8() &&
((dsisr & CXL_PSL_DSISR_PENDING) == 0))
return;
if (cxl_is_psl9(ctx->afu) &&
if (cxl_is_power9() &&
((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
return;
/*

View File

@@ -436,7 +436,7 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci
/* nMMU_ID Defaults to: b000001001*/
xsl_dsnctl |= ((u64)0x09 << (63-28));
if (cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1)) {
if (!(cxl_is_power9_dd1())) {
/*
* Used to identify CAPI packets which should be sorted into
* the Non-Blocking queues by the PHB. This field should match
@@ -491,7 +491,7 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci
cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x40000003FFFF0000ULL);
/* Disable vc dd1 fix */
if ((cxl_is_power9() && cpu_has_feature(CPU_FTR_POWER9_DD1)))
if (cxl_is_power9_dd1())
cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0400000000000001ULL);
return 0;
@@ -1439,8 +1439,7 @@ int cxl_pci_reset(struct cxl *adapter)
* The adapter is about to be reset, so ignore errors.
* Not supported on P9 DD1
*/
if ((cxl_is_power8()) ||
((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
if ((cxl_is_power8()) || (!(cxl_is_power9_dd1())))
cxl_data_cache_flush(adapter);
/* pcie_warm_reset requests a fundamental pci reset which includes a
@@ -1750,7 +1749,6 @@ static const struct cxl_service_layer_ops psl9_ops = {
.debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
.debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
.psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
.debugfs_stop_trace = cxl_stop_trace_psl9,
.write_timebase_ctrl = write_timebase_ctrl_psl9,
.timebase_read = timebase_read_psl9,
@@ -1889,8 +1887,7 @@ static void cxl_pci_remove_adapter(struct cxl *adapter)
* Flush adapter datacache as its about to be removed.
* Not supported on P9 DD1.
*/
if ((cxl_is_power8()) ||
((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
if ((cxl_is_power8()) || (!(cxl_is_power9_dd1())))
cxl_data_cache_flush(adapter);
cxl_deconfigure_adapter(adapter);

View File

@@ -756,6 +756,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
struct net_device *dev = dev_id;
struct arcnet_local *lp;
int recbuf, status, diagstatus, didsomething, boguscount;
unsigned long flags;
int retval = IRQ_NONE;
arc_printk(D_DURING, dev, "\n");
@@ -765,7 +766,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
lp = netdev_priv(dev);
BUG_ON(!lp);
spin_lock(&lp->lock);
spin_lock_irqsave(&lp->lock, flags);
/* RESET flag was enabled - if device is not running, we must
* clear it right away (but nothing else).
@@ -774,7 +775,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
if (lp->hw.status(dev) & RESETflag)
lp->hw.command(dev, CFLAGScmd | RESETclear);
lp->hw.intmask(dev, 0);
spin_unlock(&lp->lock);
spin_unlock_irqrestore(&lp->lock, flags);
return retval;
}
@@ -998,7 +999,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
udelay(1);
lp->hw.intmask(dev, lp->intmask);
spin_unlock(&lp->lock);
spin_unlock_irqrestore(&lp->lock, flags);
return retval;
}
EXPORT_SYMBOL(arcnet_interrupt);

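The conversion above moves the handler from spin_lock() to spin_lock_irqsave(), which saves and restores the caller's interrupt state rather than assuming it. A toy model of why the irqsave pair is the safe default when the lock can be taken with interrupts in either state:

    #include <stdio.h>

    /* Toy model: irqsave records whatever interrupt state the caller
     * had, irqrestore puts back exactly that, so the pair is correct
     * whether the lock is taken with interrupts on or off. */
    static int irqs_enabled = 1;

    static void lock_irqsave(unsigned long *flags)
    {
        *flags = irqs_enabled;
        irqs_enabled = 0;          /* interrupts off inside the section */
    }

    static void unlock_irqrestore(unsigned long flags)
    {
        irqs_enabled = (int)flags; /* restore the caller's state */
    }

    int main(void)
    {
        unsigned long flags;

        lock_irqsave(&flags);
        /* ... critical section ... */
        unlock_irqrestore(flags);

        printf("irqs_enabled=%d\n", irqs_enabled); /* back to 1 */
        return 0;
    }
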
View File

@@ -212,7 +212,7 @@ static int ack_tx(struct net_device *dev, int acked)
ackpkt->soft.cap.proto = 0; /* using protocol 0 for acknowledge */
ackpkt->soft.cap.mes.ack = acked;
arc_printk(D_PROTO, dev, "Ackknowledge for cap packet %x.\n",
arc_printk(D_PROTO, dev, "Acknowledge for cap packet %x.\n",
*((int *)&ackpkt->soft.cap.cookie[0]));
ackskb->protocol = cpu_to_be16(ETH_P_ARCNET);

View File

@@ -135,6 +135,7 @@ static int com20020pci_probe(struct pci_dev *pdev,
for (i = 0; i < ci->devcount; i++) {
struct com20020_pci_channel_map *cm = &ci->chan_map_tbl[i];
struct com20020_dev *card;
int dev_id_mask = 0xf;
dev = alloc_arcdev(device);
if (!dev) {
@@ -166,6 +167,7 @@ static int com20020pci_probe(struct pci_dev *pdev,
arcnet_outb(0x00, ioaddr, COM20020_REG_W_COMMAND);
arcnet_inb(ioaddr, COM20020_REG_R_DIAGSTAT);
SET_NETDEV_DEV(dev, &pdev->dev);
dev->base_addr = ioaddr;
dev->dev_addr[0] = node;
dev->irq = pdev->irq;
@@ -179,8 +181,8 @@ static int com20020pci_probe(struct pci_dev *pdev,
/* Get the dev_id from the PLX rotary coder */
if (!strncmp(ci->name, "EAE PLX-PCI MA1", 15))
dev->dev_id = 0xc;
dev->dev_id ^= inb(priv->misc + ci->rotary) >> 4;
dev_id_mask = 0x3;
dev->dev_id = (inb(priv->misc + ci->rotary) >> 4) & dev_id_mask;
snprintf(dev->name, sizeof(dev->name), "arc%d-%d", dev->dev_id, i);

View File

@@ -246,8 +246,6 @@ int com20020_found(struct net_device *dev, int shared)
return -ENODEV;
}
dev->base_addr = ioaddr;
arc_printk(D_NORMAL, dev, "%s: station %02Xh found at %03lXh, IRQ %d.\n",
lp->card_name, dev->dev_addr[0], dev->base_addr, dev->irq);

View File

@@ -12729,7 +12729,7 @@ static int bnx2x_set_mc_list(struct bnx2x *bp)
} else {
/* If no mc addresses are required, flush the configuration */
rc = bnx2x_config_mcast(bp, &rparam, BNX2X_MCAST_CMD_DEL);
if (rc)
if (rc < 0)
BNX2X_ERR("Failed to clear multicast configuration %d\n",
rc);
}

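The `if (rc)` to `if (rc < 0)` change above matters because some helpers return a positive count on success and a negative errno only on failure; treating any non-zero value as an error logs false failures. A sketch of that return convention, where config_mcast_sketch and its values are illustrative:

    #include <stdio.h>

    /* Hypothetical helper in the kernel convention: negative errno on
     * failure, zero or a positive count on success. */
    static int config_mcast_sketch(int pending)
    {
        if (pending < 0)
            return -22;   /* -EINVAL: a real failure */
        return pending;   /* >= 0: success, possibly work outstanding */
    }

    int main(void)
    {
        int rc = config_mcast_sketch(3);

        if (rc < 0)       /* "if (rc)" would misfire on rc == 3 */
            printf("failed: %d\n", rc);
        else
            printf("ok, %d commands pending\n", rc);
        return 0;
    }
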
View File

@@ -1301,10 +1301,11 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
cp_cons = NEXT_CMP(cp_cons);
}
if (unlikely(agg_bufs > MAX_SKB_FRAGS)) {
if (unlikely(agg_bufs > MAX_SKB_FRAGS || TPA_END_ERRORS(tpa_end1))) {
bnxt_abort_tpa(bp, bnapi, cp_cons, agg_bufs);
netdev_warn(bp->dev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n",
agg_bufs, (int)MAX_SKB_FRAGS);
if (agg_bufs > MAX_SKB_FRAGS)
netdev_warn(bp->dev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n",
agg_bufs, (int)MAX_SKB_FRAGS);
return NULL;
}
@@ -1562,6 +1563,45 @@ next_rx_no_prod:
return rc;
}
/* In netpoll mode, if we are using a combined completion ring, we need to
* discard the rx packets and recycle the buffers.
*/
static int bnxt_force_rx_discard(struct bnxt *bp, struct bnxt_napi *bnapi,
u32 *raw_cons, u8 *event)
{
struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
u32 tmp_raw_cons = *raw_cons;
struct rx_cmp_ext *rxcmp1;
struct rx_cmp *rxcmp;
u16 cp_cons;
u8 cmp_type;
cp_cons = RING_CMP(tmp_raw_cons);
rxcmp = (struct rx_cmp *)
&cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)];
tmp_raw_cons = NEXT_RAW_CMP(tmp_raw_cons);
cp_cons = RING_CMP(tmp_raw_cons);
rxcmp1 = (struct rx_cmp_ext *)
&cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)];
if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
return -EBUSY;
cmp_type = RX_CMP_TYPE(rxcmp);
if (cmp_type == CMP_TYPE_RX_L2_CMP) {
rxcmp1->rx_cmp_cfa_code_errors_v2 |=
cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR);
} else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
struct rx_tpa_end_cmp_ext *tpa_end1;
tpa_end1 = (struct rx_tpa_end_cmp_ext *)rxcmp1;
tpa_end1->rx_tpa_end_cmp_errors_v2 |=
cpu_to_le32(RX_TPA_END_CMP_ERRORS);
}
return bnxt_rx_pkt(bp, bnapi, raw_cons, event);
}
#define BNXT_GET_EVENT_PORT(data) \
((data) & \
ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_PORT_ID_MASK)
@@ -1744,7 +1784,11 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
if (unlikely(tx_pkts > bp->tx_wake_thresh))
rx_pkts = budget;
} else if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
if (likely(budget))
rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
else
rc = bnxt_force_rx_discard(bp, bnapi, &raw_cons,
&event);
if (likely(rc >= 0))
rx_pkts += rc;
else if (rc == -EBUSY) /* partial completion */
@@ -6663,12 +6707,11 @@ static void bnxt_poll_controller(struct net_device *dev)
struct bnxt *bp = netdev_priv(dev);
int i;
for (i = 0; i < bp->cp_nr_rings; i++) {
struct bnxt_irq *irq = &bp->irq_tbl[i];
/* Only process tx rings/combined rings in netpoll mode. */
for (i = 0; i < bp->tx_nr_rings; i++) {
struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
disable_irq(irq->vector);
irq->handler(irq->vector, bp->bnapi[i]);
enable_irq(irq->vector);
napi_schedule(&txr->bnapi->napi);
}
}
#endif

View File

@@ -374,12 +374,16 @@ struct rx_tpa_end_cmp_ext {
__le32 rx_tpa_end_cmp_errors_v2;
#define RX_TPA_END_CMP_V2 (0x1 << 0)
#define RX_TPA_END_CMP_ERRORS (0x7fff << 1)
#define RX_TPA_END_CMP_ERRORS (0x3 << 1)
#define RX_TPA_END_CMPL_ERRORS_SHIFT 1
u32 rx_tpa_end_cmp_start_opaque;
};
#define TPA_END_ERRORS(rx_tpa_end_ext) \
((rx_tpa_end_ext)->rx_tpa_end_cmp_errors_v2 & \
cpu_to_le32(RX_TPA_END_CMP_ERRORS))
#define DB_IDX_MASK 0xffffff
#define DB_IDX_VALID (0x1 << 26)
#define DB_IRQ_DIS (0x1 << 27)

View File

@@ -2,6 +2,7 @@ config FSL_FMAN
tristate "FMan support"
depends on FSL_SOC || ARCH_LAYERSCAPE || COMPILE_TEST
select GENERIC_ALLOCATOR
depends on HAS_DMA
select PHYLIB
default n
help

View File

@@ -3334,6 +3334,9 @@ static int mlxsw_sp_inetaddr_vlan_event(struct net_device *vlan_dev,
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_lower_get(vlan_dev);
u16 vid = vlan_dev_vlan_id(vlan_dev);
if (netif_is_bridge_port(vlan_dev))
return 0;
if (mlxsw_sp_port_dev_check(real_dev))
return mlxsw_sp_inetaddr_vport_event(vlan_dev, real_dev, event,
vid);

View File

@@ -1505,8 +1505,8 @@ static int ofdpa_port_ipv4_nh(struct ofdpa_port *ofdpa_port,
*index = entry->index;
resolved = false;
} else if (removing) {
ofdpa_neigh_del(trans, found);
*index = found->index;
ofdpa_neigh_del(trans, found);
} else if (updating) {
ofdpa_neigh_update(found, trans, NULL, false);
resolved = !is_zero_ether_addr(found->eth_dst);

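The reorder above is a use-after-free fix: ofdpa_neigh_del() can free `found`, so found->index must be read first. The same rule in miniature:

    #include <stdio.h>
    #include <stdlib.h>

    struct neigh_sketch { unsigned int index; };

    static void neigh_del(struct neigh_sketch *n)
    {
        free(n);   /* after this, n must not be dereferenced */
    }

    int main(void)
    {
        struct neigh_sketch *found = malloc(sizeof(*found));
        unsigned int index;

        if (!found)
            return 1;
        found->index = 42;

        /* Copy out every field still needed *before* the teardown call;
         * the buggy order read found->index after neigh_del(found). */
        index = found->index;
        neigh_del(found);

        printf("index=%u\n", index);
        return 0;
    }
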
View File

@@ -4172,7 +4172,7 @@ found:
* recipients
*/
if (is_mc_recip) {
MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_IN_LEN);
MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
unsigned int depth, i;
memset(inbuf, 0, sizeof(inbuf));
@@ -4320,7 +4320,7 @@ static int efx_ef10_filter_remove_internal(struct efx_nic *efx,
efx_ef10_filter_set_entry(table, filter_idx, NULL, 0);
} else {
efx_mcdi_display_error(efx, MC_CMD_FILTER_OP,
MC_CMD_FILTER_OP_IN_LEN,
MC_CMD_FILTER_OP_EXT_IN_LEN,
NULL, 0, rc);
}
}
@@ -4453,7 +4453,7 @@ static s32 efx_ef10_filter_rfs_insert(struct efx_nic *efx,
struct efx_filter_spec *spec)
{
struct efx_ef10_filter_table *table = efx->filter_state;
MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_IN_LEN);
MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
struct efx_filter_spec *saved_spec;
unsigned int hash, i, depth = 1;
bool replacing = false;
@@ -4940,7 +4940,7 @@ not_restored:
static void efx_ef10_filter_table_remove(struct efx_nic *efx)
{
struct efx_ef10_filter_table *table = efx->filter_state;
MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_IN_LEN);
MCDI_DECLARE_BUF(inbuf, MC_CMD_FILTER_OP_EXT_IN_LEN);
struct efx_filter_spec *spec;
unsigned int filter_idx;
int rc;
@@ -5105,6 +5105,7 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
/* Insert/renew filters */
for (i = 0; i < addr_count; i++) {
EFX_WARN_ON_PARANOID(ids[i] != EFX_EF10_FILTER_ID_INVALID);
efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
efx_filter_set_eth_local(&spec, vlan->vid, addr_list[i].addr);
rc = efx_ef10_filter_insert(efx, &spec, true);
@@ -5122,11 +5123,11 @@ static int efx_ef10_filter_insert_addr_list(struct efx_nic *efx,
}
return rc;
} else {
/* mark as not inserted, and carry on */
rc = EFX_EF10_FILTER_ID_INVALID;
/* keep invalid ID, and carry on */
}
} else {
ids[i] = efx_ef10_filter_get_unsafe_id(rc);
}
ids[i] = efx_ef10_filter_get_unsafe_id(rc);
}
if (multicast && rollback) {

View File

@@ -90,7 +90,7 @@ int ti_cm_get_macid(struct device *dev, int slave, u8 *mac_addr)
if (of_device_is_compatible(dev->of_node, "ti,dm816-emac"))
return cpsw_am33xx_cm_get_macid(dev, 0x30, slave, mac_addr);
if (of_machine_is_compatible("ti,am4372"))
if (of_machine_is_compatible("ti,am43"))
return cpsw_am33xx_cm_get_macid(dev, 0x630, slave, mac_addr);
if (of_machine_is_compatible("ti,dra7"))

View File

@@ -776,7 +776,7 @@ static int netvsc_set_channels(struct net_device *net,
channels->rx_count || channels->tx_count || channels->other_count)
return -EINVAL;
if (count > net->num_tx_queues || count > net->num_rx_queues)
if (count > net->num_tx_queues || count > VRSS_CHANNEL_MAX)
return -EINVAL;
if (!nvdev || nvdev->destroy)
@@ -1203,7 +1203,7 @@ static int netvsc_set_rxfh(struct net_device *dev, const u32 *indir,
rndis_dev = ndev->extension;
if (indir) {
for (i = 0; i < ITAB_NUM; i++)
if (indir[i] >= dev->num_rx_queues)
if (indir[i] >= VRSS_CHANNEL_MAX)
return -EINVAL;
for (i = 0; i < ITAB_NUM; i++)

View File

@@ -39,16 +39,20 @@
#define MACVLAN_HASH_SIZE (1<<MACVLAN_HASH_BITS)
#define MACVLAN_BC_QUEUE_LEN 1000
#define MACVLAN_F_PASSTHRU 1
#define MACVLAN_F_ADDRCHANGE 2
struct macvlan_port {
struct net_device *dev;
struct hlist_head vlan_hash[MACVLAN_HASH_SIZE];
struct list_head vlans;
struct sk_buff_head bc_queue;
struct work_struct bc_work;
bool passthru;
u32 flags;
int count;
struct hlist_head vlan_source_hash[MACVLAN_HASH_SIZE];
DECLARE_BITMAP(mc_filter, MACVLAN_MC_FILTER_SZ);
unsigned char perm_addr[ETH_ALEN];
};
struct macvlan_source_entry {
@@ -66,6 +70,31 @@ struct macvlan_skb_cb {
static void macvlan_port_destroy(struct net_device *dev);
static inline bool macvlan_passthru(const struct macvlan_port *port)
{
return port->flags & MACVLAN_F_PASSTHRU;
}
static inline void macvlan_set_passthru(struct macvlan_port *port)
{
port->flags |= MACVLAN_F_PASSTHRU;
}
static inline bool macvlan_addr_change(const struct macvlan_port *port)
{
return port->flags & MACVLAN_F_ADDRCHANGE;
}
static inline void macvlan_set_addr_change(struct macvlan_port *port)
{
port->flags |= MACVLAN_F_ADDRCHANGE;
}
static inline void macvlan_clear_addr_change(struct macvlan_port *port)
{
port->flags &= ~MACVLAN_F_ADDRCHANGE;
}
/* Hash Ethernet address */
static u32 macvlan_eth_hash(const unsigned char *addr)
{
@@ -181,11 +210,12 @@ static void macvlan_hash_change_addr(struct macvlan_dev *vlan,
static bool macvlan_addr_busy(const struct macvlan_port *port,
const unsigned char *addr)
{
/* Test to see if the specified multicast address is
/* Test to see if the specified address is
* currently in use by the underlying device or
* another macvlan.
*/
if (ether_addr_equal_64bits(port->dev->dev_addr, addr))
if (!macvlan_passthru(port) && !macvlan_addr_change(port) &&
ether_addr_equal_64bits(port->dev->dev_addr, addr))
return true;
if (macvlan_hash_lookup(port, addr))
@@ -445,7 +475,7 @@ static rx_handler_result_t macvlan_handle_frame(struct sk_buff **pskb)
}
macvlan_forward_source(skb, port, eth->h_source);
if (port->passthru)
if (macvlan_passthru(port))
vlan = list_first_or_null_rcu(&port->vlans,
struct macvlan_dev, list);
else
@@ -574,7 +604,7 @@ static int macvlan_open(struct net_device *dev)
struct net_device *lowerdev = vlan->lowerdev;
int err;
if (vlan->port->passthru) {
if (macvlan_passthru(vlan->port)) {
if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC)) {
err = dev_set_promiscuity(lowerdev, 1);
if (err < 0)
@@ -649,7 +679,7 @@ static int macvlan_stop(struct net_device *dev)
dev_uc_unsync(lowerdev, dev);
dev_mc_unsync(lowerdev, dev);
if (vlan->port->passthru) {
if (macvlan_passthru(vlan->port)) {
if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC))
dev_set_promiscuity(lowerdev, -1);
goto hash_del;
@@ -672,6 +702,7 @@ static int macvlan_sync_address(struct net_device *dev, unsigned char *addr)
{
struct macvlan_dev *vlan = netdev_priv(dev);
struct net_device *lowerdev = vlan->lowerdev;
struct macvlan_port *port = vlan->port;
int err;
if (!(dev->flags & IFF_UP)) {
@@ -682,7 +713,7 @@ static int macvlan_sync_address(struct net_device *dev, unsigned char *addr)
if (macvlan_addr_busy(vlan->port, addr))
return -EBUSY;
if (!vlan->port->passthru) {
if (!macvlan_passthru(port)) {
err = dev_uc_add(lowerdev, addr);
if (err)
return err;
@@ -692,6 +723,15 @@ static int macvlan_sync_address(struct net_device *dev, unsigned char *addr)
macvlan_hash_change_addr(vlan, addr);
}
if (macvlan_passthru(port) && !macvlan_addr_change(port)) {
/* Since addr_change isn't set, we are here due to lower
* device change. Save the lower-dev address so we can
* restore it later.
*/
ether_addr_copy(vlan->port->perm_addr,
lowerdev->dev_addr);
}
macvlan_clear_addr_change(port);
return 0;
}
@@ -703,7 +743,12 @@ static int macvlan_set_mac_address(struct net_device *dev, void *p)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
/* If the addresses are the same, this is a no-op */
if (ether_addr_equal(dev->dev_addr, addr->sa_data))
return 0;
if (vlan->mode == MACVLAN_MODE_PASSTHRU) {
macvlan_set_addr_change(vlan->port);
dev_set_mac_address(vlan->lowerdev, addr);
return 0;
}
@@ -928,7 +973,7 @@ static int macvlan_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
/* Support unicast filter only on passthru devices.
* Multicast filter should be allowed on all devices.
*/
if (!vlan->port->passthru && is_unicast_ether_addr(addr))
if (!macvlan_passthru(vlan->port) && is_unicast_ether_addr(addr))
return -EOPNOTSUPP;
if (flags & NLM_F_REPLACE)
@@ -952,7 +997,7 @@ static int macvlan_fdb_del(struct ndmsg *ndm, struct nlattr *tb[],
/* Support unicast filter only on passthru devices.
* Multicast filter should be allowed on all devices.
*/
if (!vlan->port->passthru && is_unicast_ether_addr(addr))
if (!macvlan_passthru(vlan->port) && is_unicast_ether_addr(addr))
return -EOPNOTSUPP;
if (is_unicast_ether_addr(addr))
@@ -1120,8 +1165,8 @@ static int macvlan_port_create(struct net_device *dev)
if (port == NULL)
return -ENOMEM;
port->passthru = false;
port->dev = dev;
ether_addr_copy(port->perm_addr, dev->dev_addr);
INIT_LIST_HEAD(&port->vlans);
for (i = 0; i < MACVLAN_HASH_SIZE; i++)
INIT_HLIST_HEAD(&port->vlan_hash[i]);
@@ -1161,6 +1206,18 @@ static void macvlan_port_destroy(struct net_device *dev)
kfree_skb(skb);
}
/* If the lower device address has been changed by passthru
* macvlan, put it back.
*/
if (macvlan_passthru(port) &&
!ether_addr_equal(port->dev->dev_addr, port->perm_addr)) {
struct sockaddr sa;
sa.sa_family = port->dev->type;
memcpy(&sa.sa_data, port->perm_addr, port->dev->addr_len);
dev_set_mac_address(port->dev, &sa);
}
kfree(port);
}
@@ -1326,7 +1383,7 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
port = macvlan_port_get_rtnl(lowerdev);
/* Only 1 macvlan device can be created in passthru mode */
if (port->passthru) {
if (macvlan_passthru(port)) {
/* The macvlan port must be not created this time,
* still goto destroy_macvlan_port for readability.
*/
@@ -1352,7 +1409,7 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
err = -EINVAL;
goto destroy_macvlan_port;
}
port->passthru = true;
macvlan_set_passthru(port);
eth_hw_addr_inherit(dev, lowerdev);
}
@@ -1434,7 +1491,7 @@ static int macvlan_changelink(struct net_device *dev,
if (data && data[IFLA_MACVLAN_FLAGS]) {
__u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]);
bool promisc = (flags ^ vlan->flags) & MACVLAN_FLAG_NOPROMISC;
if (vlan->port->passthru && promisc) {
if (macvlan_passthru(vlan->port) && promisc) {
int err;
if (flags & MACVLAN_FLAG_NOPROMISC)
@@ -1597,7 +1654,7 @@ static int macvlan_device_event(struct notifier_block *unused,
}
break;
case NETDEV_CHANGEADDR:
if (!port->passthru)
if (!macvlan_passthru(port))
return NOTIFY_DONE;
vlan = list_first_entry_or_null(&port->vlans,

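A large share of the hunk above is mechanical: the lone `bool passthru` becomes a flags word with inline accessors, so the later ADDRCHANGE state fits without another struct member. The pattern in isolation, with PORT_F_* names as illustrative stand-ins:

    #include <stdbool.h>
    #include <stdio.h>

    #define PORT_F_PASSTHRU   1
    #define PORT_F_ADDRCHANGE 2

    struct port_sketch { unsigned int flags; };

    static bool port_passthru(const struct port_sketch *p)
    {
        return p->flags & PORT_F_PASSTHRU;
    }

    static void port_set_passthru(struct port_sketch *p)
    {
        p->flags |= PORT_F_PASSTHRU;
    }

    static void port_clear_addr_change(struct port_sketch *p)
    {
        p->flags &= ~PORT_F_ADDRCHANGE;
    }

    int main(void)
    {
        struct port_sketch p = { .flags = PORT_F_ADDRCHANGE };

        port_set_passthru(&p);
        port_clear_addr_change(&p);
        printf("passthru=%d flags=%#x\n", port_passthru(&p), p.flags);
        return 0;
    }
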
View File

@@ -908,7 +908,7 @@ static void decode_txts(struct dp83640_private *dp83640,
if (overflow) {
pr_debug("tx timestamp queue overflow, count %d\n", overflow);
while (skb) {
skb_complete_tx_timestamp(skb, NULL);
kfree_skb(skb);
skb = skb_dequeue(&dp83640->tx_queue);
}
return;

View File

@@ -619,6 +619,8 @@ static int ksz9031_read_status(struct phy_device *phydev)
if ((regval & 0xFF) == 0xFF) {
phy_init_hw(phydev);
phydev->link = 0;
if (phydev->drv->config_intr && phy_interrupt_is_valid(phydev))
phydev->drv->config_intr(phydev);
}
return 0;

View File

@@ -1722,6 +1722,18 @@ static const struct driver_info lenovo_info = {
.tx_fixup = ax88179_tx_fixup,
};
static const struct driver_info belkin_info = {
.description = "Belkin USB Ethernet Adapter",
.bind = ax88179_bind,
.unbind = ax88179_unbind,
.status = ax88179_status,
.link_reset = ax88179_link_reset,
.reset = ax88179_reset,
.flags = FLAG_ETHER | FLAG_FRAMING_AX,
.rx_fixup = ax88179_rx_fixup,
.tx_fixup = ax88179_tx_fixup,
};
static const struct usb_device_id products[] = {
{
/* ASIX AX88179 10/100/1000 */
@@ -1751,6 +1763,10 @@ static const struct usb_device_id products[] = {
/* Lenovo OneLinkDock Gigabit LAN */
USB_DEVICE(0x17ef, 0x304b),
.driver_info = (unsigned long)&lenovo_info,
}, {
/* Belkin B2B128 USB 3.0 Hub + Gigabit Ethernet Adapter */
USB_DEVICE(0x050d, 0x0128),
.driver_info = (unsigned long)&belkin_info,
},
{ },
};

View File

@@ -383,7 +383,7 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
tbp = tb;
}
if (tbp[IFLA_IFNAME]) {
if (ifmp && tbp[IFLA_IFNAME]) {
nla_strlcpy(ifname, tbp[IFLA_IFNAME], IFNAMSIZ);
name_assign_type = NET_NAME_USER;
} else {
@@ -402,7 +402,7 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
return PTR_ERR(peer);
}
if (tbp[IFLA_ADDRESS] == NULL)
if (!ifmp || !tbp[IFLA_ADDRESS])
eth_hw_addr_random(peer);
if (ifmp && (dev->ifindex != 0))

View File

@@ -1797,6 +1797,7 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
flush_work(&vi->config_work);
netif_device_detach(vi->dev);
netif_tx_disable(vi->dev);
cancel_delayed_work_sync(&vi->refill);
if (netif_running(vi->dev)) {

Some files were not shown because too many files have changed in this diff.