Compare commits

...

201 Commits

Author SHA1 Message Date
Linus Torvalds
2c6625cd54 Linux 4.2-rc7 2015-08-16 16:34:13 -07:00
Linus Torvalds
8916e0b03e Merge tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
 "A smallish batch of fixes, a little more than expected this late, but
  all fixes are contained to their platforms and seem reasonably low
  risk:

   - a somewhat large SMP fix for ux500 that still seemed warranted to
     include here
   - OMAP DT fixes for pbias regulator specification that broke due to
     some DT reshuffling
   - PCIe IRQ routing bugfix for i.MX
   - networking fixes for keystone
   - runtime PM for OMAP GPMC
   - a couple of error path bug fixes for exynos"

* tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: dts: keystone: Fix the mdio bindings by moving it to soc specific file
  ARM: dts: keystone: fix the clock node for mdio
  memory: omap-gpmc: Don't try to save uninitialized GPMC context
  ARM: imx6: correct i.MX6 PCIe interrupt routing
  ARM: ux500: add an SMP enablement type and move cpu nodes
  ARM: dts: dra7: Fix broken pbias device creation
  ARM: dts: OMAP5: Fix broken pbias device creation
  ARM: dts: OMAP4: Fix broken pbias device creation
  ARM: dts: omap243x: Fix broken pbias device creation
  ARM: EXYNOS: fix double of_node_put() on error path
  ARM: EXYNOS: Fix potential kfree() of ro memory
2015-08-16 15:44:33 -07:00
Linus Torvalds
0f405bf75f Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS bugfix from Ralf Baechle:
 "Only a single MIPS fix - the math when invoking syscall_trace_enter
  was wrong"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: Fix seccomp syscall argument for MIPS64
2015-08-16 15:39:31 -07:00
Linus Torvalds
01565479e9 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Merge x86 fixes from Ingo Molnar:
 "Two followup fixes related to the previous LDT fix"

Also applied a further FPU emulation fix from Andy Lutomirski to the
branch before actually merging it.

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/ldt: Further fix FPU emulation
  x86/ldt: Correct FPU emulation access to LDT
  x86/ldt: Correct LDT access in single stepping logic
2015-08-16 15:11:25 -07:00
Andy Lutomirski
12e244f4b5 x86/ldt: Further fix FPU emulation
The previous fix confused a selector with a segment prefix.  Fix it.

Compile-tested only.

Cc: stable@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: 4809146b86 ("x86/ldt: Correct FPU emulation access to LDT")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-16 15:11:05 -07:00
Jann Horn
8ed1f0e22f fs/fuse: fix ioctl type confusion
fuse_dev_ioctl() performed fuse_get_dev() on a user-supplied fd,
leading to a type confusion issue. Fix it by checking file->f_op.
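
A hedged sketch of the kind of check described above (the oldfd variable and
the surrounding clone-ioctl context are assumptions, not the exact upstream
diff):

    struct file *old = fget(oldfd);
    struct fuse_dev *fud = NULL;

    if (old) {
            /* Only treat the fd as a fuse device if it shares our
             * file_operations; otherwise fuse_get_dev() would
             * reinterpret an unrelated file's private data. */
            if (old->f_op == file->f_op)
                    fud = fuse_get_dev(old);
            fput(old);
    }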

Signed-off-by: Jann Horn <jann@thejh.net>
Acked-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-16 12:35:44 -07:00
Olof Johansson
02149517ac Merge tag 'keystone-dts-late-fixes-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into fixes
ARM: Couple of Keystone MDIO DTS fixes for 4.2-rc6+

These are necessary to get the NIC working on all Keystone
EVMs. A couple of boards are broken without these two fixes.

* tag 'keystone-dts-late-fixes-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone:
  ARM: dts: keystone: Fix the mdio bindings by moving it to soc specific file
  ARM: dts: keystone: fix the clock node for mdio

Signed-off-by: Olof Johansson <olof@lixom.net>
2015-08-16 21:29:57 +02:00
Markos Chandras
9f161439e4 MIPS: Fix seccomp syscall argument for MIPS64
Commit 4c21b8fd8f ("MIPS: seccomp: Handle indirect system calls (o32)")
fixed indirect system calls on O32 but it also introduced a bug for MIPS64
where it erroneously modified the v0 (syscall) register with the assumption
that the syscall offset hasn't been taken into consideration. This breaks
seccomp on MIPS64 n64 and n32 ABIs. We fix this by replacing the addition
with a move instruction.

Fixes: 4c21b8fd8f ("MIPS: seccomp: Handle indirect system calls (o32)")
Cc: <stable@vger.kernel.org> # 3.15+
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10951/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-16 15:00:59 +02:00
Linus Torvalds
1efdb5f0a9 Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
 "This has two libfc fixes for bugs causing rare crashes, one iscsi fix
  for a potential hang on shutdown, and a fix for an I/O blocksize issue
  which caused a regression"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  sd: Fix maximum I/O size for BLOCK_PC requests
  libfc: Fix fc_fcp_cleanup_each_cmd()
  libfc: Fix fc_exch_recv_req() error path
  libiscsi: Fix host busy blocking during connection teardown
2015-08-15 13:54:53 -07:00
Linus Torvalds
45e38cff4f Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM fixes from Paolo Bonzini:
 "Just two very small & simple patches"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86: Use adjustment in guest cycles when handling MSR_IA32_TSC_ADJUST
  KVM: x86: zero IDT limit on entry to SMM
2015-08-14 17:27:52 -07:00
Linus Torvalds
8394a1b715 Merge branch 'akpm' (patches from Andrew)
Merge fixes from Andrew Morton:
 "11 fixes"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  Update maintainers for DRM STI driver
  mm: cma: mark cma_bitmap_maxno() inline in header
  zram: fix pool name truncation
  memory-hotplug: fix wrong edge when hot add a new node
  .mailmap: Andrey Ryabinin has moved
  ipc/sem.c: update/correct memory barriers
  mm/hwpoison: fix panic due to split huge zero page
  ipc,sem: remove unneeded sem_undo_list lock usage in exit_sem()
  ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits
  mm/hwpoison: fix fail isolate hugetlbfs page w/ refcount held
  mm/hwpoison: fix page refcount of unknown non LRU page
2015-08-14 17:05:26 -07:00
Linus Torvalds
fbd9163f1c Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clock fix from Stephen Boyd:
 "A one-liner for a regression found in the PXA clock driver"

* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: pxa: pxa3xx: fix CKEN register access
2015-08-14 16:10:04 -07:00
Benjamin Gaignard
7f11c47605 Update maintainers for DRM STI driver
Add Vincent Abriou and myself as maintainers.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Vincent Abriou <vincent.abriou@st.com>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Gregory Fong
f21838e056 mm: cma: mark cma_bitmap_maxno() inline in header
cma_bitmap_maxno() was marked as static and not static inline, which can
cause warnings about this function not being used if this file is included
in a file that does not call that function, and violates the conventions
used elsewhere.  The two options are to move the function implementation
back to mm/cma.c or make it inline here, and it's simple enough for the
latter to make sense.
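
A minimal illustration of the convention (the function body shown is
representative of the idea, not necessarily the exact kernel code):

    /* In a shared header: a plain 'static' definition triggers an
     * unused-function warning in every file that includes the header
     * without calling the function; 'static inline' does not. */
    static inline unsigned long cma_bitmap_maxno(struct cma *cma)
    {
            return cma->count >> cma->order_per_bit;
    }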

Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Sergey Senozhatsky
4ce321f574 zram: fix pool name truncation
zram_meta_alloc() constructs a pool name for zs_create_pool() call as

    snprintf(pool_name, sizeof(pool_name), "zram%d", device_id);

However, it defines pool name buffer to be only 8 bytes long (minus
trailing zero), which means that we can have only 1000 pool names: zram0
-- zram999.

With CONFIG_ZSMALLOC_STAT enabled an attempt to create a device zram1000
can fail if device zram100 already exists, because snprintf() will
truncate the new pool name to zram100 and pass it to debugfs_create_dir(),
causing:

  debugfs dir <zram100> creation failed
  zram: Error creating memory pool

... and so on.

Fix it by passing zram->disk->disk_name to zram_meta_alloc() instead of
device_id.  We construct the zram%d name earlier and keep it as ->disk_name,
no need to snprintf() it again.
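
A self-contained userspace sketch of the truncation (a demonstration of the
snprintf() behaviour, not the driver code itself):

    #include <stdio.h>

    int main(void)
    {
            char pool_name[8];      /* same size as the old zram buffer */

            /* "zram1000" needs 9 bytes; snprintf() keeps only 7
             * characters plus the terminating NUL, giving "zram100". */
            snprintf(pool_name, sizeof(pool_name), "zram%d", 1000);
            printf("%s\n", pool_name);      /* prints: zram100 */
            return 0;
    }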

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Xishi Qiu
f9126ab924 memory-hotplug: fix wrong edge when hot add a new node
When we add a new node, the edge of memory may be wrong.

e.g. system has 4 nodes, and node3 is movable, node3 mem:[24G-32G],

1. hotremove the node3,
2. then hotadd node3 with a part of memory, mem:[26G-30G],
3. call hotadd_new_pgdat()
        free_area_init_node()
                get_pfn_range_for_nid()
4. it will return wrong start_pfn and end_pfn, because we have not
updated the memblock.

This patch also fixes a BUG_ON during hot-addition, please see
http://marc.info/?l=linux-kernel&m=142961156129456&w=2

Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Andrey Ryabinin
2baf9e8948 .mailmap: Andrey Ryabinin has moved
Update my email address.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Manfred Spraul
3ed1f8a99d ipc/sem.c: update/correct memory barriers
sem_lock() did not properly pair memory barriers:

!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier, otherwise the cpu might perform read
operations before the lock test.

As no primitive exists inside <include/spinlock.h> and since it seems
no one wants another primitive, the code creates a local primitive within
ipc/sem.c.
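
A hedged sketch of what such a local primitive could look like (the macro
name and the choice of smp_rmb() are assumptions, not a quote of the patch):

    /*
     * Pair the spin_unlock_wait()/!spin_is_locked() control dependency
     * with a read barrier so later loads cannot be speculated past the
     * lock test.
     */
    #define ipc_smp_acquire__after_spin_is_unlocked()	smp_rmb()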

With regards to -stable:

The change of sem_wait_array() is a bugfix, the change to sem_lock() is a
nop (just a preprocessor redefinition to improve the readability).  The
bugfix is necessary for all kernels that use sem_wait_array() (i.e.:
starting from 3.10).

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Kirill Tkhai <ktkhai@parallels.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: <stable@vger.kernel.org>	[3.10+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Wanpeng Li
7f6bf39bbd mm/hwpoison: fix panic due to split huge zero page
Bug:

  ------------[ cut here ]------------
  kernel BUG at mm/huge_memory.c:1957!
  invalid opcode: 0000 [#1] SMP
  Modules linked in: snd_hda_codec_hdmi i915 rpcsec_gss_krb5 snd_hda_codec_realtek snd_hda_codec_generic nfsv4 dns_re
  CPU: 2 PID: 2576 Comm: test_huge Not tainted 4.2.0-rc5-mm1+ #27
  Hardware name: Dell Inc. OptiPlex 7020/0F5C5X, BIOS A03 01/08/2015
  task: ffff880204e3d600 ti: ffff8800db16c000 task.ti: ffff8800db16c000
  RIP: split_huge_page_to_list+0xdb/0x120
  Call Trace:
    memory_failure+0x32e/0x7c0
    madvise_hwpoison+0x8b/0x160
    SyS_madvise+0x40/0x240
    ? do_page_fault+0x37/0x90
    entry_SYSCALL_64_fastpath+0x12/0x71
  Code: ff f0 41 ff 4c 24 30 74 0d 31 c0 48 83 c4 08 5b 41 5c 41 5d c9 c3 4c 89 e7 e8 e2 58 fd ff 48 83 c4 08 31 c0
  RIP  split_huge_page_to_list+0xdb/0x120
   RSP <ffff8800db16fde8>
  ---[ end trace aee7ce0df8e44076 ]---

Testcase:

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <errno.h>
    #include <string.h>

    #define MB 1024*1024

    int main(void)
    {
            char *mem;

            posix_memalign((void **)&mem, 2 * MB, 200 * MB);

            madvise(mem, 200 * MB, MADV_HWPOISON);

            free(mem);

            return 0;
    }

A huge zero page is allocated on a page fault without the FAULT_FLAG_WRITE
flag.  The get_user_pages_fast() that is called in madvise_hwpoison() will
get the huge zero page if the page has not been allocated before.  The huge
zero page is a transparent huge page; however, it is not an anonymous page.
memory_failure will split the huge zero page and trigger
BUG_ON(is_huge_zero_page(page));

After commit 98ed2b0052 ("mm/memory-failure: give up error handling
for non-tail-refcounted thp"), memory_failure will not catch non anon
thp from the madvise_hwpoison path and this bug occurs.

Fix it by catching non anon thp in memory_failure in order to not split
huge zero page in madvise_hwpoison path.
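
A hedged sketch of the check (the exact placement, page variables and return
value are assumptions; the log format matches the output quoted below):

    if (!PageAnon(hpage)) {
            pr_err("MCE: %#lx: non anonymous thp\n", pfn);
            put_page(p);
            return -EBUSY;
    }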

After this patch:

  Injecting memory failure for page 0x202800 at 0x7fd8ae800000
  MCE: 0x202800: non anonymous thp
  [...]

[akpm@linux-foundation.org: remove second split, per Wanpeng]
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Herton R. Krzesinski
a979558448 ipc,sem: remove unneeded sem_undo_list lock usage in exit_sem()
After we acquire the sma->sem_perm lock in exit_sem(), we are protected
against a racing IPC_RMID operation.  Also at that point, we are the last
user of sem_undo_list.  Therefore it isn't required that we acquire or use
ulp->lock.

Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Rafael Aquini <aquini@redhat.com>
CC: Aristeu Rozanski <aris@redhat.com>
Cc: David Jeffery <djeffery@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Herton R. Krzesinski
602b8593d2 ipc,sem: fix use after free on IPC_RMID after a task using same semaphore set exits
The current semaphore code allows a potential use after free: in
exit_sem we may free the task's sem_undo_list while there is still
another task looping through the same semaphore set and cleaning the
sem_undo list at freeary function (the task called IPC_RMID for the same
semaphore set).

For example, with a test program [1] running which keeps forking a lot
of processes (which then do a semop call with SEM_UNDO flag), and with
the parent right after removing the semaphore set with IPC_RMID, and a
kernel built with CONFIG_SLAB, CONFIG_SLAB_DEBUG and
CONFIG_DEBUG_SPINLOCK, you can easily see something like the following
in the kernel log:

   Slab corruption (Not tainted): kmalloc-64 start=ffff88003b45c1c0, len=64
   000: 6b 6b 6b 6b 6b 6b 6b 6b 00 6b 6b 6b 6b 6b 6b 6b  kkkkkkkk.kkkkkkk
   010: ff ff ff ff 6b 6b 6b 6b ff ff ff ff ff ff ff ff  ....kkkk........
   Prev obj: start=ffff88003b45c180, len=64
   000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
   010: ff ff ff ff ff ff ff ff c0 fb 01 37 00 88 ff ff  ...........7....
   Next obj: start=ffff88003b45c200, len=64
   000: 00 00 00 00 ad 4e ad de ff ff ff ff 5a 5a 5a 5a  .....N......ZZZZ
   010: ff ff ff ff ff ff ff ff 68 29 a7 3c 00 88 ff ff  ........h).<....
   BUG: spinlock wrong CPU on CPU#2, test/18028
   general protection fault: 0000 [#1] SMP
   Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
   CPU: 2 PID: 18028 Comm: test Not tainted 4.2.0-rc5+ #1
   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
   RIP: spin_dump+0x53/0xc0
   Call Trace:
     spin_bug+0x30/0x40
     do_raw_spin_unlock+0x71/0xa0
     _raw_spin_unlock+0xe/0x10
     freeary+0x82/0x2a0
     ? _raw_spin_lock+0xe/0x10
     semctl_down.clone.0+0xce/0x160
     ? __do_page_fault+0x19a/0x430
     ? __audit_syscall_entry+0xa8/0x100
     SyS_semctl+0x236/0x2c0
     ? syscall_trace_leave+0xde/0x130
     entry_SYSCALL_64_fastpath+0x12/0x71
   Code: 8b 80 88 03 00 00 48 8d 88 60 05 00 00 48 c7 c7 a0 2c a4 81 31 c0 65 8b 15 eb 40 f3 7e e8 08 31 68 00 4d 85 e4 44 8b 4b 08 74 5e <45> 8b 84 24 88 03 00 00 49 8d 8c 24 60 05 00 00 8b 53 04 48 89
   RIP  [<ffffffff810d6053>] spin_dump+0x53/0xc0
    RSP <ffff88003750fd68>
   ---[ end trace 783ebb76612867a0 ]---
   NMI watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [test:18053]
   Modules linked in: 8021q mrp garp stp llc nf_conntrack_ipv4 nf_defrag_ipv4 ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc ppdev input_leds joydev parport_pc parport floppy serio_raw virtio_balloon virtio_rng virtio_console virtio_net iosf_mbi crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr qxl ttm drm_kms_helper drm snd_hda_codec_generic i2c_piix4 snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore crc32c_intel virtio_pci virtio_ring virtio pata_acpi ata_generic [last unloaded: speedstep_lib]
   CPU: 3 PID: 18053 Comm: test Tainted: G      D         4.2.0-rc5+ #1
   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.1-20150318_183358- 04/01/2014
   RIP: native_read_tsc+0x0/0x20
   Call Trace:
     ? delay_tsc+0x40/0x70
     __delay+0xf/0x20
     do_raw_spin_lock+0x96/0x140
     _raw_spin_lock+0xe/0x10
     sem_lock_and_putref+0x11/0x70
     SYSC_semtimedop+0x7bf/0x960
     ? handle_mm_fault+0xbf6/0x1880
     ? dequeue_task_fair+0x79/0x4a0
     ? __do_page_fault+0x19a/0x430
     ? kfree_debugcheck+0x16/0x40
     ? __do_page_fault+0x19a/0x430
     ? __audit_syscall_entry+0xa8/0x100
     ? do_audit_syscall_entry+0x66/0x70
     ? syscall_trace_enter_phase1+0x139/0x160
     SyS_semtimedop+0xe/0x10
     SyS_semop+0x10/0x20
     entry_SYSCALL_64_fastpath+0x12/0x71
   Code: 47 10 83 e8 01 85 c0 89 47 10 75 08 65 48 89 3d 1f 74 ff 7e c9 c3 0f 1f 44 00 00 55 48 89 e5 e8 87 17 04 00 66 90 c9 c3 0f 1f 00 <55> 48 89 e5 0f 31 89 c1 48 89 d0 48 c1 e0 20 89 c9 48 09 c8 c9
   Kernel panic - not syncing: softlockup: hung tasks

I wasn't able to trigger any badness on a recent kernel without the
proper config debugs enabled; however, I have softlockup reports on some
kernel versions, in the semaphore code, which are similar to the above (the
scenario is seen on some servers running IBM DB2 which uses semaphore
syscalls).

The patch here fixes the race against freeary, by acquiring or waiting
on the sem_undo_list lock as necessary (exit_sem can race with freeary,
while freeary sets un->semid to -1 and removes the same sem_undo from
list_proc or when it removes the last sem_undo).

After the patch I'm unable to reproduce the problem using the test case
[1].

[1] Test case used below:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>
    #include <sys/wait.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>
    #include <errno.h>

    #define NSEM 1
    #define NSET 5

    int sid[NSET];

    void thread()
    {
            struct sembuf op;
            int s;
            uid_t pid = getuid();

            s = rand() % NSET;
            op.sem_num = pid % NSEM;
            op.sem_op = 1;
            op.sem_flg = SEM_UNDO;

            semop(sid[s], &op, 1);
            exit(EXIT_SUCCESS);
    }

    void create_set()
    {
            int i, j;
            pid_t p;
            union {
                    int val;
                    struct semid_ds *buf;
                    unsigned short int *array;
                    struct seminfo *__buf;
            } un;

            /* Create and initialize semaphore set */
            for (i = 0; i < NSET; i++) {
                    sid[i] = semget(IPC_PRIVATE , NSEM, 0644 | IPC_CREAT);
                    if (sid[i] < 0) {
                            perror("semget");
                            exit(EXIT_FAILURE);
                    }
            }
            un.val = 0;
            for (i = 0; i < NSET; i++) {
                    for (j = 0; j < NSEM; j++) {
                            if (semctl(sid[i], j, SETVAL, un) < 0)
                                    perror("semctl");
                    }
            }

            /* Launch threads that operate on semaphore set */
            for (i = 0; i < NSEM * NSET * NSET; i++) {
                    p = fork();
                    if (p < 0)
                            perror("fork");
                    if (p == 0)
                            thread();
            }

            /* Free semaphore set */
            for (i = 0; i < NSET; i++) {
                    if (semctl(sid[i], NSEM, IPC_RMID))
                            perror("IPC_RMID");
            }

            /* Wait for forked processes to exit */
            while (wait(NULL)) {
                    if (errno == ECHILD)
                            break;
            };
    }

    int main(int argc, char **argv)
    {
            pid_t p;

            srand(time(NULL));

            while (1) {
                    p = fork();
                    if (p < 0) {
                            perror("fork");
                            exit(EXIT_FAILURE);
                    }
                    if (p == 0) {
                            create_set();
                            goto end;
                    }

                    /* Wait for forked processes to exit */
                    while (wait(NULL)) {
                            if (errno == ECHILD)
                                    break;
                    };
            }
    end:
            return 0;
    }

[akpm@linux-foundation.org: use normal comment layout]
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
Acked-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Rafael Aquini <aquini@redhat.com>
CC: Aristeu Rozanski <aris@redhat.com>
Cc: David Jeffery <djeffery@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Wanpeng Li
036138080a mm/hwpoison: fix fail isolate hugetlbfs page w/ refcount held
Hugetlbfs pages will get a refcount in get_any_page() or
madvise_hwpoison() if soft offlining through madvise.  The refcount which
is held by the soft offline path should be released if we fail to isolate
hugetlbfs pages.

Fix it by reducing the refcount for both isolation success and failure.
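
A hedged sketch of that ordering (everything around the isolate_huge_page()
call is assumed context, not the exact diff):

    ret = isolate_huge_page(hpage, &pagelist);
    /*
     * Drop the reference taken by get_any_page()/madvise_hwpoison(),
     * whether or not isolation succeeded.
     */
    put_page(hpage);
    if (!ret) {
            pr_info("soft offline: %#lx hugepage failed to isolate\n", pfn);
            return -EBUSY;
    }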

Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>	[3.9+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Wanpeng Li
4f32be677b mm/hwpoison: fix page refcount of unknown non LRU page
After trying to drain pages from the pagevec/pageset, we try to get the
reference count of the page again; however, the reference count of the page
is not reduced if the page is still not on the LRU list.

Fix it by adding the put_page() to drop the page reference which is from
__get_any_page().

Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: <stable@vger.kernel.org>	[3.9+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-14 15:56:32 -07:00
Linus Torvalds
3670901f73 Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fix from Ingo Molnar:
 "A single clocksource driver suspend/resume fix"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clockevents/drivers/sh_cmt: Only perform clocksource suspend/resume if enabled
2015-08-14 11:06:43 -07:00
Linus Torvalds
b25c6cee55 Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf fixes from Ingo Molnar:
 "Misc fixes: PMU driver corner cases, tooling fixes, and an 'AUX'
  (Intel PT) race related core fix"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/intel/cqm: Do not access cpu_data() from CPU_UP_PREPARE handler
  perf/x86/intel: Fix memory leak on hot-plug allocation fail
  perf: Fix PERF_EVENT_IOC_PERIOD migration race
  perf: Fix double-free of the AUX buffer
  perf: Fix fasync handling on inherited events
  perf tools: Fix test build error when bindir contains double slash
  perf stat: Fix transaction length metrics
  perf: Fix running time accounting
2015-08-14 10:57:16 -07:00
Linus Torvalds
5e5013c6b5 Merge branch 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fix from Ingo Molnar:
 "A single fix for a locking self-test crash"

* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/pvqspinlock: Fix kernel panic in locking-selftest
2015-08-14 10:45:23 -07:00
Linus Torvalds
c679765416 Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Pull drm fixes from Dave Airlie:
 "Back from holidays, found these in the cracks: one nouveau revert, one
  vmwgfx locking fix and a bunch of exynos fixes"

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
  Revert "drm/nouveau/fifo/gk104: kick channels when deactivating them"
  drm/vmwgfx: Fix execbuf locking issues
  drm/exynos/fimc: fix runtime pm support
  drm/exynos/mixer: always update INT_EN cache
  drm/exynos/mixer: correct vsync configuration sequence
  drm/exynos/mixer: fix interrupt clearing
  drm/exynos/hdmi: fix edid memory leak
  drm/exynos: gsc: fix wrong bitwise operation for swap detection
2015-08-14 10:39:32 -07:00
Alexandre Courbot
d211d87e14 Revert "drm/nouveau/fifo/gk104: kick channels when deactivating them"
This reverts commit 1addc12648.

This commit seems to cause crashes in gk104_fifo_intr_runlist() by
returning 0xbad0da00 when register 0x2a00 is read. Since this commit was
intended for GM20B which is not completely supported yet, let's revert
it for the time being.

Reported-by: Eric Biggers <ebiggers3@gmail.com>
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Tested-by: Afzal Mohammed <afzal.mohd.ma@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2015-08-14 09:50:37 +10:00
Thomas Hellstrom
3e04e2fe6d drm/vmwgfx: Fix execbuf locking issues
This addresses two issues that cause problems with viewperf maya-03 in
situations with memory pressure.

The first issue causes attempts to unreserve buffers if batched
reservation fails due to, for example, a signal pending. While previously
the ttm_eu api was resistant against this type of error, it is no longer
and the lockdep code will complain about attempting to unreserve buffers
that are not reserved. The issue is resolved by avoiding the call to
ttm_eu_backoff_reservation in the buffer reserve error path.

The second issue is that the binding_mutex may be held when user-space
fence objects are created and hence during memory reclaims. This may cause
recursive attempts to grab the binding mutex. The issue is resolved by not
holding the binding mutex across fence creation and submission.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2015-08-14 09:49:19 +10:00
Dave Airlie
3c6d45b417 Merge branch 'exynos-drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos into drm-fixes
This pull request fixes a memory leak and some issues in the mixer and
gscaler drivers.

* 'exynos-drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos:
  drm/exynos/fimc: fix runtime pm support
  drm/exynos/mixer: always update INT_EN cache
  drm/exynos/mixer: correct vsync configuration sequence
  drm/exynos/mixer: fix interrupt clearing
  drm/exynos/hdmi: fix edid memory leak
  drm/exynos: gsc: fix wrong bitwise operation for swap detection
2015-08-14 09:47:07 +10:00
Linus Torvalds
7ddab73346 Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "Another few small ARM fixes, mostly addressing some VDSO issues"

* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
  ARM: 8410/1: VDSO: fix coarse clock monotonicity regression
  ARM: 8409/1: Mark ret_fast_syscall as a function
  ARM: 8408/1: Fix the secondary_startup function in Big Endian case
  ARM: 8405/1: VDSO: fix regression with toolchains lacking ld.bfd executable
2015-08-13 16:34:56 -07:00
Linus Torvalds
cd88ec2317 x86: fix error handling for 32-bit compat out-of-range system call numbers
Commit 3f5159a922 ("x86/asm/entry/32: Update -ENOSYS handling to match
the 64-bit logic") broke the ENOSYS handling for the 32-bit compat case.
The proper error return value was never loaded into %rax, except if
things just happened to go through the audit paths, which ended up
reloading the return value.

This moves the loading of %rax into the normal system call path, just to
make sure the error case triggers it.  It's kind of sad, since it adds a
useless instruction to reload the register to the fast path, but it's
not like that single load from the stack is going to be noticeable.

Reported-by: David Drysdale <drysdale@google.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-13 16:19:44 -07:00
Linus Torvalds
5b3e2e14ea Merge tag 'dm-4.2-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:

 - two stable fixes for corruption seen in a snapshot of thinp metadata;
   metadata snapshots aren't widely used but help provide a consistent
   view of the metadata associated with an active thin-pool.

 - a dm-cache fix for the 4.2 "default" policy switch from "mq" to "smq"

* tag 'dm-4.2-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm cache policy smq: move 'dm-cache-default' module alias to SMQ
  dm btree: add ref counting ops for the leaves of top level btrees
  dm thin metadata: delete btrees when releasing metadata snapshot
2015-08-13 13:52:46 -07:00
Linus Torvalds
ebcbf1664c Merge branch 'for-linus' of git://git.kernel.dk/linux-block
Pull xen block driver fixes from Jens Axboe:
 "A few small bug fixes for xen-blk{front,back} that have been sitting
  over my vacation"

* 'for-linus' of git://git.kernel.dk/linux-block:
  xen-blkback: replace work_pending with work_busy in purge_persistent_gnt()
  xen-blkfront: don't add indirect pages to list when !feature_persistent
  xen-blkfront: introduce blkfront_gather_backend_features()
2015-08-13 13:44:32 -07:00
Linus Torvalds
6b476e1140 Merge tag 'for-linus-4.2-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen bug fixes from David Vrabel:

 - revert a fix from 4.2-rc5 that was causing lots of WARNING spam.

 - fix a memory leak affecting backends in HVM guests.

 - fix PV domU hang with certain configurations.

* tag 'for-linus-4.2-rc6-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/xenbus: Don't leak memory when unmapping the ring on HVM backend
  Revert "xen/events/fifo: Handle linked events when closing a port"
  x86/xen: build "Xen PV" APIC driver for domU as well
2015-08-13 13:36:22 -07:00
Linus Torvalds
ed596cde94 Revert x86 sigcontext cleanups
This reverts commits 9a036b93a3 ("x86/signal/64: Remove 'fs' and 'gs'
from sigcontext") and c6f2062935 ("x86/signal/64: Fix SS handling for
signals delivered to 64-bit programs").

They were cleanups, but they break dosemu by changing the signal return
behavior (and removing 'fs' and 'gs' from the sigcontext struct - while
not actually changing any behavior - causes build problems).

Reported-and-tested-by: Stas Sergeev <stsp@list.ru>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-08-13 12:42:22 -07:00
Linus Torvalds
26b552e0a8 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Workaround hw bug when acquiring PCI bus ownership of iwlwifi
    devices, from Emmanuel Grumbach.

 2) Falling back to vmalloc in conntrack should not emit a warning, from
    Pablo Neira Ayuso.

 3) Fix NULL deref when rtlwifi driver is used as an AP, from Luis
    Felipe Dominguez Vega.

 4) Rocker doesn't free netdev on device removal, from Ido Schimmel.

 5) UDP multicast early sock demux has route handling races, from Eric
    Dumazet.

 6) Fix L4 checksum handling in openvswitch, from Glenn Griffin.

 7) Fix use-after-free in skb_set_peeked, from Herbert Xu.

 8) Don't advertise NETIF_F_FRAGLIST in virtio_net driver, this can lead
    to fraglists longer than the driver can support.  From Jason Wang.

 9) Fix mlx5 on non-4k-pagesize systems, from Carol L Soto.

10) Fix interrupt storm in bna driver, from Ivan Vecera.

11) Don't propagate -EBUSY from netlink_insert(), from Daniel Borkmann.

12) Fix inet request sock leak, from Eric Dumazet.

13) Fix TX interrupt masking and marking in TX descriptors of fs_enet
    driver, from LEROY Christophe.

14) Get rid of rule optimizer in gianfar driver, it's buggy and unlikely
    to get fixed any time soon.  From Jakub Kicinski.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (61 commits)
  cosa: missing error code on failure in probe()
  gianfar: remove faulty filer optimizer
  gianfar: correct list membership accounting
  gianfar: correct filer table writing
  bonding: Gratuitous ARP gets dropped when first slave added
  net: dsa: Do not override PHY interface if already configured
  net: fs_enet: mask interrupts for TX partial frames.
  net: fs_enet: explicitly remove I flag on TX partial frames
  inet: fix possible request socket leak
  inet: fix races with reqsk timers
  mkiss: Fix error handling in mkiss_open()
  bnx2x: Free NVRAM lock at end of each page
  bnx2x: Prevent null pointer dereference on SKB release
  cxgb4: missing curly braces in t4_setup_debugfs()
  net-timestamp: Update skb_complete_tx_timestamp comment
  ipv6: don't reject link-local nexthop on other interface
  netlink: make sure -EBUSY won't escape from netlink_insert
  bna: fix interrupts storm caused by erroneous packets
  net: mvpp2: replace TX coalescing interrupts with hrtimer
  net: mvpp2: enable proper per-CPU TX buffers unmapping
  ...
2015-08-13 10:46:39 -07:00
Linus Torvalds
2331d30dc8 Merge tag 'edac_fix_for_4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp
Pull EDAC fix from Borislav Petkov:
 "A ppc4xx_edac fix for accessing ->csrows properly.  This driver was
  missed during the conversion a couple of years ago"

* tag 'edac_fix_for_4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp:
  EDAC, ppc4xx: Access mci->csrows array elements properly
2015-08-13 10:22:11 -07:00
Murali Karicheri
85ad3deea4 ARM: dts: keystone: Fix the mdio bindings by moving it to soc specific file
Currently mdio bindings are defined in keystone.dtsi and this results
in incorrect unit address for the node on K2E and K2L SoCs. Fix this
by moving them to SoC specific DTS file.

Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
2015-08-13 10:01:29 -07:00
Murali Karicheri
e61eee7cf8 ARM: dts: keystone: fix the clock node for mdio
Currently the MDIO clock is pointing to clkpa instead of clkcpgmac.
MDIO is part of the ethss and the clock should be clkcpgmac.

Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
2015-08-13 10:01:29 -07:00
Olof Johansson
659181aedd Merge tag 'omap-for-v4.2/fixes-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
Fix a NULL pointer exception for omap GPMC bus code if probe fails.

* tag 'omap-for-v4.2/fixes-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  memory: omap-gpmc: Don't try to save uninitialized GPMC context

Signed-off-by: Olof Johansson <olof@lixom.net>
2015-08-13 14:27:12 +02:00
Olof Johansson
db55350599 Merge tag 'imx-fixes-4.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into fixes
The i.MX fixes for 4.2, 3rd round:
 - Fix i.MX6 PCIe interrupt routing, which got missed in the stacked IRQ
   domain conversion.  PCIe wakeup support is currently broken
   because of this.

* tag 'imx-fixes-4.2-3' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
  ARM: imx6: correct i.MX6 PCIe interrupt routing

Signed-off-by: Olof Johansson <olof@lixom.net>
2015-08-13 12:24:55 +02:00
Olof Johansson
07616f013b Merge tag 'samsung-mach-fixes-4.2' of https://github.com/krzk/linux into fixes
Two fixes for bugs in Exynos power domain error exit path:
1. kfree() of read-only memory (name of power domain returned
   by kstrdup_const()),
2. Doubled of_node_put() leading to invalid ref count for OF node.

* tag 'samsung-mach-fixes-4.2' of https://github.com/krzk/linux:
  ARM: EXYNOS: fix double of_node_put() on error path
  ARM: EXYNOS: Fix potential kfree() of ro memory

Signed-off-by: Olof Johansson <olof@lixom.net>
2015-08-13 12:18:45 +02:00
Michael Walle
5c16179b55 EDAC, ppc4xx: Access mci->csrows array elements properly
The commit

  de3910eb79 ("edac: change the mem allocation scheme to
		 make Documentation/kobject.txt happy")

changed the memory allocation for the csrows member. But ppc4xx_edac was
forgotten in the patch. Fix it.

Signed-off-by: Michael Walle <michael@walle.cc>
Cc: <stable@vger.kernel.org>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Link: http://lkml.kernel.org/r/1437469253-8611-1-git-send-email-michael@walle.cc
Signed-off-by: Borislav Petkov <bp@suse.de>
2015-08-13 06:02:19 +02:00
Dan Carpenter
e6d006938c cosa: missing error code on failure in probe()
If register_hdlc_device() fails, the current code returns 0 but we
should return an error code instead.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 16:53:11 -07:00
David S. Miller
e941ba8650 Merge branch 'gianfar-fixes'
Jakub Kicinski says:

====================
gianfar: filer changes

respinning with examples as requested.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 14:47:06 -07:00
Jakub Kicinski
1f2b729334 gianfar: remove faulty filer optimizer
Current filer rule optimization is broken in several ways:
 (1) Can perform reads/writes beyond end of allocated tables.
     (gianfar_ethtool.c:1326).

 (2) It breaks badly for rules with more than 2 specifiers
     (e.g. matching ip, port, tos).

Example:
# ethtool -N eth2 flow-type udp4 dst-ip 10.0.0.1 dst-port 1 tos 1 action 1
Added rule with ID 254
# ethtool -N eth2 flow-type udp4 dst-ip 10.0.0.2 dst-port 2 tos 2 action 9
Added rule with ID 253
# ethtool -N eth2 flow-type udp4 dst-ip 10.0.0.3 dst-port 3 tos 3 action 17
Added rule with ID 252
# ./filer_decode /sys/kernel/debug/gfar1/filer_raw
00: MASK == 00000210 AND         Q:00           ctrl:00000080 prop:00000210
01: FPR  == 00000210 AND CLE     Q:00           ctrl:00000281 prop:00000210
02: MASK == ffffffff AND         Q:00           ctrl:00000080 prop:ffffffff
03: DPT  == 00000003 AND         Q:00           ctrl:0000008e prop:00000003
04: TOS  == 00000003 AND         Q:00           ctrl:0000008a prop:00000003
05: DIA  == 0a000003 AND         Q:11           ctrl:0000448c prop:0a000003
06: DPT  == 00000002 AND         Q:00           ctrl:0000008e prop:00000002
07: TOS  == 00000002 AND         Q:00           ctrl:0000008a prop:00000002
08: DIA  == 0a000002 AND         Q:09           ctrl:0000248c prop:0a000002
09: DIA  == 0a000001 AND         Q:00           ctrl:0000008c prop:0a000001
0a: DPT  == 00000001 AND         Q:00           ctrl:0000008e prop:00000001
0b: TOS  == 00000001     CLE     Q:01           ctrl:0000060a prop:00000001
ff: MASK >= 00000000             Q:00           ctrl:00000020 prop:00000000

(Entire cluster gets AND-ed together).

 (3) We observed that the masking rules it generates do not
     play well with clustering on P2020.  Only first rule
     of the cluster would ever fire.  Given that optimizer
     relies heavily on masking this is very hard to fix.

Example:
# ethtool -N eth2 flow-type udp4 dst-ip 10.0.0.1 dst-port 1  action 1
Added rule with ID 254
# ethtool -N eth2 flow-type udp4 dst-ip 10.0.0.2 dst-port 2  action 9
Added rule with ID 253
# ethtool -N eth2 flow-type udp4 dst-ip 10.0.0.3 dst-port 3  action 17
Added rule with ID 252
# ./filer_decode /sys/kernel/debug/gfar1/filer_raw
00: MASK == 00000210 AND         Q:00           ctrl:00000080 prop:00000210
01: FPR  == 00000210 AND CLE     Q:00           ctrl:00000281 prop:00000210
02: MASK == ffffffff AND         Q:00           ctrl:00000080 prop:ffffffff
03: DPT  == 00000003 AND         Q:00           ctrl:0000008e prop:00000003
04: DIA  == 0a000003             Q:11           ctrl:0000440c prop:0a000003
05: DPT  == 00000002 AND         Q:00           ctrl:0000008e prop:00000002
06: DIA  == 0a000002             Q:09           ctrl:0000240c prop:0a000002
07: DIA  == 0a000001 AND         Q:00           ctrl:0000008c prop:0a000001
08: DPT  == 00000001     CLE     Q:01           ctrl:0000060e prop:00000001
ff: MASK >= 00000000             Q:00           ctrl:00000020 prop:00000000

Which looks correct according to the spec but only the first
(eth id 252)/last added rule for 10.0.0.3 will ever trigger.
As if filer did not treat the AND CLE as cluster start but
also kept AND-ing the rules.  We found no errata covering this.

The fact that nobody noticed (2) or (3) makes me think
that this feature is not very widely used and we should just
remove it.

Reported-by: Aleksander Dutkowski <adutkowski@gmail.com>
Signed-off-by: Jakub Kicinski <kubakici@wp.pl>
Acked-by: Claudiu Manoil <claudiu.manoil@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 14:47:06 -07:00
Jakub Kicinski
b5c8c8906e gianfar: correct list membership accounting
At a cost of one line let's make sure .count is correct
when calling gfar_process_filer_changes().

Signed-off-by: Jakub Kicinski <kubakici@wp.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 14:47:05 -07:00
Jakub Kicinski
a898fe040f gianfar: correct filer table writing
MAX_FILER_IDX is the last usable index.  Using less-than
will already guarantee that one entry for the catch-all rule
will be left; there is no need to subtract 1 here.

Signed-off-by: Jakub Kicinski <kubakici@wp.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 14:47:05 -07:00
Venkat Venkatsubra
b02e3e948d bonding: Gratuitous ARP gets dropped when first slave added
When the first slave is added (such as during bootup) the first
gratuitous ARP gets dropped. We don't see this drop during a failover.
The packet gets dropped in qdisc (noop_enqueue).

The fix is to delay the sending of gratuitous ARPs till the bond dev's
carrier is present.

It can also be worked around by setting num_grat_arp to more than 1.

Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 14:37:33 -07:00
Florian Fainelli
211c504a44 net: dsa: Do not override PHY interface if already configured
In case we need to divert reads/writes using the slave MII bus, we may have
already fetched a valid PHY interface property from Device Tree, and that
mode is used by the PHY driver to make configuration decisions.

If we could not fetch the "phy-mode" property, we will assign p->phy_interface
to PHY_INTERFACE_MODE_NA, such that we can actually check for that condition as
to whether or not we should override the interface value.
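
A hedged sketch of the resulting check (field names follow the message above;
this is not necessarily the exact diff):

    /* Only override the interface mode if Device Tree gave us nothing. */
    if (p->phy_interface == PHY_INTERFACE_MODE_NA)
            p->phy_interface = p->phy->interface;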

Fixes: 19334920ea ("net: dsa: Set valid phy interface type")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-12 14:29:50 -07:00
Martin K. Petersen
4f258a4634 sd: Fix maximum I/O size for BLOCK_PC requests
Commit bcdb247c6b ("sd: Limit transfer length") clamped the maximum
size of an I/O request to the MAXIMUM TRANSFER LENGTH field in the BLOCK
LIMITS VPD. This had the unfortunate effect of also limiting the maximum
size of non-filesystem requests sent to the device through sg/bsg.

Avoid using blk_queue_max_hw_sectors() and set the max_sectors queue
limit directly.

Also update the comment in blk_limits_max_hw_sectors() to clarify that
max_hw_sectors defines the limit for the I/O controller only.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reported-by: Brian King <brking@linux.vnet.ibm.com>
Tested-by: Brian King <brking@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # 3.17+
Signed-off-by: James Bottomley <JBottomley@Odin.com>
2015-08-12 11:54:37 -07:00
Linus Torvalds
30065bfda9 Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fix from Catalin Marinas:
 "Fix coarse clock monotonicity (VDSO timestamp off by one jiffy
  compared to the syscall one)"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: VDSO: fix coarse clock monotonicity regression
2015-08-12 11:25:01 -07:00
Bart Van Assche
8f2777f53e libfc: Fix fc_fcp_cleanup_each_cmd()
Since fc_fcp_cleanup_cmd() can sleep, this function must not
be called while holding a spinlock. This patch avoids that
fc_fcp_cleanup_each_cmd() triggers the following bug:

BUG: scheduling while atomic: sg_reset/1512/0x00000202
1 lock held by sg_reset/1512:
 #0:  (&(&fsp->scsi_pkt_lock)->rlock){+.-...}, at: [<ffffffffc0225cd5>] fc_fcp_cleanup_each_cmd.isra.21+0xa5/0x150 [libfc]
Preemption disabled at:[<ffffffffc0225cd5>] fc_fcp_cleanup_each_cmd.isra.21+0xa5/0x150 [libfc]
Call Trace:
 [<ffffffff816c612c>] dump_stack+0x4f/0x7b
 [<ffffffff810828bc>] __schedule_bug+0x6c/0xd0
 [<ffffffff816c87aa>] __schedule+0x71a/0xa10
 [<ffffffff816c8ad2>] schedule+0x32/0x80
 [<ffffffffc0217eac>] fc_seq_set_resp+0xac/0x100 [libfc]
 [<ffffffffc0218b11>] fc_exch_done+0x41/0x60 [libfc]
 [<ffffffffc0225cff>] fc_fcp_cleanup_each_cmd.isra.21+0xcf/0x150 [libfc]
 [<ffffffffc0225f43>] fc_eh_device_reset+0x1c3/0x270 [libfc]
 [<ffffffff814a2cc9>] scsi_try_bus_device_reset+0x29/0x60
 [<ffffffff814a3908>] scsi_ioctl_reset+0x258/0x2d0
 [<ffffffff814a2650>] scsi_ioctl+0x150/0x440
 [<ffffffff814b3a9d>] sd_ioctl+0xad/0x120
 [<ffffffff8132f266>] blkdev_ioctl+0x1b6/0x810
 [<ffffffff811da608>] block_ioctl+0x38/0x40
 [<ffffffff811b4e08>] do_vfs_ioctl+0x2f8/0x530
 [<ffffffff811b50c1>] SyS_ioctl+0x81/0xa0
 [<ffffffff816cf8b2>] system_call_fastpath+0x16/0x7a

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
2015-08-12 11:24:21 -07:00
Bart Van Assche
f6979adeaa libfc: Fix fc_exch_recv_req() error path
Due to patch "libfc: Do not invoke the response handler after
fc_exch_done()" (commit ID 7030fd62) the lport_recv() call
in fc_exch_recv_req() is passed a dangling pointer. Avoid this
by moving the fc_frame_free() call from fc_invoke_resp() to its
callers. This patch fixes the following crash:

general protection fault: 0000 [#3] PREEMPT SMP
RIP: fc_lport_recv_req+0x72/0x280 [libfc]
Call Trace:
 fc_exch_recv+0x642/0xde0 [libfc]
 fcoe_percpu_receive_thread+0x46a/0x5ed [fcoe]
 kthread+0x10a/0x120
 ret_from_fork+0x42/0x70

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: James Bottomley <JBottomley@Odin.com>
2015-08-12 11:23:30 -07:00
Linus Torvalds
04da002d98 Merge branch 'drm-fixes-4.2' of git://people.freedesktop.org/~agd5f/linux
Pull amd drm fixes from Alex Deucher:
 "Dave is on vacation at the moment, so please pull these radeon and
  amdgpu fixes directly.

  Just a few minor things for 4.2:

   - add a new radeon pci id
   - fix a power management regression in amdgpu
   - fix HEVC command buffer validation in amdgpu"

* 'drm-fixes-4.2' of git://people.freedesktop.org/~agd5f/linux:
  drm/radeon: add new OLAND pci id
  Revert "drm/amdgpu: Configure doorbell to maximum slots"
  drm/amdgpu: add context buffer size check for HEVC
2015-08-12 11:13:54 -07:00
John Soni Jose
660d0831d1 libiscsi: Fix host busy blocking during connection teardown
In case of hw iscsi offload, a host can have N active
connections. There can be I/Os running on some connections, which
keeps host->host_busy always TRUE. Now if a logout from a connection
is attempted, the code gets into an infinite loop because host->host_busy
is always TRUE.

 iscsi_conn_teardown(....)
 {
   .........
    /*
     * Block until all in-progress commands for this connection
     * time out or fail.
     */
     for (;;) {
      spin_lock_irqsave(session->host->host_lock, flags);
      if (!atomic_read(&session->host->host_busy)) { /* OK for ERL == 0 */
	      spin_unlock_irqrestore(session->host->host_lock, flags);
              break;
      }
     spin_unlock_irqrestore(session->host->host_lock, flags);
     msleep_interruptible(500);
     iscsi_conn_printk(KERN_INFO, conn, "iscsi conn_destroy(): "
                 "host_busy %d host_failed %d\n",
	          atomic_read(&session->host->host_busy),
	          session->host->host_failed);

	................
	...............
     }
  }

This is not an issue with software-iscsi/iser as each cxn is a separate
host.

Fix:
Acquire eh_mutex in iscsi_conn_teardown() before setting
session->state = ISCSI_STATE_TERMINATE.

Signed-off-by: John Soni Jose <sony.john@avagotech.com>
Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
Reviewed-by: Chris Leech <cleech@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: James Bottomley <JBottomley@Odin.com>
2015-08-12 10:21:20 -07:00
Alex Deucher
e037239e5e drm/radeon: add new OLAND pci id
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
2015-08-12 12:24:05 -04:00
Alex Deucher
b8826b0cbf Revert "drm/amdgpu: Configure doorbell to maximum slots"
This reverts commit 78ad5cdd21.
This commit breaks dpm and suspend/resume on CZ.
2015-08-12 12:24:04 -04:00
Boyuan Zhang
8c8bac59dd drm/amdgpu: add context buffer size check for HEVC
Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
2015-08-12 12:24:03 -04:00
Linus Torvalds
cec45d901e Merge tag 'regmap-fix-v4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap fix from Mark Brown:
 "regmap: Fix handling of present bits on rbtree cache block resize

  When expanding a cache block we use krealloc() to resize the register
  present bitmap without initialising the newly allocated data (the
original code was written for kzalloc()).  Add an appropriate memset()
  to fix that"

* tag 'regmap-fix-v4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: regcache-rbtree: Clean new present bits on present bitmap resize
2015-08-12 09:06:39 -07:00
Yi Zhang
34dd051741 dm cache policy smq: move 'dm-cache-default' module alias to SMQ
When creating dm-cache with the default policy, it will call
request_module("dm-cache-default") to register the default policy.
But the "dm-cache-default" alias was left referring to the MQ policy.
Fix this by moving the module alias to SMQ.
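
A hedged sketch of the move (the policy source file names are assumptions):

    /* dm-cache-policy-mq.c: the default alias no longer lives here */

    /* dm-cache-policy-smq.c: register the default alias instead */
    MODULE_ALIAS("dm-cache-default");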

Fixes: bccab6a0 (dm cache: switch the "default" cache replacement policy from mq to smq)
Signed-off-by: Yi Zhang <yizhan@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2015-08-12 11:27:29 -04:00
Joe Thornber
b0dc3c8bc1 dm btree: add ref counting ops for the leaves of top level btrees
When using nested btrees, the top leaves of the top levels contain
block addresses for the root of the next tree down.  If we shadow a
shared leaf node the leaf values (sub tree roots) should be incremented
accordingly.

This is only an issue if there is metadata sharing in the top levels.
Which only occurs if metadata snapshots are being used (as is possible
with dm-thinp).  And could result in a block from the thinp metadata
snap being reused early, thus corrupting the thinp metadata snap.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
2015-08-12 10:50:37 -04:00
Joe Thornber
7f518ad0a2 dm thin metadata: delete btrees when releasing metadata snapshot
The device details and mapping trees were just being decremented
before.  Now btree_del() is called to do a deep delete.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
2015-08-12 10:42:51 -04:00
Matt Fleming
d7a702f0b1 perf/x86/intel/cqm: Do not access cpu_data() from CPU_UP_PREPARE handler
Tony reports that booting his 144-cpu machine with maxcpus=10 triggers
the following WARN_ON():

[   21.045727] WARNING: CPU: 8 PID: 647 at arch/x86/kernel/cpu/perf_event_intel_cqm.c:1267 intel_cqm_cpu_prepare+0x75/0x90()
[   21.045744] CPU: 8 PID: 647 Comm: systemd-udevd Not tainted 4.2.0-rc4 #1
[   21.045745] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0066.R00.1506021730 06/02/2015
[   21.045747]  0000000000000000 0000000082771b09 ffff880856333ba8 ffffffff81669b67
[   21.045748]  0000000000000000 0000000000000000 ffff880856333be8 ffffffff8107b02a
[   21.045750]  ffff88085b789800 ffff88085f68a020 ffffffff819e2470 000000000000000a
[   21.045750] Call Trace:
[   21.045757]  [<ffffffff81669b67>] dump_stack+0x45/0x57
[   21.045759]  [<ffffffff8107b02a>] warn_slowpath_common+0x8a/0xc0
[   21.045761]  [<ffffffff8107b15a>] warn_slowpath_null+0x1a/0x20
[   21.045762]  [<ffffffff81036725>] intel_cqm_cpu_prepare+0x75/0x90
[   21.045764]  [<ffffffff81036872>] intel_cqm_cpu_notifier+0x42/0x160
[   21.045767]  [<ffffffff8109a33d>] notifier_call_chain+0x4d/0x80
[   21.045769]  [<ffffffff8109a44e>] __raw_notifier_call_chain+0xe/0x10
[   21.045770]  [<ffffffff8107b538>] _cpu_up+0xe8/0x190
[   21.045771]  [<ffffffff8107b65a>] cpu_up+0x7a/0xa0
[   21.045774]  [<ffffffff8165e920>] cpu_subsys_online+0x40/0x90
[   21.045777]  [<ffffffff81433b37>] device_online+0x67/0x90
[   21.045778]  [<ffffffff81433bea>] online_store+0x8a/0xa0
[   21.045782]  [<ffffffff81430e78>] dev_attr_store+0x18/0x30
[   21.045785]  [<ffffffff8126b6ba>] sysfs_kf_write+0x3a/0x50
[   21.045786]  [<ffffffff8126ad40>] kernfs_fop_write+0x120/0x170
[   21.045789]  [<ffffffff811f0b77>] __vfs_write+0x37/0x100
[   21.045791]  [<ffffffff811f38b8>] ? __sb_start_write+0x58/0x110
[   21.045795]  [<ffffffff81296d2d>] ? security_file_permission+0x3d/0xc0
[   21.045796]  [<ffffffff811f1279>] vfs_write+0xa9/0x190
[   21.045797]  [<ffffffff811f2075>] SyS_write+0x55/0xc0
[   21.045800]  [<ffffffff81067300>] ? do_page_fault+0x30/0x80
[   21.045804]  [<ffffffff816709ae>] entry_SYSCALL_64_fastpath+0x12/0x71
[   21.045805] ---[ end trace fe228b836d8af405 ]---

The root cause is that CPU_UP_PREPARE is completely the wrong notifier
action from which to access cpu_data(), because smp_store_cpu_info()
won't have been executed by the target CPU at that point, which in turn
means that ->x86_cache_max_rmid and ->x86_cache_occ_scale haven't been
filled out.

Instead let's invoke our handler from CPU_STARTING and rename it
appropriately.

Reported-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Kanaka Juvva <kanaka.d.juvva@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vikas Shivappa <vikas.shivappa@intel.com>
Link: http://lkml.kernel.org/r/1438863163-14083-1-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-12 11:37:23 +02:00
Peter Zijlstra
dbc72b7a0c perf/x86/intel: Fix memory leak on hot-plug allocation fail
We fail to free the shared_regs allocation if the constraint_list
allocation fails.

Cure this and be more consistent in NULL-ing the pointers after free.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-12 11:37:22 +02:00
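
As an illustration of the cleanup pattern described in the commit above, here is a
minimal standalone C sketch (all names are placeholders, not the driver's): if the
second allocation fails, the first one is freed and both pointers are left NULL, so a
later teardown path cannot free them twice.

#include <stdlib.h>

struct cpu_hw_events_sketch {
        void *shared_regs;
        void *constraint_list;
};

/* Returns 0 on success.  On failure nothing is leaked and both pointers
 * stay NULL, so a subsequent free path is a harmless no-op. */
static int cpu_hw_alloc(struct cpu_hw_events_sketch *hw)
{
        hw->shared_regs = calloc(1, 128);
        if (!hw->shared_regs)
                return -1;

        hw->constraint_list = calloc(1, 256);
        if (!hw->constraint_list) {
                free(hw->shared_regs);   /* undo the first allocation */
                hw->shared_regs = NULL;  /* be consistent: NULL after free */
                return -1;
        }
        return 0;
}
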
Peter Zijlstra
c7999c6f3f perf: Fix PERF_EVENT_IOC_PERIOD migration race
I ran the perf fuzzer, which triggered some WARN()s that are due to
trying to stop/restart an event on the wrong CPU.

Use the normal IPI pattern to ensure we run the code on the correct CPU.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: bad7192b84 ("perf: Fix PERF_EVENT_IOC_PERIOD to force-reset the period")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-12 11:37:22 +02:00
Ben Hutchings
ee9397a6fb perf: Fix double-free of the AUX buffer
If rb->aux_refcount is decremented to zero before rb->refcount,
__rb_free_aux() may be called twice resulting in a double free of
rb->aux_pages.  Fix this by adding a check to __rb_free_aux().

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes: 57ffc5ca67 ("perf: Fix AUX buffer refcounting")
Link: http://lkml.kernel.org/r/1437953468.12842.17.camel@decadent.org.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-12 11:37:21 +02:00
Tomeu Vizoso
e984a1791a memory: omap-gpmc: Don't try to save uninitialized GPMC context
If for some reason the GPMC device hasn't been probed yet, gpmc_base is
going to be NULL. Because there's no context yet to be saved, just turn
these functions into no-ops until that device gets probed.

Unable to handle kernel NULL pointer dereference at virtual address 00000010
pgd = c0204000
[00000010] *pgd=00000000
Internal error: Oops: 5 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.2.0-rc5-next-20150804-05947-g23f38fe8eda9 #1
Hardware name: Generic OMAP3-GP (Flattened Device Tree)
task: c0e623e8 ti: c0e5c000 task.ti: c0e5c000
PC is at omap3_gpmc_save_context+0x8/0xc4
LR is at omap_sram_idle+0x154/0x23c
pc : [<c087c7ac>]    lr : [<c023262c>]    psr: 60000193
sp : c0e5df40  ip : c0f92a80  fp : c0999eb0
r10: c0e57364  r9 : c0e66f14  r8 : 00000003
r7 : 00000000  r6 : 00000003  r5 : 00000000  r4 : c0f5f174
r3 : c0fa4fe8  r2 : 00000000  r1 : 00000000  r0 : fa200280
Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
Control: 10c5387d  Table: 80204019  DAC: 00000015
Process swapper/0 (pid: 0, stack limit = 0xc0e5c220)
Stack: (0xc0e5df40 to 0xc0e5e000)
df40: 00000000 c0e66ef8 c0f5f1a4 00000000 00000003 c02333a4 c3813822 00000000
df60: 00000000 c0e5a5c8 cfb8a5d0 c07f0c44 0e4f1d7e 00000000 00000000 00000000
df80: c3813822 00000000 cfb8a5d0 c0e5e4e4 cfb8a5d0 c0e66f14 c0e5a5c8 c0e5e54c
dfa0: c0e5e544 c0e57364 c0999eb0 c0277758 000000fa c0f5d000 00000000 c0d61c18
dfc0: ffffffff ffffffff 00000000 c0d61674 00000000 c0df7a48 00000000 c0f5d5d4
dfe0: c0e5e4c0 c0df7a44 c0e634f8 80204059 00000000 8020807c 00000000 00000000
[<c087c7ac>] (omap3_gpmc_save_context) from [<c023262c>] (omap_sram_idle+0x154/0x23c)
[<c023262c>] (omap_sram_idle) from [<c02333a4>] (omap3_enter_idle_bm+0xec/0x1a8)
[<c02333a4>] (omap3_enter_idle_bm) from [<c07f0c44>] (cpuidle_enter_state+0xbc/0x284)
[<c07f0c44>] (cpuidle_enter_state) from [<c0277758>] (cpu_startup_entry+0x174/0x24c)
[<c0277758>] (cpu_startup_entry) from [<c0d61c18>] (start_kernel+0x358/0x3c0)
[<c0d61c18>] (start_kernel) from [<8020807c>] (0x8020807c)
Code: c0ccace8 c0ccacc0 e59f30b4 e5932000 (e5921010)

Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Suggested-by: Javier Martinez Canillas <javier@dowhile0.org>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Acked-by: Roger Quadros <rogerq@ti.com>
[tony@atomide.com: updated description as suggested by Javier]
Signed-off-by: Tony Lindgren <tony@atomide.com>
2015-08-12 01:43:49 -07:00
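
A rough sketch of the shape of the fix above (the variable and helper bodies are
stand-ins for the driver's): the save/restore paths simply become no-ops while the
register base has not been mapped yet.

static void *gpmc_base;         /* ioremapped register base, set only at probe time */

void omap3_gpmc_save_context(void)
{
        if (!gpmc_base)         /* not probed yet: no context to save */
                return;
        /* ... read and stash the GPMC registers ... */
}

void omap3_gpmc_restore_context(void)
{
        if (!gpmc_base)         /* not probed yet: nothing to restore */
                return;
        /* ... write the saved values back ... */
}
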
Linus Torvalds
58ccab9134 Merge tag 'localmodconfig-v4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-kconfig
Pull localmodconfig fix from Steven Rostedt:
 "Leonidas Spyropoulos found that modules like nouveau were being
  unselected by make localmodconfig even though their configs were set
  and the module was loaded and visible by lsmod.

  The reason for this was because streamline-config.pl only looks at
  Makefiles, and not Kbuild files.  As these modules use Kbuild for
  their names, they too need to be checked by localmodconfig.  This was
  fixed by Richard Weinberger"

* tag 'localmodconfig-v4.2-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-kconfig:
  localmodconfig: Use Kbuild files too
2015-08-11 15:13:41 -07:00
Richard Weinberger
c0ddc8c745 localmodconfig: Use Kbuild files too
In kbuild it is allowed to define objects in files named "Makefile"
and "Kbuild".
Currently localmodconfig reads objects only from "Makefile"s and misses
modules like nouveau.

Link: http://lkml.kernel.org/r/1437948415-16290-1-git-send-email-richard@nod.at

Cc: stable@vger.kernel.org
Reported-and-tested-by: Leonidas Spyropoulos <artafinde@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2015-08-11 17:34:35 -04:00
David S. Miller
28eaad7504 Merge branch 'for-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth
Johan Hedberg says:

====================
pull request: bluetooth 2015-08-11

Here's an important regression fix for the 4.2-rc series that ensures
user space isn't given invalid LTK values. The bug essentially prevents
the encryption of subsequent LE connections, i.e. makes it impossible to
pair devices over LE.

Let me know if there are any issues pulling. Thanks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-11 14:16:07 -07:00
LEROY Christophe
c68875fa82 net: fs_enet: mask interrupts for TX partial frames.
We are not interested in interrupts for partially transmitted frames.
Unlike SCC and FCC, the FEC doesn't handle the I bit in buffer
descriptors, instead it defines two interrupt bits, TXB and TXF.

We have to mask TXB in order to only get interrupts once the
frame is fully transmitted.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-11 12:05:34 -07:00
LEROY Christophe
8961822c46 net: fs_enet: explicitly remove I flag on TX partial frames
We are not interested in interrupts for partially transmitted frames,
so we have to clear BD_ENET_TX_INTR explicitly; otherwise it may remain set
from a previously used descriptor.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-11 12:05:34 -07:00
Linus Torvalds
edf15b4d4b Merge tag 'fbdev-fixes-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux
Pull fbdev fixes from Tomi Valkeinen:
 - fix display regression on Versatile boards
 - fix OF node refcount bugs on omapdss
 - fix WARN about clock prepare on pxa3xx_gcu
 - fix mem leak in videomode helpers
 - fix fbconsole related boot problem on sun7i-a20-olinuxino-micro

* tag 'fbdev-fixes-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux:
  fbcon: unconditionally initialize cursor blink interval
  video: Fix possible leak in of_get_videomode()
  video: fbdev: pxa3xx_gcu: prepare the clocks
  OMAPDSS: Fix omap_dss_find_output_by_port_node() port refcount decrement
  OMAPDSS: Fix node refcount leak in omapdss_of_get_next_port()
  fbdev: select versatile helpers for the integrator
2015-08-11 10:23:59 -07:00
Olof Johansson
e2e927c823 Merge tag 'omap-for-v4.2/fixes-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into fixes
Few trivial omap MMC regression fixes for card voltages where
the syscon areas for PBIAS regulator were missing "simple-bus"
that prevents probing of the children in the mapped region.

This probably was not noticed earlier as the bootloader has
already configured the regulator for the card in the slot.

* tag 'omap-for-v4.2/fixes-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  ARM: dts: dra7: Fix broken pbias device creation
  ARM: dts: OMAP5: Fix broken pbias device creation
  ARM: dts: OMAP4: Fix broken pbias device creation
  ARM: dts: omap243x: Fix broken pbias device creation

Signed-off-by: Olof Johansson <olof@lixom.net>
2015-08-11 15:22:10 +02:00
Nathan Lynch
09edea4f8f ARM: 8410/1: VDSO: fix coarse clock monotonicity regression
Since 906c55579a ("timekeeping: Copy the shadow-timekeeper over the
real timekeeper last") it has become possible on ARM to:

- Obtain a CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE timestamp
  via syscall.
- Subsequently obtain a timestamp for the same clock ID via VDSO which
  predates the first timestamp (by one jiffy).

This is because ARM's update_vsyscall is deriving the coarse time
using the __current_kernel_time interface, when it should really be
using the timekeeper object provided to it by the timekeeping core.
It happened to work before only because __current_kernel_time would
access the same timekeeper object which had been passed to
update_vsyscall.  This is no longer the case.

Cc: stable@vger.kernel.org
Fixes: 906c55579a ("timekeeping: Copy the shadow-timekeeper over the real timekeeper last")
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-11 13:42:44 +01:00
Julien Grall
c22fe519e7 xen/xenbus: Don't leak memory when unmapping the ring on HVM backend
The commit ccc9d90a9a "xenbus_client:
Extend interface to support multi-page ring" removes the call to
free_xenballooned_pages() in xenbus_unmap_ring_vfree_hvm(), leaking a
page for every shared ring.

Only backends running in HVM domains were affected.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Cc: <stable@vger.kernel.org>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
2015-08-11 11:05:58 +01:00
David Vrabel
ad6cd7bafc Revert "xen/events/fifo: Handle linked events when closing a port"
This reverts commit fcdf31a7c1.

This was causing a WARNING whenever a PIRQ was closed since
shutdown_pirq() is called with irqs disabled.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: <stable@vger.kernel.org>
2015-08-11 11:05:42 +01:00
Marek Szyprowski
9992349a54 drm/exynos/fimc: fix runtime pm support
Once pm_runtime_set_active() gets called, the kernel assumes that given
device has already enabled runtime pm and will call pm_runtime_suspend()
without matching pm_runtime_resume(). In case of DRM FIMC IPP driver,
this will result in calling clk_disable() without respective call to
clk_enable(). This patch removes call to pm_runtime_set_active() to
ensure that pm_runtime_suspend/resume calls will match.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
2015-08-11 17:21:35 +09:00
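
A hedged sketch of the probe-path change described above (function name and the
elided setup are placeholders): forcing the runtime-PM status to "active" before
enabling runtime PM lets the core call the runtime-suspend handler (clk_disable())
without a prior runtime-resume (clk_enable()); dropping that call keeps the pair
balanced.

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int fimc_probe_sketch(struct platform_device *pdev)
{
        /* ... clocks, resources, etc. ... */

        /*
         * Before the fix (only valid if the device really is already
         * powered and clocked):
         *
         *      pm_runtime_set_active(&pdev->dev);
         *      pm_runtime_enable(&pdev->dev);
         *
         * With the status forced to "active", the PM core may invoke the
         * runtime-suspend callback without ever having called runtime-resume.
         */
        pm_runtime_enable(&pdev->dev);  /* after the fix: only this */

        return 0;
}
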
Andrzej Hajda
2c5f70ef58 drm/exynos/mixer: always update INT_EN cache
The INT_EN cache field was updated only by mixer_enable_vblank.
The patch makes the mixer_disable_vblank function update it as well.

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
2015-08-11 17:21:35 +09:00
Andrzej Hajda
4f98f9446f drm/exynos/mixer: correct vsync configuration sequence
Specification advises to clear vsync indicator before configuring vsync.

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
2015-08-11 17:21:35 +09:00
Andrzej Hajda
9859e20371 drm/exynos/mixer: fix interrupt clearing
The driver used incorrect flags to clear interrupt status.
The patch fixes it.

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
2015-08-11 17:21:34 +09:00
Andrzej Hajda
e6e771dc05 drm/exynos/hdmi: fix edid memory leak
The EDID returned by drm_get_edid should be freed.
The patch fixes it.

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Joonyoung Shim <jy0922.shim@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
2015-08-11 17:21:34 +09:00
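
A condensed sketch of the pattern in a typical get_modes() callback (the function
name and the ddc adapter argument are placeholders): drm_get_edid() returns a
kmalloc'd copy, so it must be kfree'd once the modes have been added.

#include <linux/slab.h>

static int hdmi_get_modes_sketch(struct drm_connector *connector,
                                 struct i2c_adapter *ddc)
{
        struct edid *edid;
        int count;

        edid = drm_get_edid(connector, ddc);    /* returns a kmalloc'd copy */
        if (!edid)
                return 0;

        drm_mode_connector_update_edid_property(connector, edid);
        count = drm_add_edid_modes(connector, edid);

        kfree(edid);                            /* the fix: free the copy */
        return count;
}
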
Hyungwon Hwang
f2d016011c drm/exynos: gsc: fix wrong bitwise operation for swap detection
The rotation bits are not used exclusively, so GSC_IN_ROT_270 cannot
be used for swap detection: its definition is the same as
GSC_IN_ROT_MASK. It is enough to check whether the GSC_IN_ROT_90 bit is
set to decide whether width / height size swapping is needed.

Signed-off-by: Hyungwon Hwang <human.hwang@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
2015-08-11 17:21:34 +09:00
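
A sketch with made-up bit values (the real register layout may differ) of why testing
the single 90-degree bit is sufficient for swap detection in the commit above:

/* Illustrative bit values only -- not the actual register layout. */
#define GSC_IN_ROT_90   (1 << 16)
#define GSC_IN_ROT_180  (2 << 16)
#define GSC_IN_ROT_270  (GSC_IN_ROT_90 | GSC_IN_ROT_180)
#define GSC_IN_ROT_MASK (GSC_IN_ROT_90 | GSC_IN_ROT_180)

/*
 * Width/height must be swapped for 90- and 270-degree rotation.  Both of
 * those set the 90-degree bit, so testing that one bit is enough; testing
 * against GSC_IN_ROT_270 is not reliable, because its value is identical
 * to GSC_IN_ROT_MASK.
 */
static int gsc_needs_size_swap(unsigned int cfg)
{
        return (cfg & GSC_IN_ROT_90) != 0;
}
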
Eric Dumazet
3257d8b12f inet: fix possible request socket leak
In commit b357a364c5 ("inet: fix possible panic in
reqsk_queue_unlink()"), I missed fact that tcp_check_req()
can return the listener socket in one case, and that we must
release the request socket refcount or we leak it.

Tested:

 Following packetdrill test template shows the issue

0     socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0    setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0    bind(3, ..., ...) = 0
+0    listen(3, 1) = 0

+0    < S 0:0(0) win 2920 <mss 1460,sackOK,nop,nop>
+0    > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK>
+.002 < . 1:1(0) ack 21 win 2920
+0    > R 21:21(0)

Fixes: b357a364c5 ("inet: fix possible panic in reqsk_queue_unlink()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 21:17:45 -07:00
Eric Dumazet
2235f2ac75 inet: fix races with reqsk timers
reqsk_queue_destroy() and reqsk_queue_unlink() should use
del_timer_sync() instead of del_timer() before calling reqsk_put(),
otherwise we could free a req still used by another cpu.

But before doing so, reqsk_queue_destroy() must release the syn_wait_lock
spinlock or risk a deadlock, as reqsk_timer_handler() might
need to take this same spinlock from reqsk_queue_unlink() (called from
inet_csk_reqsk_queue_drop()).

Fixes: fa76ce7328 ("inet: get rid of central tcp/dccp listener timer")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 21:17:29 -07:00
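
A rough sketch of the ordering described above (names abbreviated, unlink details
elided): the spinlock has to be dropped before del_timer_sync(), because the timer
handler may need the same lock, and only del_timer_sync() guarantees no other CPU is
still inside the handler when the request socket reference is dropped.

#include <linux/spinlock.h>
#include <linux/timer.h>
#include <net/request_sock.h>

static void reqsk_unlink_and_put(struct request_sock *req,
                                 spinlock_t *syn_wait_lock)
{
        spin_lock_bh(syn_wait_lock);
        /* ... unlink req from the listener's queue ... */
        spin_unlock_bh(syn_wait_lock);

        /*
         * Waiting for the handler must happen outside the lock, otherwise
         * reqsk_timer_handler() taking the same lock would deadlock us.
         * Plain del_timer() would let a still-running handler keep using
         * req after it is freed.  Drop the timer's reference only if we
         * actually cancelled a pending timer; if it already ran, the
         * handler drops it itself.
         */
        if (del_timer_sync(&req->rsk_timer))
                reqsk_put(req);
}
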
Fabio Estevam
9d332d92a9 mkiss: Fix error handling in mkiss_open()
If register_netdev() fails we are not propagating the error and
we return success because ax_open() succeeded previously.

Fix this by checking the return value of ax_open() and
register_netdev() and propagate the error in case of failure.

Reported-by: RUC_Soft_Sec <zy900702@163.com>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 21:10:31 -07:00
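
A sketch of the reworked open path (labels, cleanup details and the ax_open() helper
are illustrative): both return values are checked and propagated instead of returning
success unconditionally.

#include <linux/netdevice.h>

static int mkiss_open_sketch(struct net_device *dev)
{
        int err;

        err = ax_open(dev);             /* was already checked before */
        if (err)
                goto out_free;

        err = register_netdev(dev);     /* the fix: check this one too */
        if (err)
                goto out_close;

        return 0;

out_close:
        /* ... undo ax_open() ... */
out_free:
        free_netdev(dev);
        return err;
}
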
David S. Miller
182554570a Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf
Pablo Neira Ayuso says:

====================
Netfilter fixes for net

The following patchset contains five Netfilter fixes for your net tree,
they are:

1) Silence a warning on falling back to vmalloc(). Since 88eab472ec, we can
   easily hit this warning message, that gets users confused. So let's get rid
   of it.

2) Recently when porting the template object allocation on top of kmalloc to
   fix the netns dependencies between x_tables and conntrack, the error
   checks where left unchanged. Remove IS_ERR() and check for NULL instead.
   Patch from Dan Carpenter.

3) Don't ignore gfp_flags in the new nf_ct_tmpl_alloc() function, from
   Joe Stringer.

4) Fix a crash due to NULL pointer dereference in ip6t_SYNPROXY, patch from
   Phil Sutter.

5) The sequence number of the Syn+ack that is sent from SYNPROXY to clients is
   not adjusted through our NAT infrastructure; as a result the client may
   ignore this TCP packet and the TCP flow hangs until the client probes us.  Also
   from Phil Sutter.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 21:03:25 -07:00
Linus Torvalds
7a834ba5e2 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid
Pull HID fixes from Jiri Kosina:

 - fix for bounds limit calculation in uclogic driver, by Dan Carpenter

 - fix for use-after-free during device removal, by Krzysztof Kozlowski

 - fix for userspace regression (that became apparent only with shiny
   new libinput, so it's not that bad, but I still consider it 4.2
   material), in wacom driver, by Jason Gerecke

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
  HID: wacom: Report correct device resolution when using the wireless adapter
  HID: hid-input: Fix accessing freed memory during device disconnect
  HID: uclogic: fix limit in uclogic_tablet_enable()
2015-08-10 15:16:48 -07:00
Jason Gerecke
0be017120b HID: wacom: Report correct device resolution when using the wireless adapter
The 'wacom_wireless_work' function does not recalculate the tablet's
resolution, causing the value contained in the 'features' struct to
always be reported to userspace. This value is valid only for the pen
interface, meaning that the value will be incorrect for the touchpad (if
present). This in particular causes problems for libinput which relies
on the reported resolution being correct.

This patch adds the necessary calls to recalculate the resolution for
each interface. This requires a little bit of code shuffling since both
the 'wacom_set_default_phy' and 'wacom_calculate_res' are declared below
their new first point of use in 'wacom_wireless_work'.

Signed-off-by: Jason Gerecke <jason.gerecke@wacom.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2015-08-10 23:49:56 +02:00
David S. Miller
875a74b6d6 Merge branch 'bnx2x-fixes'
Yuval Mintz says:

====================
bnx2x: small fixes

This adds 2 small fixes, one to error flows during memory release
and the other to flash writes via ethtool API.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 14:31:59 -07:00
Yuval Mintz
0ea853dfa9 bnx2x: Free NVRAM lock at end of each page
Writing each 4Kb page into flash might take up to ~100 milliseconds,
during which time the management firmware cannot access the nvram for its
own uses.

The firmware upgrade utility uses the ethtool API to burn new flash images
for the device, doing so by writing several pages' worth
of data on each command. Such action might create problems for the
management firmware, as the nvram might not be accessible for a long time.

This patch changes the write implementation, releasing the nvram lock on
the completion of each page, allowing the management firmware time to
claim it and perform its own required actions.

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 14:31:59 -07:00
Yuval Mintz
e1615903eb bnx2x: Prevent null pointer dereference on SKB release
On error flows it's possible to free an SKB even if it was not allocated.

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: Ariel Elior <Ariel.Elior@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 14:31:58 -07:00
Dan Carpenter
21a447637d cxgb4: missing curly braces in t4_setup_debugfs()
There were missing curly braces so it means we call add_debugfs_mem()
unintentionally.

Fixes: 3ccc6cf74d ('cxgb4: Adds support for T6 adapter')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 14:26:26 -07:00
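
The bug class is easy to see in isolation (identifiers here are illustrative):
indentation suggests both calls are conditional, but without braces only the first
one is.

/* Without braces only the first statement is guarded: */
if (is_t6(adap))
        size = t6_memwin_size(adap);
        add_debugfs_mem(adap, "mc0", MEM_MC0, size);    /* always runs */

/* With braces both statements are conditional, as intended: */
if (is_t6(adap)) {
        size = t6_memwin_size(adap);
        add_debugfs_mem(adap, "mc0", MEM_MC0, size);
}
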
Benjamin Poirier
7a76a021cd net-timestamp: Update skb_complete_tx_timestamp comment
After "62bccb8 net-timestamp: Make the clone operation stand-alone from phy
timestamping" the hwtstamps parameter of skb_complete_tx_timestamp() may no
longer be NULL.

Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 13:35:37 -07:00
Florian Westphal
330567b71d ipv6: don't reject link-local nexthop on other interface
48ed7b26fa ("ipv6: reject locally assigned nexthop addresses") is too
strict; it rejects the following corner case:

ip -6 route add default via fe80::1:2:3 dev eth1

[ where fe80::1:2:3 is assigned to a local interface, but not eth1 ]

Fix this by restricting search to given device if nh is linklocal.

Joint work with Hannes Frederic Sowa.

Fixes: 48ed7b26fa ("ipv6: reject locally assigned nexthop addresses")
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 13:29:22 -07:00
Daniel Borkmann
4e7c133068 netlink: make sure -EBUSY won't escape from netlink_insert
Linus reports the following deadlock on rtnl_mutex; triggered only
once so far (extract):

[12236.694209] NetworkManager  D 0000000000013b80     0  1047      1 0x00000000
[12236.694218]  ffff88003f902640 0000000000000000 ffffffff815d15a9 0000000000000018
[12236.694224]  ffff880119538000 ffff88003f902640 ffffffff81a8ff84 00000000ffffffff
[12236.694230]  ffffffff81a8ff88 ffff880119c47f00 ffffffff815d133a ffffffff81a8ff80
[12236.694235] Call Trace:
[12236.694250]  [<ffffffff815d15a9>] ? schedule_preempt_disabled+0x9/0x10
[12236.694257]  [<ffffffff815d133a>] ? schedule+0x2a/0x70
[12236.694263]  [<ffffffff815d15a9>] ? schedule_preempt_disabled+0x9/0x10
[12236.694271]  [<ffffffff815d2c3f>] ? __mutex_lock_slowpath+0x7f/0xf0
[12236.694280]  [<ffffffff815d2cc6>] ? mutex_lock+0x16/0x30
[12236.694291]  [<ffffffff814f1f90>] ? rtnetlink_rcv+0x10/0x30
[12236.694299]  [<ffffffff8150ce3b>] ? netlink_unicast+0xfb/0x180
[12236.694309]  [<ffffffff814f5ad3>] ? rtnl_getlink+0x113/0x190
[12236.694319]  [<ffffffff814f202a>] ? rtnetlink_rcv_msg+0x7a/0x210
[12236.694331]  [<ffffffff8124565c>] ? sock_has_perm+0x5c/0x70
[12236.694339]  [<ffffffff814f1fb0>] ? rtnetlink_rcv+0x30/0x30
[12236.694346]  [<ffffffff8150d62c>] ? netlink_rcv_skb+0x9c/0xc0
[12236.694354]  [<ffffffff814f1f9f>] ? rtnetlink_rcv+0x1f/0x30
[12236.694360]  [<ffffffff8150ce3b>] ? netlink_unicast+0xfb/0x180
[12236.694367]  [<ffffffff8150d344>] ? netlink_sendmsg+0x484/0x5d0
[12236.694376]  [<ffffffff810a236f>] ? __wake_up+0x2f/0x50
[12236.694387]  [<ffffffff814cad23>] ? sock_sendmsg+0x33/0x40
[12236.694396]  [<ffffffff814cb05e>] ? ___sys_sendmsg+0x22e/0x240
[12236.694405]  [<ffffffff814cab75>] ? ___sys_recvmsg+0x135/0x1a0
[12236.694415]  [<ffffffff811a9d12>] ? eventfd_write+0x82/0x210
[12236.694423]  [<ffffffff811a0f9e>] ? fsnotify+0x32e/0x4c0
[12236.694429]  [<ffffffff8108cb70>] ? wake_up_q+0x60/0x60
[12236.694434]  [<ffffffff814cba09>] ? __sys_sendmsg+0x39/0x70
[12236.694440]  [<ffffffff815d4797>] ? entry_SYSCALL_64_fastpath+0x12/0x6a

It seems so far plausible that the recursive call into rtnetlink_rcv()
looks suspicious. One way this could trigger is that the sender's
NETLINK_CB(skb).portid was wrongly 0 (which is the rtnetlink socket), so
the rtnl_getlink() request's answer would be sent to the kernel instead
of to the actual user process, thus grabbing rtnl_mutex() twice.

One theory would be that netlink_autobind() triggered via netlink_sendmsg()
internally overwrites the -EBUSY error with 0, but that the error wrongly
originates from __netlink_insert() instead. That would reset the
socket's portid to 0, which is then filled into NETLINK_CB(skb).portid
later on. As commit d470e3b483 ("[NETLINK]: Fix two socket hashing bugs.")
also puts it, -EBUSY should not be propagated from netlink_insert().

It looks like it's very unlikely to reproduce. We need to trigger the
rhashtable_insert_rehash() handler under a situation where rehashing
currently occurs (one /rare/ way would be to hit ht->elasticity limits
while not filled enough to expand the hashtable, but that would rather
require a specifically crafted bind() sequence with knowledge about
destination slots, seems unlikely). It probably makes sense to guard
__netlink_insert() in any case and remap that error. It was suggested
that EOVERFLOW might be better than an already overloaded ENOMEM.

Reference: http://thread.gmane.org/gmane.linux.network/372676
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 10:59:10 -07:00
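
A minimal sketch of the guard being proposed above (the struct types and the
hashtable_insert() helper are stand-ins for the internal rhashtable call): a
transient -EBUSY from the insert is remapped so it can never be mistaken for a
successful autobind.

static int __netlink_insert_sketch(struct netlink_table *table, struct sock *sk)
{
        int err;

        err = hashtable_insert(table, sk);      /* stand-in for the rhashtable insert */

        /*
         * -EBUSY here only means "rehash in progress, try again"; letting it
         * escape would allow netlink_autobind() to misinterpret the result
         * and leave the socket bound to portid 0.  Report it as EOVERFLOW.
         */
        if (err == -EBUSY)
                err = -EOVERFLOW;

        return err;
}
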
Ivan Vecera
ade4dc3e61 bna: fix interrupts storm caused by erroneous packets
The commit "e29aa33 bna: Enable Multi Buffer RX" moved packets counter
increment from the beginning of the NAPI processing loop after the check
for erroneous packets so they are never accounted. This counter is used
to inform firmware about number of processed completions (packets).
As these packets are never acked the firmware fires IRQs for them again
and again.

Fixes: e29aa33 ("bna: Enable Multi Buffer RX")
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Acked-by: Rasesh Mody <rasesh.mody@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 10:58:37 -07:00
David S. Miller
ea70858423 Merge branch 'mvpp2-fixes'
Marcin Wojtas says:

====================
Fixes for the network driver of Marvell Armada 375 SoC

This is a set of three patches that fix long-lasting problems implemented in
the initial support for the Armada 375 network controller.

Due to an inappropriate approach to handling per-CPU processing of sent
packets on the TX path, the driver suffered from numerous problems, such as RCU
stalls. Those have been fixed; the details can be found in the commit
logs. The patches were intensively tested on top of v4.2-rc5.

I'm looking forward to any comments or remarks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 10:57:01 -07:00
Marcin Wojtas
edc660fa09 net: mvpp2: replace TX coalescing interrupts with hrtimer
The PP2 controller is capable of per-CPU TX processing, which means there are
per-CPU banked register sets and queues. The current version of the driver supports
TX packet coalescing - once the number of packets sent on a given CPU reaches a
threshold value, an IRQ occurs. However, there is a single interrupt line responsible
for CPU0/1 TX and RX events (the latter is not per-CPU, as the hardware does not
support RSS).

When the top-half executes, the interrupt cause is not known. This is why in the
NAPI poll function, along with RX processing, the IRQ cause register on both
CPUs is accessed in order to determine on which of them the TX coalescing
threshold might have been reached. Thus the egress processing and releasing of the
buffers is able to take place on the corresponding CPU. The hitherto approach led
to an illegal usage of the on_each_cpu function in softirq context.

The problem is solved by giving up TX coalescing interrupts and separating
egress finalization from NAPI processing. For that purpose a method using an
hrtimer is introduced. In the main transmit function (mvpp2_tx) buffers are released
once a software coalescing threshold is reached. In case not all the data is
processed, a timer is set on this CPU - in its interrupt context a tasklet is
scheduled in which all queues are processed. Only one timer per CPU can
be running at a time, which is controlled by a dedicated flag.

This commit removes TX processing from NAPI polling function, disables hardware
coalescing and enables hrtimer with tasklet, using new per-CPU port structure
(mvpp2_port_pcpu).

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 10:57:00 -07:00
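
A condensed sketch of the mechanism described above, using the standard hrtimer and
tasklet APIs (structure and function names are simplified and the 1 ms timeout is
illustrative; hrtimer_init()/tasklet_init() are assumed to run at probe time): the
timer only defers work to a tasklet, and the flag keeps at most one pending timer per
CPU.

#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>
#include <linux/types.h>

struct port_pcpu_sketch {
        struct hrtimer          tx_done_timer;
        struct tasklet_struct   tx_done_tasklet;
        bool                    timer_scheduled;
};

static enum hrtimer_restart tx_done_timer_cb(struct hrtimer *timer)
{
        struct port_pcpu_sketch *pcpu =
                container_of(timer, struct port_pcpu_sketch, tx_done_timer);

        /* Buffer release must not run in hard-irq context; defer it. */
        tasklet_schedule(&pcpu->tx_done_tasklet);
        return HRTIMER_NORESTART;
}

/* Called from the transmit path when not all sent buffers were released. */
static void tx_done_timer_kick(struct port_pcpu_sketch *pcpu)
{
        if (pcpu->timer_scheduled)
                return;                         /* one timer per CPU at a time */

        pcpu->timer_scheduled = true;
        hrtimer_start(&pcpu->tx_done_timer, ns_to_ktime(1000000),  /* ~1 ms */
                      HRTIMER_MODE_REL_PINNED);
}
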
Marcin Wojtas
71ce391dfb net: mvpp2: enable proper per-CPU TX buffers unmapping
The mvpp2 driver allows usage of per-CPU TX processing. Once the packets are
prepared independently on each CPU, the hardware enqueues the descriptors in
the common TX queue. After they are sent, the buffers and associated sk_buffs
should be released on the corresponding CPU.

This is why a special index is maintained in order to point to the right data to
be released after transmission takes place. Each per-CPU TX queue comprises an
array of sent sk_buffs, freed in the mvpp2_txq_bufs_free function. However, the
index was also used there for obtaining a descriptor (and therefore a buffer to
be DMA-unmapped) from the common TX queue, which was wrong, because it was not
referring to the current CPU.

This commit enables proper unmapping of sent data buffers by indexing them in
per-CPU queues using a dedicated array for keeping their physical addresses.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 10:57:00 -07:00
Marcin Wojtas
d53793c5d6 net: mvpp2: remove excessive spinlocks from driver initialization
Using spinlock protection during one-time driver initialization is not
necessary. Moreover, it resulted in an invalid GFP_KERNEL allocation under the lock.

This commit removes redundant spinlocks from buffer manager part of mvpp2
initialization.

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reported-by: Alexandre Fournier <alexandre.fournier@wisp-e.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-10 10:57:00 -07:00
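
The underlying rule is worth spelling out: GFP_KERNEL allocations may sleep, so they
must not be made while holding a spinlock; since probe-time initialization has no
concurrent users, the simplest correct form is to drop the lock entirely (a fragment
with placeholder names):

/* before (invalid): a sleeping allocation with a spinlock held */
spin_lock_irqsave(&bm->lock, flags);
pool->virt_addr = kmalloc(size, GFP_KERNEL);    /* may sleep with IRQs off */
spin_unlock_irqrestore(&bm->lock, flags);

/* after: one-time init path, no lock needed */
pool->virt_addr = kmalloc(size, GFP_KERNEL);
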
Linus Torvalds
2b9bea035a Merge tag 'mfd-fixes-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd
Pull MFD fixes from Lee Jones:
 - fix dependency issues on ChromeOS platforms
 - fix runtime PM issues on Arizona
 - fix IRQ/suspend race on Arizona

* tag 'mfd-fixes-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd:
  mfd: Remove MFD_CROS_EC_SPI depends on OF
  platform/chrome: Don't make CHROME_PLATFORMS depends on X86 || ARM
  mfd: arizona: Fix initialisation of the PM runtime
  mfd: arizona: Fix race between runtime suspend and IRQs
2015-08-10 10:48:11 -07:00
Linus Torvalds
016a9f50e2 Merge tag 'ntb-4.2-rc7' of git://github.com/jonmason/ntb
Pull NTB bugfixes from Jon Mason:
 "NTB bug fixes to address transport receive issues, stats, link
  negotiation issues, and string formatting"

* tag 'ntb-4.2-rc7' of git://github.com/jonmason/ntb:
  ntb: avoid format string in dev_set_name
  NTB: Fix dereference before check
  NTB: Fix zero size or integer overflow in ntb_set_mw
  NTB: Schedule to receive on QP link up
  NTB: Fix oops in debugfs when transport is half-up
  NTB: ntb_netdev not covering all receive errors
  NTB: Fix transport stats for multiple devices
  NTB: Fix ntb_transport out-of-order RX update
2015-08-10 10:38:42 -07:00
Linus Torvalds
a3ca013d88 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull RCU pathwalk fix from Al Viro:
 "Another racy use of nd->path.dentry in RCU mode"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  may_follow_link() should use nd->inode
2015-08-10 10:04:47 -07:00
Nathan Lynch
878854a374 arm64: VDSO: fix coarse clock monotonicity regression
Since 906c55579a ("timekeeping: Copy the shadow-timekeeper over the
real timekeeper last") it has become possible on arm64 to:

- Obtain a CLOCK_MONOTONIC_COARSE or CLOCK_REALTIME_COARSE timestamp
  via syscall.
- Subsequently obtain a timestamp for the same clock ID via VDSO which
  predates the first timestamp (by one jiffy).

This is because arm64's update_vsyscall is deriving the coarse time
using the __current_kernel_time interface, when it should really be
using the timekeeper object provided to it by the timekeeping core.
It happened to work before only because __current_kernel_time would
access the same timekeeper object which had been passed to
update_vsyscall.  This is no longer the case.

Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-08-10 15:37:45 +01:00
Jason A. Donenfeld
fc5fee86bd x86/xen: build "Xen PV" APIC driver for domU as well
It turns out that a PV domU also requires the "Xen PV" APIC
driver. Otherwise, the flat driver is used and we get stuck in busy
loops that never exit, such as in this stack trace:

(gdb) target remote localhost:9999
Remote debugging using localhost:9999
__xapic_wait_icr_idle () at ./arch/x86/include/asm/ipi.h:56
56              while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY)
(gdb) bt
 #0  __xapic_wait_icr_idle () at ./arch/x86/include/asm/ipi.h:56
 #1  __default_send_IPI_shortcut (shortcut=<optimized out>,
dest=<optimized out>, vector=<optimized out>) at
./arch/x86/include/asm/ipi.h:75
 #2  apic_send_IPI_self (vector=246) at arch/x86/kernel/apic/probe_64.c:54
 #3  0xffffffff81011336 in arch_irq_work_raise () at
arch/x86/kernel/irq_work.c:47
 #4  0xffffffff8114990c in irq_work_queue (work=0xffff88000fc0e400) at
kernel/irq_work.c:100
 #5  0xffffffff8110c29d in wake_up_klogd () at kernel/printk/printk.c:2633
 #6  0xffffffff8110ca60 in vprintk_emit (facility=0, level=<optimized
out>, dict=0x0 <irq_stack_union>, dictlen=<optimized out>,
fmt=<optimized out>, args=<optimized out>)
    at kernel/printk/printk.c:1778
 #7  0xffffffff816010c8 in printk (fmt=<optimized out>) at
kernel/printk/printk.c:1868
 #8  0xffffffffc00013ea in ?? ()
 #9  0x0000000000000000 in ?? ()

Mailing-list-thread: https://lkml.org/lkml/2015/8/4/755
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
2015-08-10 15:33:10 +01:00
Scot Doyle
2a17d7e80f fbcon: unconditionally initialize cursor blink interval
A sun7i-a20-olinuxino-micro fails to boot when kernel parameter
vt.global_cursor_default=0. The value is copied to vc->vc_deccm
causing the initialization of ops->cur_blink_jiffies to be skipped.
Unconditionally initialize it.

Reported-and-tested-by: Jonathan Liu <net147@gmail.com>
Signed-off-by: Scot Doyle <lkml14@scotdoyle.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
2015-08-10 17:20:32 +03:00
Christian Engelmayer
37b617f9be video: Fix possible leak in of_get_videomode()
In case videomode_from_timings() fails in function of_get_videomode(), the
allocated display timing data is not freed in the exit path. Make sure that
display_timings_release() is called in any case. Detected by Coverity CID
1309681.

Signed-off-by: Christian Engelmayer <cengelma@gmx.at>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
2015-08-10 15:11:12 +03:00
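
A minimal sketch of the fixed flow (the native-mode handling of the real function is
elided): the timings are released whether or not videomode_from_timings() succeeds.

#include <linux/errno.h>
#include <video/display_timing.h>
#include <video/of_display_timing.h>
#include <video/videomode.h>

static int of_get_videomode_sketch(struct device_node *np,
                                   struct videomode *vm, unsigned int index)
{
        struct display_timings *disp;
        int ret;

        disp = of_get_display_timings(np);
        if (!disp)
                return -EINVAL;

        ret = videomode_from_timings(disp, vm, index);

        display_timings_release(disp);  /* the fix: release on both paths */
        return ret;
}
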
Phil Sutter
3c16241c44 netfilter: SYNPROXY: fix sending window update to client
Upon receipt of SYNACK from the server, ipt_SYNPROXY first sends back an ACK to
finish the server handshake, then calls nf_ct_seqadj_init() to initiate
sequence number adjustment of forwarded packets to the client and finally sends
a window update to the client to unblock its TX queue.

Since synproxy_send_client_ack() does not set synproxy_send_tcp()'s nfct
parameter, no sequence number adjustment happens and the client receives the
window update with incorrect sequence number. Depending on client TCP
implementation, this leads to a significant delay (until a window probe is
being sent).

Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-08-10 13:55:07 +02:00
Phil Sutter
96fffb4f23 netfilter: ip6t_SYNPROXY: fix NULL pointer dereference
This happens when networking namespaces are enabled.

Suggested-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-08-10 13:54:44 +02:00
Robert Jarzmik
9e6e35edb3 video: fbdev: pxa3xx_gcu: prepare the clocks
The clocks need to be prepared before being enabled. Without it a
warning appears in the drivers probe path :

WARNING: CPU: 0 PID: 1 at drivers/clk/clk.c:707 clk_core_enable+0x84/0xa0()
Modules linked in:
CPU: 0 PID: 1 Comm: swapper Not tainted 4.2.0-rc3-cm-x300+ #804
Hardware name: CM-X300 module
[<c000ed50>] (unwind_backtrace) from [<c000ce08>] (show_stack+0x10/0x14)
[<c000ce08>] (show_stack) from [<c0017eb4>] (warn_slowpath_common+0x7c/0xb4)
[<c0017eb4>] (warn_slowpath_common) from [<c0017f88>] (warn_slowpath_null+0x1c/0x24)
[<c0017f88>] (warn_slowpath_null) from [<c02d30dc>] (clk_core_enable+0x84/0xa0)
[<c02d30dc>] (clk_core_enable) from [<c02d3118>] (clk_enable+0x20/0x34)
[<c02d3118>] (clk_enable) from [<c0200dfc>] (pxa3xx_gcu_probe+0x148/0x338)
[<c0200dfc>] (pxa3xx_gcu_probe) from
[<c022eccc>] (platform_drv_probe+0x30/0x94)

Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
2015-08-10 12:25:43 +03:00
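
With the common clock framework, a clock must be prepared before it can be enabled;
clk_prepare_enable() does both and is the usual probe-path idiom (a fragment, with
priv->clk as a placeholder):

/* probe path */
ret = clk_prepare_enable(priv->clk);    /* instead of clk_enable(priv->clk) */
if (ret)
        return ret;

/* remove/error path: the matching teardown */
clk_disable_unprepare(priv->clk);
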
Jyri Sarha
6266f4b19d OMAPDSS: Fix omap_dss_find_output_by_port_node() port refcount decrement
Fix the omap_dss_find_output_by_port_node() port parameter refcount
decrement. The only user of the dss_of_port_get_parent_device()
function is omap_dss_find_output_by_port_node(), and it assumes the
refcount of the port parameter is not decremented by the call.

Signed-off-by: Jyri Sarha <jsarha@ti.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
2015-08-10 12:22:40 +03:00
Jyri Sarha
2b55cb3b04 OMAPDSS: Fix node refcount leak in omapdss_of_get_next_port()
Fix node refcount leak in omapdss_of_get_next_port().

Signed-off-by: Jyri Sarha <jsarha@ti.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
2015-08-10 12:22:35 +03:00
Linus Walleij
2701fa0864 fbdev: select versatile helpers for the integrator
Commit 11c32d7b62
"video: move Versatile CLCD helpers" missed the fact
that the Integrator/CP is also using the helper, and
as a result the platform got only stubs and no graphics.
Add this as a default selection to Kconfig so we have
graphics again.

Fixes: 11c32d7b62 (video: move Versatile CLCD helpers)
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
2015-08-10 12:20:38 +03:00
Kees Cook
e15f940908 ntb: avoid format string in dev_set_name
Avoid any chance of format string expansion when calling dev_set_name.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:22 -04:00
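
The pattern is the usual format-string hygiene fix; if the name can ever contain a
'%', it must not be passed as the format itself (a fragment, variable names
illustrative):

/* unsafe if 'name' can contain format specifiers */
dev_set_name(&ntb->dev, name);

/* safe: constant format, name passed as an argument */
dev_set_name(&ntb->dev, "%s", name);
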
Allen Hubbe
30a4bb1e5a NTB: Fix dereference before check
Remove early dereference of a pointer that is checked later in the code.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:22 -04:00
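
The generic shape of the problem and fix (struct and field names are illustrative):
the pointer was dereferenced to initialize a local before the NULL check that was
meant to protect it.

/* before: the pointer is dereferenced while it may still be NULL */
struct qp_ctx *qp = filp->private_data;
struct device *dev = qp->dev;           /* crashes if qp == NULL */

if (!qp)
        return -EINVAL;

/* after: check first, dereference afterwards */
struct qp_ctx *qp = filp->private_data;
struct device *dev;

if (!qp)
        return -EINVAL;
dev = qp->dev;
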
Allen Hubbe
8c9edf63e7 NTB: Fix zero size or integer overflow in ntb_set_mw
A plain 32 bit integer will overflow for values over 4GiB.

Change the plain integer size to the appropriate size type in
ntb_set_mw.  Change the type of the size parameter and two local
variables used for size.

Even if there is no overflow, a size of zero is invalid here.

Reported-by: Juyoung Jung <jjung@micron.com>
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:22 -04:00
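
A sketch of the size handling after the fix (names simplified, allocation elided):
the size travels in a 64-bit-capable type and a zero size is rejected up front.

static int ntb_set_mw_sketch(struct ntb_transport_ctx *nt, int num_mw,
                             resource_size_t size)
{
        resource_size_t xlat_size;

        if (!size)
                return -EINVAL;         /* a zero-sized window is invalid */

        /* no longer truncated to 32 bits for windows of 4 GiB or more */
        xlat_size = round_up(size, SZ_4K);

        /* ... allocate the buffer and program the memory window ... */
        return 0;
}
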
Allen Hubbe
8b5a22d8f1 NTB: Schedule to receive on QP link up
Schedule to receive on QP link up, to make sure that the doorbell is
properly cleared for interrupts.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:22 -04:00
Dave Jiang
260bee9451 NTB: Fix oops in debugfs when transport is half-up
When the remote side is not up, we do not have all the context for the
transport, and that causes a NULL pointer access. Have the debugfs reads check
whether the transport is up before accessing it.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:22 -04:00
Dave Jiang
da4eb27a2c NTB: ntb_netdev not covering all receive errors
ntb_netdev is allowing the link to come up even when -ENOMEM is returned
from ntb_transport_rx_enqueue.  Fix to cover all possible errors.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:21 -04:00
Dave Jiang
c8650fd03d NTB: Fix transport stats for multiple devices
Currently the debugfs does not have files for all NTB transport queue
pairs.  When there are multiple NTBs present in a system, the QP names
of the last transport clobber the names of previously added transport
QPs.  Only the last added QPs can be observed via debugfs.

Create a directory per NTB transport to associate the QPs with that
transport.  Name the directory the same as the PCI device.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:21 -04:00
Allen Hubbe
da2e5ae561 NTB: Fix ntb_transport out-of-order RX update
It was possible for a synchronous update of the RX index in the error
case to get ahead of the asynchronous RX index update in the normal
case.  Change the RX processing to preserve an RX completion order.

There were two error cases.  First, if a buffer is not present to
receive data, there would be no queue entry to preserve the RX
completion order.  Instead of dropping the RX frame, leave the RX frame
in the ring.  Schedule RX processing when RX entries are enqueued, in
case there are RX frames waiting in the ring to be received.

Second, if a buffer is too small to receive data, drop the frame in the
ring, mark the RX entry as done, and indicate the error in the RX entry
length.  Check for a negative length in the receive callback in
ntb_netdev, and count occurrences as rx_length_errors.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
2015-08-09 16:32:21 -04:00
Geert Uytterhoeven
54d46b7fbc clockevents/drivers/sh_cmt: Only perform clocksource suspend/resume if enabled
Currently the sh_cmt clocksource timer is disabled or enabled
unconditionally on clocksource suspend resp. resume, even if a
better clocksource is present (e.g. arch_sys_counter) and the
sh_cmt clocksource is not enabled.

As sh_cmt is a syscore device when its timer is enabled, this
may lead to a genpd.prepared_count imbalance in the presence of
PM Domains, which may cause a lock-up during reboot after s2ram.

During suspend:
  - pm_genpd_prepare() is called for all non-syscore devices (incl.
    sh_cmt), increasing genpd.prepared_count for each device,
  - clocksource.suspend() is called for all clocksource devices,
  - sh_cmt_clocksource_suspend() calls sh_cmt_stop(), which is a no-op
    as the clocksource was not enabled.

During resume:
  - clocksource.resume() is called for all clocksource devices,
  - sh_cmt_clocksource_resume() calls sh_cmt_start(), which enables the
    clocksource timer, and turns sh_cmt into a syscore device,
  - pm_genpd_complete() is called for all non-syscore devices (excl.
    sh_cmt now!), decreasing genpd.prepared_count for each device but
    sh_cmt.

Now genpd.prepared_count of the PM Domain containing sh_cmt is
still 1 instead of zero.  On subsequent suspend/resume cycles,
sh_cmt is still a syscore device, hence it's skipped for
pm_genpd_{prepare,complete}(), keeping the imbalance of
genpd.prepared_count at 1.

During reboot:

  - platform_drv_shutdown() is called for any platform device that has
    a driver with a .shutdown() method (only rcar-dmac on R-Car Gen2),

  - platform_drv_shutdown() calls dev_pm_domain_detach(), which
    calls genpd_dev_pm_detach(),

  - genpd_dev_pm_detach() keeps calling pm_genpd_remove_device() until
    it doesn't return -EAGAIN[*],

  - If the device is part of the same PM Domain as sh_cmt,
    pm_genpd_remove_device() always fails with -EAGAIN due to
    genpd.prepared_count > 0.

  - Infinite loop in genpd_dev_pm_detach()[*].

[*] Commit 93af5e9354 ("PM / Domains: Avoid infinite loops in
    attach/detach code") already limited the number of loop iterations,
    avoiding the lock-up.

To fix this, only disable or enable the clocksource timer on
clocksource suspend resp. resume if the clocksource was enabled.

This was tested on r8a7791/koelsch with the CPG Clock Domain:

  - using arch_sys_counter as the clocksource, which is the default, and
    which showed the problem,

  - using sh_cmt as a clocksource ("echo ffca0000.timer > \
    /sys/devices/system/clocksource/clocksource0/current_clocksource"),
    which behaves the same as before.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1438875126-12596-2-git-send-email-daniel.lezcano@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-08 10:50:08 +02:00
Juergen Gross
4809146b86 x86/ldt: Correct FPU emulation access to LDT
Commit 37868fe113 ("x86/ldt: Make modify_ldt synchronous")
introduced a new struct ldt_struct anchored at mm->context.ldt.

Adapt the x86 fpu emulation code to use that new structure.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Cc: <stable@vger.kernel.org> # On top of: 37868fe113: x86/ldt: Make modify_ldt synchronous
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: billm@melbpc.org.au
Link: http://lkml.kernel.org/r/1438883674-1240-1-git-send-email-jgross@suse.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-08 10:20:45 +02:00
Juergen Gross
136d9d83c0 x86/ldt: Correct LDT access in single stepping logic
Commit 37868fe113 ("x86/ldt: Make modify_ldt synchronous")
introduced a new struct ldt_struct anchored at mm->context.ldt.

convert_ip_to_linear() was changed to reflect this, but indexing
into the ldt has to be changed as the pointer is no longer void *.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Andy Lutomirski <luto@kernel.org>
Cc: <stable@vger.kernel.org> # On top of: 37868fe113: x86/ldt: Make modify_ldt synchronous
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bp@suse.de
Link: http://lkml.kernel.org/r/1438848278-12906-1-git-send-email-jgross@suse.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-08 10:20:45 +02:00
Robert Jarzmik
b93028c9af clk: pxa: pxa3xx: fix CKEN register access
Clocks 0 to 31 are on CKENA, and not CKENB. The clock register names
were inadequately inverted. As a consequence, all clock operations were
happening on CKENB, because almost all but 2 clocks are on CKENA.

As the clocks were activated by the bootloader in the former tests, it
escaped the testing that the wrong clock gate was manipulated. The error
was revealed by changing the pxa3xx-nand driver to a module, where upon
unloading, the wrong clock was disabled in CKENB.

Fixes: 9bbb8a338f ("clk: pxa: add pxa3xx clock driver")
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2015-08-07 16:53:13 -07:00
Carol L Soto
fe1e1876d8 net/mlx5_core: Set log_uar_page_sz for non 4K page size architecture
The driver failed to configure the page size for architectures with a page size
different from 4K.

Fixes: 938fe83 ("net/mlx5_core: New device capabilities handling")
Signed-off-by: Carol L Soto <clsoto@linux.vnet.ibm.com>
Acked-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 15:55:48 -07:00
David S. Miller
95a428f3be Merge tag 'batman-adv-fix-for-davem' of git://git.open-mesh.org/linux-merge
Antonio Quartulli says:

====================
Included changes:
- prevent DAT from replying on behalf of local clients and confuse L2
  bridges
- fix crash on double list removal of TT objects (tt_local_entry)
- fix crash due to missing NULL checks
- initialize bw values for new GWs objects to prevent memory leak
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 15:51:10 -07:00
Drew Richardson
e83dd37700 ARM: 8409/1: Mark ret_fast_syscall as a function
ret_fast_syscall runs when user space makes a syscall. However it
needs to be marked as such so the ELF information is correct. Before
it was:

   101: 8000f300     0 NOTYPE  LOCAL  DEFAULT    2 ret_fast_syscall

But with this change it correctly shows as:

   101: 8000f300    96 FUNC    LOCAL  DEFAULT    2 ret_fast_syscall

I see this function when using perf to unwind call stacks from kernel
space to user space. Without this change I would need to add some
special case logic when using the vmlinux ELF information.

Signed-off-by: Drew Richardson <drew.richardson@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-07 19:57:02 +01:00
Gregory CLEMENT
998ef5d81c ARM: 8408/1: Fix the secondary_startup function in Big Endian case
Since the commit "b2c3e38a5471 ARM: redo TTBR setup code for LPAE",
the setup code has been reworked. As a result the secondary CPUs
failed to come online in Big Endian.

As explained by Russell, the new code expected the value in r4/r5 to
be the least significant 32bits in r4 and the most significant 32bits
in r5. However, in the secondary code, we load this using ldrd, which
on BE reverses that.

This patch swaps r4/r5 after the ldrd. It is done using xor
instructions so as not to use a temporary register.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-07 19:57:02 +01:00
David S. Miller
d1163e91ce Merge branch 'be2net-fixes'
Sathya Perla says:

====================
be2net: patch set

This patch set contains 2 driver fixes to a Lancer HW issue and a fix
to a double free bug. Pls apply to the "net" tree. Thanks!

Patch 1 now enables filters only after creating RXQs. This is done as
HW issues were observed on Lancer adapters if filters
(flags, mac addrs etc) are enabled *before* creating RXQs. This patch
changes the driver design by enabling filters in be_open() --
instead of be_setup() -- after RXQs are created and buffers posted.

Patch 2 fixes an RX stall issue that was seen on Lancer adapters when
RXQs are destroyed while they are in an "out of buffer" state.
This patch fixes this issue by posting 64 buffers to each RXQ before
destroying them in the close path. This is done after ensuring that no
more new packets are selected for transfer to the RXQs by disabling
interface filters.

Patch 3 protects eqo->affinity_mask variable from being freed twice and
resulting in a crash.  It's now freed only when EQs haven't yet been
destroyed.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 11:53:06 -07:00
Kalesh AP
649886a36b be2net: protect eqo->affinity_mask from getting freed twice
There are paths in the driver such as an unrecoverable error (UE) detection
followed by a driver unload wherein be_clear() is invoked twice.
Individual data structures are reset so that they are not cleaned/freed
twice. This patch does the same for eqo->affinity_mask. It is freed only
if EQs haven't yet been destroyed. This fixes a possible crash when
affinity_mask is freed twice.

Signed-off-by: Kalesh AP <kalesh.purayil@avagotech.com>
Signed-off-by: Sathya Perla <sathya.perla@avagotech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 11:53:06 -07:00
Kalesh AP
99b44304f2 be2net: post buffers before destroying RXQs in Lancer
An RX stall issue was seen on Lancer adapters, when RXQs are destroyed
while they are in an "out of buffer" state. This patch fixes this issue
by posting 64 buffers to each RXQ before destroying them in the close path.
This is done after ensuring that no more new packets are selected for
transfer to the RXQs by disabling interface filters.

Signed-off-by: Kalesh AP <kalesh.purayil@avagotech.com>
Signed-off-by: Sathya Perla <sathya.perla@avagotech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 11:53:05 -07:00
Kalesh AP
bcc84140a6 be2net: enable IFACE filters only after creating RXQs
HW issues were observed on Lancer adapters if IFACE filters
(flags, mac addrs etc) are enabled *before* creating RXQs.  This patch
changes the driver design by enabling filters in be_open() --
instead of be_setup() -- after RXQs are created and buffers posted.
Two new wrapper functions, be_enable_if_filters() and
be_disable_if_filters() are introduced to enable/disable IFACE filters in
be_open()/be_close() respectively. In be_setup() the IFACE is now created
only with the RSS flag.

Signed-off-by: Kalesh AP <kalesh.purayil@avagotech.com>
Signed-off-by: Sathya Perla <sathya.perla@avagotech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 11:53:05 -07:00
Haozhong Zhang
d7add05458 KVM: x86: Use adjustment in guest cycles when handling MSR_IA32_TSC_ADJUST
When kvm_set_msr_common() handles a guest's write to
MSR_IA32_TSC_ADJUST, it will calculate an adjustment based on the data
written by guest and then use it to adjust TSC offset by calling a
call-back adjust_tsc_offset(). The 3rd parameter of adjust_tsc_offset()
indicates whether the adjustment is in host TSC cycles or in guest TSC
cycles. If SVM TSC scaling is enabled, adjust_tsc_offset()
[i.e. svm_adjust_tsc_offset()] will first scale the adjustment;
otherwise, it will just use the unscaled one. As the MSR write here
comes from the guest, the adjustment is in guest TSC cycles. However,
the current kvm_set_msr_common() uses it as a value in host TSC
cycles (by using true as the 3rd parameter of adjust_tsc_offset()),
which can result in an incorrect adjustment of TSC offset if SVM TSC
scaling is enabled. This patch fixes this problem.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-07 13:28:03 +02:00
Paolo Bonzini
18c3626e3d KVM: x86: zero IDT limit on entry to SMM
The recent BlackHat 2015 presentation "The Memory Sinkhole"
mentions that the IDT limit is zeroed on entry to SMM.

This is not documented, and must have changed some time after 2010
(see http://www.ssi.gouv.fr/uploads/IMG/pdf/IT_Defense_2010_final.pdf).
KVM was not doing it, but the fix is easy.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-07 12:46:32 +02:00
Jason Wang
48900cb6af virtio-net: drop NETIF_F_FRAGLIST
virtio declares support for NETIF_F_FRAGLIST, but assumes
that there are at most MAX_SKB_FRAGS + 2 fragments which isn't
always true with a fraglist.

A longer fraglist in the skb will make the call to skb_to_sgvec overflow
the sg array, leading to memory corruption.

Drop NETIF_F_FRAGLIST so we only get what we can handle.

Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 00:13:10 -07:00
Mathieu Olivari
4f7eb70f7b stmmac: dwmac-ipq806x: fix static checker warning
The patch b1c17215d7: "stmmac: add ipq806x glue layer", leads to the
following static checker warning:

.../stmmac/dwmac-ipq806x.c:314 ipq806x_gmac_probe()
warn: double left shift '1 << (1 << gmac->id)'

The NSS_COMMON_CLK_SRC_CTRL_OFFSET macro is used once as an offset, and
once as a mask, which is a bug indeed. We'll fix it by defining the
offset as the real offset value and computing the mask from it when
required.
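
A minimal C sketch of the offset-versus-mask confusion described above; only
the base macro name is taken from the warning, the _BUGGY/_MASK variants are
illustrative:

  /* Buggy pattern: the macro already contains a shift, so using it inside
   * another shift yields the reported '1 << (1 << gmac->id)'. */
  #define NSS_COMMON_CLK_SRC_CTRL_OFFSET_BUGGY(id)  (1 << (id))

  /* Fixed pattern: keep the macro a plain bit offset and derive the mask
   * from it only where a mask is actually needed. */
  #define NSS_COMMON_CLK_SRC_CTRL_OFFSET(id)        (id)
  #define NSS_COMMON_CLK_SRC_CTRL_MASK(id) \
          (1 << NSS_COMMON_CLK_SRC_CTRL_OFFSET(id))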

Tested on IPQ806x ref designs AP148 & DB149.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mathieu Olivari <mathieu@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 00:12:04 -07:00
WingMan Kwok
866b8b18e3 net: netcp: fix unused interface rx buffer size configuration
Prior to this patch, the rx buffer size for each rx queue
of an interface was configurable through dts bindings.
In practice, however, an interface's first rx queue always
uses the usual MTU size (plus the usual overhead), while the
remaining rx queues (if they are enabled by specifying a
non-zero rx queue depth dts binding for the corresponding
interface) always use the page size.  This patch therefore
removes the rx buffer size configuration capability.

Signed-off-by: WingMan Kwok <w-kwok2@ti.com>
Acked-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 00:02:36 -07:00
Ian Campbell
22f54bf932 net: thunderx: remove effective "default y" from Kconfig if ARCH_THUNDER=y
ARCH_THUNDER is enabled not only for kernels built solely for ThunderX but
also for kernels which support multiple platforms (such as distro kernels).
Thus "default ARCH_THUNDER" is inappropriate.

I believe default m is equally frowned upon, so remove the line completely
rather than "default m if ARCH_THUNDER".

Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Sunil Goutham <sgoutham@cavium.com>
Cc: Robert Richter <rric@kernel.org>
Cc: Derek Chickles <derek.chickles@caviumnetworks.com>
Cc: Satanand Burla <satananda.burla@caviumnetworks.com>
Cc: Felix Manlunas <felix.manlunas@caviumnetworks.com>
Cc: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 00:01:37 -07:00
Ivan Vecera
7ebc482202 r8169: enforce RX_MULTI_EN on rtl8168ep/8111ep chips
Enforcing this flag in RxConfig for the mentioned chips fixes netdev
watchdog issues preceded by AMD IOMMU message(s) like:
AMD-Vi: Event logged [IO_PAGE_FAULT device=01:00.0 domain=0x001d address=0x0000000000003000 flags=0x0050]

Note that this flag is also set in Realtek's own driver for these chips.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Tested-by: Alexander Lindqvist <alexander@bitspace.se>
Acked-by: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-07 00:00:10 -07:00
Nikolay Aleksandrov
786c2077ec bridge: netlink: account for the IFLA_BRPORT_PROXYARP_WIFI attribute size and policy
The attribute size wasn't accounted for in the get_slave_size() callback
(br_port_get_slave_size) when it was introduced, so fix it now. Also add
a policy entry for it in br_port_policy.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Fixes: 842a9ae08a ("bridge: Extend Proxy ARP design to allow optional rules for Wi-Fi")
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 23:54:42 -07:00
Nikolay Aleksandrov
355b9f9df1 bridge: netlink: account for the IFLA_BRPORT_PROXYARP attribute size and policy
The attribute size wasn't accounted for in the get_slave_size() callback
(br_port_get_slave_size) when it was introduced, so fix it now. Also add
a policy entry for it in br_port_policy.

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Fixes: 958501163d ("bridge: Add support for IEEE 802.11 Proxy ARP")
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 23:54:42 -07:00
David S. Miller
50e18af16f Merge tag 'wireless-drivers-for-davem-2015-08-04' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers
Kalle Valo says:

====================
iwlwifi:

* a fix for the stuck TFD queue mechanism - it was producing
  noisy false alarms
* a fix for the NIC prepare flow that prevented the driver
  from being able to access the device on certain systems
* a fix for the scan prority handling which allows the
  regular scan to run even if a scheduled scan is already
  running

rsi:

* fix firmware load DMA regression

b43:

* fix extpa_gain check for 2GHz

rtlwifi:

* fix NULL dereference when PCI driver used as an AP
* add missing module parameter declaration for rtl8723be_mod_params.msi_support
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 23:53:34 -07:00
Oleg Nesterov
7ba8bd75dd net: pktgen: don't abuse current->state in pktgen_thread_worker()
Commit 1fbe4b46ca "net: pktgen: kill the Wait for kthread_stop
code in pktgen_thread_worker()" removed (in particular) the final
__set_current_state(TASK_RUNNING) and I didn't notice the previous
set_current_state(TASK_INTERRUPTIBLE). This triggers the warning
in __might_sleep() after return.

Afaics, we can simply remove both set_current_state()'s, and we
could do this a long ago right after ef87979c27 "pktgen: better
scheduler friendliness" which changed pktgen_thread_worker() to
use wait_event_interruptible_timeout().

Reported-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 23:52:44 -07:00
Ross Lagerwall
57b229063a xen/netback: Wake dealloc thread after completing zerocopy work
Waking the dealloc thread before decrementing inflight_packets is racy
because it means the thread may go to sleep before inflight_packets is
decremented. If kthread_stop() has already been called, the dealloc
thread may wait forever with nothing to wake it. Instead, wake the
thread only after decrementing inflight_packets.
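
A minimal sketch of the ordering this describes (names are illustrative, not
the actual xen-netback code):

  #include <linux/atomic.h>
  #include <linux/wait.h>

  static void zerocopy_work_done(atomic_t *inflight_packets,
                                 wait_queue_head_t *dealloc_wq)
  {
          atomic_dec(inflight_packets);   /* finish our accounting first... */
          wake_up(dealloc_wq);            /* ...then wake the dealloc thread */
  }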

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 23:42:48 -07:00
Herbert Xu
a0a2a66024 net: Fix skb_set_peeked use-after-free bug
The commit 738ac1ebb9 ("net: Clone
skb before setting peeked flag") introduced a use-after-free bug
in skb_recv_datagram.  This is because skb_set_peeked may create
a new skb and free the existing one.  As it stands the caller will
continue to use the old freed skb.

This patch fixes it by making skb_set_peeked return the new skb
(or the old one if unchanged).

Fixes: 738ac1ebb9 ("net: Clone skb before setting peeked flag")
Reported-by: Brenden Blanco <bblanco@plumgrid.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Brenden Blanco <bblanco@plumgrid.com>
Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 21:55:47 -07:00
Lucas Stach
14d2b7c1a9 net: fec: fix initial runtime PM refcount
The clocks are initially active and thus the device is marked active.
This still keeps the PM refcount at 0; the pm_runtime_put_autosuspend()
call at the end of probe then leaves us with an invalid refcount of -1,
which in turn leads to the device staying in the suspended state even
though netdev open has been called.

Fix this by initializing the refcount to be consistent with the initial
device status.

Fixes: 8fff755e9f ("net: fec: Ensure clocks are enabled while using mdio bus")

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06 18:53:25 -07:00
Jakub Pawlowski
cb92205bad Bluetooth: fix MGMT_EV_NEW_LONG_TERM_KEY event
This patch fixes how the MGMT_EV_NEW_LONG_TERM_KEY event is built. Right now
the val field is filled with only 1 byte instead of the whole value. This bug
was introduced in
commit 1fc62c526a ("Bluetooth: Fix exposing full value of shortened LTKs")

Before that patch, if you paired with a device through bluetoothd using simple
pairing and then restarted bluetoothd, you would be able to re-connect,
but the device would fail to establish encryption and would terminate the
connection. After this patch, connecting after a bluetoothd restart works
fine.

Signed-off-by: Jakub Pawlowski <jpawlowski@google.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2015-08-06 16:36:03 +02:00
Lucas Stach
1a9fa19095 ARM: imx6: correct i.MX6 PCIe interrupt routing
The PCIe interrupts are also routed through the GPC. This has been
missed from the conversion to stacked IRQ domains as the PCIe
controller uses an explicit interrupt map and thus doesn't inherit
the SoC global interrupt parent.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Cc: <stable@vger.kernel.org> # 4.1
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
2015-08-06 16:30:18 +08:00
Linus Walleij
bf64dd262e ARM: ux500: add an SMP enablement type and move cpu nodes
The "cpus" node cannot be inside the "soc" node, while this
works for the CoreSight blocks, the early boot code will look
for "cpus" directly under the root node, so this is a hard
convention. So move the CPU nodes.

Augment the "reg" property to match what is actually in the
hardware: 0x300 and 0x301 respectively.

Then add an SMP enablement type to be used by the SMP init
code, "ste,dbx500-smp".

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
2015-08-06 10:10:34 +02:00
Kishon Vijay Abraham I
cd4556733b ARM: dts: dra7: Fix broken pbias device creation
commit <d919501feffa> ("ARM: dts: dra7: add minimal l4 bus
layout with control module support") moved pbias_regulator dt node
from being a child node of ocp to be the child node of
scm_conf. After this, the device for pbias_regulator is
not created.

Fix it by adding "simple-bus" compatible property to
scm_conf dt node.

Fixes: d919501fef ("ARM: dts: dra7: add minimal l4 bus
layout with control module support")

Cc: <stable@vger.kernel.org> # v4.1
Suggested-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Tested-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2015-08-05 03:04:07 -07:00
Kishon Vijay Abraham I
70caac3f25 ARM: dts: OMAP5: Fix broken pbias device creation
commit <ed8509edddeb> ("ARM: dts: omap5: add minimal l4 bus
layout with control module support") moved pbias_regulator dt node
from being a child node of ocp to be the child node of
omap5_padconf_global. After this, the device for pbias_regulator is
not created.

Fix it by adding "simple-bus" compatible property to
omap5_padconf_global dt node.

Fixes: ed8509eddd ("ARM: dts: omap5: add minimal l4 bus
layout with control module support")

Cc: <stable@vger.kernel.org> # v4.1
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2015-08-05 03:04:07 -07:00
Kishon Vijay Abraham I
89a898df87 ARM: dts: OMAP4: Fix broken pbias device creation
commit <7415b0b4c645> ("ARM: dts: omap4: add minimal l4 bus layout
with control module support") moved pbias_regulator dt node
from being a child node of ocp to be the child node of
omap4_padconf_global. After this, the device for pbias_regulator
is not created.

Fix it by adding "simple-bus" compatible property to
omap4_padconf_global dt node.

Fixes: 7415b0b4c6 ("ARM: dts: omap4: add minimal l4 bus layout
with control module support")

Cc: <stable@vger.kernel.org> # v4.1
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2015-08-05 03:04:07 -07:00
Kishon Vijay Abraham I
4317c8c912 ARM: dts: omap243x: Fix broken pbias device creation
commit <72b10ac00eb1> ("ARM: dts: omap24xx: add minimal l4 bus
layout with control module support") moved pbias_regulator dt node
from being a child node of ocp to be the child node of
scm_conf. After this, the device for pbias_regulator is
not created.

Fix it by adding "simple-bus" compatible property to
scm_conf dt node.

Fixes: 72b10ac00e ("ARM: dts: omap24xx: add minimal l4 bus
layout with control module support")

Cc: <stable@vger.kernel.org> # v4.1
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
2015-08-05 03:02:17 -07:00
Joe Stringer
f58e5aa7b8 netfilter: conntrack: Use flags in nf_ct_tmpl_alloc()
The flags were ignored for this function when it was introduced. Also
fix the style problem in kzalloc.

Fixes: 0838aa7fc (netfilter: fix netns dependencies with conntrack
templates)
Signed-off-by: Joe Stringer <joestringer@nicira.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-08-05 10:56:43 +02:00
Al Viro
aa65fa35ba may_follow_link() should use nd->inode
Now that we can get there in RCU mode, we shouldn't play with
nd->path.dentry->d_inode - it's not guaranteed to be stable.
Use nd->inode instead.

Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2015-08-04 23:23:50 -04:00
Simon Wunderlich
27a4d5efd4 batman-adv: initialize up/down values when adding a gateway
Without this initialization, gateways which actually announce up/down
bandwidth of 0/0 could be added. If these nodes get purged via
_batadv_purge_orig() later, the gw_node structure does not get removed
since batadv_gw_node_delete() updates the gw_node with up/down
bandwidth of 0/0, and the updating function then discards the change
and does not free gw_node.

This results in leaking the gw_node structures, which reference other
structures: gw_node -> orig_node -> orig_node_ifinfo -> hardif. When
removing the interface later, the open reference on the hardif may cause
hangs with the infamous "unregister_netdevice: waiting for mesh1 to
become free. Usage count = 1" message.

Signed-off-by: Simon Wunderlich <simon@open-mesh.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-08-05 00:31:47 +02:00
Marek Lindner
ef72706a05 batman-adv: protect tt_local_entry from concurrent delete events
The tt_local_entry deletion performed in batadv_tt_local_remove() was neither
protecting against simultaneous deletes nor checking whether the element was
still part of the list before calling hlist_del_rcu().

Replacing the hlist_del_rcu() call with batadv_hash_remove() provides adequate
protection via hash spinlocks as well as an is-element-still-in-hash check to
avoid 'blind' hash removal.

Fixes: 068ee6e204 ("batman-adv: roaming handling mechanism redesign")
Reported-by: alfonsname@web.de
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-08-05 00:31:47 +02:00
Marek Lindner
354136bcc3 batman-adv: fix kernel crash due to missing NULL checks
batadv_softif_vlan_get() may return NULL which has to be verified
by the caller.

Fixes: 35df3b298f ("batman-adv: fix TT VLAN inconsistency on VLAN re-add")
Reported-by: Ryan Thompson <ryan@eero.com>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-08-05 00:31:46 +02:00
Antonio Quartulli
f202a666e9 batman-adv: avoid DAT to mess up LAN state
When a node running DAT receives an ARP request from the LAN for the
first time, it is likely that this node will request the ARP entry
through the distributed ARP table (DAT) in the mesh.

Once a DAT reply is received the asking node must check if the MAC
address for which the IP address has been asked is local. If it is, the
node must drop the ARP reply because the client should have replied on
its own locally.

Forwarding this reply means fooling any L2 bridge (e.g. Ethernet
switches) lying between the batman-adv node and the LAN. This happens
because the L2 bridge will think that the client sending the ARP reply
lies somewhere in the mesh, while this node is sitting in the same LAN.

Reported-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
2015-08-05 00:31:46 +02:00
Krzysztof Kozlowski
0621809e37 HID: hid-input: Fix accessing freed memory during device disconnect
During unbinding the driver was dereferencing a pointer to memory
already freed by power_supply_unregister().

The driver was freeing its internal description of the battery through
pointers stored in the power_supply structure. However, because the core
owns the power supply instance, after calling power_supply_unregister()
this memory is freed and the driver cannot access these members.

Fix this by storing the pointer to internal description of battery in a
local variable before calling power_supply_unregister(), so the pointer
remains valid.
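
A minimal sketch of the save-before-unregister pattern, with an illustrative
context structure (not the actual hid-input code):

  #include <linux/power_supply.h>
  #include <linux/slab.h>

  struct battery_ctx {
          struct power_supply *battery;
          const char *battery_str;        /* driver-owned, used by the battery */
  };

  static void battery_teardown(struct battery_ctx *ctx)
  {
          const char *str = ctx->battery_str;     /* save the pointer first */

          power_supply_unregister(ctx->battery);  /* core frees its data here */
          kfree(str);                             /* still safe: we kept a copy */
  }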

Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Reported-by: H.J. Lu <hjl.tools@gmail.com>
Fixes: 297d716f62 ("power_supply: Change ownership from driver to core")
Cc: <stable@vger.kernel.org>
Reviewed-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.com>
2015-08-04 15:45:44 +02:00
Peter Zijlstra
fed66e2cdd perf: Fix fasync handling on inherited events
Vince reported that the fasync signal stuff doesn't work properly for
inherited events. So fix that.

Installing fasync allocates memory and sets filp->f_flags |= FASYNC,
which upon the demise of the file descriptor ensures the allocation is
freed and state is updated.

Now for perf, we can have the events stick around for a while after the
original FD is dead because of references from child events. So we
cannot copy the fasync pointer around. We can however consistently use
the parent's fasync, as that will be updated.

Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Arnaldo Carvalho deMelo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1434011521.1495.71.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-04 09:57:52 +02:00
Ross Lagerwall
2475b22526 xen-netback: Allocate fraglist early to avoid complex rollback
Determine if a fraglist is needed in the tx path, and allocate it if
necessary before setting up the copy and map operations.
Otherwise, undoing the copy and map operations is tricky.

This fixes a use-after-free: if allocating the fraglist failed, the copy
and map operations that had been set up were still executed, writing
over the data area of a freed skb.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-03 22:23:03 -07:00
Eric Dumazet
10e2eb878f udp: fix dst races with multicast early demux
Multicast dst are not cached. They carry DST_NOCACHE.

As mentioned in commit f886497212 ("ipv4: fix dst race in
sk_dst_get()"), these dst need special care before caching them
into a socket.

Caching them is allowed only if their refcnt was not 0, i.e. we
must use atomic_inc_not_zero().

Also, we must use READ_ONCE() to fetch sk->sk_rx_dst, as mentioned
in commit d0c294c53a ("tcp: prevent fetching dst twice in early demux
code")

Fixes: 421b3885bf ("udp: ipv4: Add udp early demux")
Tested-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Gregory Hoggarth <Gregory.Hoggarth@alliedtelesis.co.nz>
Reported-by: Alex Gartrell <agartrell@fb.com>
Cc: Michal Kubeček <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-03 22:16:50 -07:00
Jia-Ju Bai
2fc09962e2 3c59x: Fix resource leaks in vortex_open
When vortex_up fails, the skb buffers allocated by __netdev_alloc_skb
in vortex_open are not released, which may cause resource leaks.
This bug has been reported before.
This patch modifies the error handling code to fix it.

Signed-off-by: Jia-Ju Bai <baijiaju1990@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-03 15:21:33 -07:00
Dan Carpenter
468b732b6f rds: fix an integer overflow test in rds_info_getsockopt()
"len" is a signed integer.  We check that len is not negative, so it
goes from zero to INT_MAX.  PAGE_SIZE is unsigned long so the comparison
is type promoted to unsigned long.  ULONG_MAX - 4095 is higher than
INT_MAX, so the condition can never be true.

I don't know if this is harmful but it seems safe to limit "len" to
INT_MAX - 4095.
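
A small self-contained C example of the promotion pitfall (PAGE_SIZE is
modelled as a plain unsigned long here):

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          int len = 100;
          unsigned long page_size = 4096;         /* stands in for PAGE_SIZE */

          /* len is promoted to unsigned long, so this can only be true for
           * values above ULONG_MAX - 4095, which no non-negative int reaches. */
          if (len > ULONG_MAX - (page_size - 1))
                  puts("never printed");

          /* A bound that fits in an int actually rejects oversized requests. */
          if (len < 0 || len > INT_MAX - 4095)
                  puts("rejected");
          else
                  puts("accepted");
          return 0;
  }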

Fixes: a8c879a7ee ('RDS: Info and stats')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-03 15:20:16 -07:00
WANG Cong
636dba8e12 act_mirred: avoid calling tcf_hash_release() when binding
When we share an action within a filter, the bind refcnt
should increase, therefore we should not call tcf_hash_release().

Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Cong Wang <cwang@twopensource.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-03 14:13:28 -07:00
Glenn Griffin
3576fd794b openvswitch: Fix L4 checksum handling when dealing with IP fragments
openvswitch modifies the L4 checksum of a packet when modifying
the IP address. When an IP packet is fragmented, only the first
fragment contains an L4 header and checksum. Prior to this change
openvswitch would modify all fragments, modifying application data
in non-first fragments, causing checksum failures in the
reassembled packet.
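
A hedged sketch of the idea (the checksum helper below is a stand-in stub,
not the actual openvswitch code):

  #include <linux/skbuff.h>
  #include <net/ip.h>

  static void l4_csum_update_stub(struct sk_buff *skb, __be32 from, __be32 to)
  {
          /* the real code adjusts the TCP/UDP checksum here */
  }

  static void rewrite_ipv4_saddr(struct sk_buff *skb, struct iphdr *nh,
                                 __be32 new_addr)
  {
          /* Only the first fragment carries an L4 header to fix up. */
          if (!ip_is_fragment(nh))
                  l4_csum_update_stub(skb, nh->saddr, new_addr);

          nh->saddr = new_addr;
  }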

Signed-off-by: Glenn Griffin <ggriffin.kernel@gmail.com>
Acked-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-03 14:03:08 -07:00
Larry Finger
741e3b9902 rtlwifi: rtl8723be: Add module parameter for MSI interrupts
The driver code allows for the disabling of MSI interrupts; however the
module_param line was missed and the option fails to show up in modinfo.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org> [3.15+]
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2015-08-03 11:26:24 +03:00
Eric Dumazet
3d0e0af406 fq_codel: explicitly reset flows in ->reset()
Alex reported the following crash when using fq_codel
with htb:

  crash> bt
  PID: 630839  TASK: ffff8823c990d280  CPU: 14  COMMAND: "tc"
   [... snip ...]
   #8 [ffff8820ceec17a0] page_fault at ffffffff8160a8c2
      [exception RIP: htb_qlen_notify+24]
      RIP: ffffffffa0841718  RSP: ffff8820ceec1858  RFLAGS: 00010282
      RAX: 0000000000000000  RBX: 0000000000000000  RCX: ffff88241747b400
      RDX: ffff88241747b408  RSI: 0000000000000000  RDI: ffff8811fb27d000
      RBP: ffff8820ceec1868   R8: ffff88120cdeff24   R9: ffff88120cdeff30
      R10: 0000000000000bd4  R11: ffffffffa0840919  R12: ffffffffa0843340
      R13: 0000000000000000  R14: 0000000000000001  R15: ffff8808dae5c2e8
      ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
   #9 [...] qdisc_tree_decrease_qlen at ffffffff81565375
  #10 [...] fq_codel_dequeue at ffffffffa084e0a0 [sch_fq_codel]
  #11 [...] fq_codel_reset at ffffffffa084e2f8 [sch_fq_codel]
  #12 [...] qdisc_destroy at ffffffff81560d2d
  #13 [...] htb_destroy_class at ffffffffa08408f8 [sch_htb]
  #14 [...] htb_put at ffffffffa084095c [sch_htb]
  #15 [...] tc_ctl_tclass at ffffffff815645a3
  #16 [...] rtnetlink_rcv_msg at ffffffff81552cb0
  [... snip ...]

As Jamal pointed out, there is actually no need to call dequeue
to purge the queued skb's in reset, data structures can be just
reset explicitly. Therefore, we reset everything except configs
and stats, so that we have a fresh start after device flipping.

Fixes: 4b549a2ef4 ("fq_codel: Fair Queue Codel AQM")
Reported-by: Alex Gartrell <agartrell@fb.com>
Cc: Alex Gartrell <agartrell@fb.com>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
[xiyou.wangcong@gmail.com: added codel_vars_init() and qdisc_qstats_backlog_dec()]
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-02 17:20:01 -07:00
Ido Schimmel
1ebd47efa4 rocker: free netdevice during netdevice removal
When removing a port's netdevice in 'rocker_remove_ports', we should
also free the allocated 'net_device' structure. Do that by calling
'free_netdev' after unregistering it.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Fixes: 4b8ac9660a ("rocker: introduce rocker switch driver")
Acked-by: Scott Feldman <sfeldma@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-02 17:19:17 -07:00
Nathan Lynch
3473f26592 ARM: 8405/1: VDSO: fix regression with toolchains lacking ld.bfd executable
The Sourcery CodeBench Lite 2014.05 toolchain (gcc 4.8.3, binutils
2.24.51) has a GCC which implements -fuse-ld, and it doesn't include
the gold linker, but it lacks an ld.bfd executable in its
installation.  This means that passing -fuse-ld=bfd fails with:

      VDSO    arch/arm/vdso/vdso.so.raw
    collect2: fatal error: cannot find 'ld'

Arguably this is a deficiency in the toolchain, but I suspect it's
commonly used enough that it's worth accommodating: just use
cc-ldoption (to cause a link attempt) instead of cc-option to test
whether we can use -fuse-ld.  So -fuse-ld=bfd won't be used with this
toolchain, but the build will rightly succeed, just as it does for
toolchains which don't implement -fuse-ld (and don't use gold as the
default linker).

Note: this will change the failure mode for a corner case I was trying
to handle in d2b30cd4b7, where the toolchain defaults to the gold
linker and the BFD linker is not found in PATH, from:

      VDSO    arch/arm/vdso/vdso.so.raw
    collect2: fatal error: cannot find 'ld'

i.e. the BFD linker is not found, to:

      OBJCOPY arch/arm/vdso/vdso.so
    BFD: arch/arm/vdso/vdso.so: Not enough room for program headers, try
    linking with -N

that is, we fail to prevent gold from being used as the linker, and it
produces an object that objcopy can't digest.

Reported-by: Baruch Siach <baruch@tkos.co.il>
Tested-by: Baruch Siach <baruch@tkos.co.il>
Tested-by: Raphaël Poggi <poggi.raph@gmail.com>
Fixes: d2b30cd4b7 ("ARM: 8384/1: VDSO: force use of BFD linker")
Cc: stable@vger.kernel.org
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-07-31 18:54:45 +01:00
Ingo Molnar
5542b2aa9e Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

  - Fix 'perf stat' transaction length metrics. (Andi Kleen)

  - Fix test build error when bindir contains double slash. (Pawel Moll)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-07-31 09:56:48 +02:00
Luis Felipe Dominguez Vega
7c62940165 rtlwifi: Fix NULL dereference when PCI driver used as an AP
In commit 33511b157b ("rtlwifi: add support to
send beacon frame"), the mechanism for sending beacons was established. That
patch works correctly for rtl8192cu, but there is a possibility of getting
the following warnings in the PCI drivers:

WARNING: CPU: 1 PID: 2439 at net/mac80211/driver-ops.h:12
ieee80211_bss_info_change_notify+0x179/0x1d0 [mac80211]()
wlp5s0:  Failed check-sdata-in-driver check, flags: 0x0

The warning is followed by a NULL pointer dereference as follows:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000006
IP: [<ffffffffc073998e>] rtl_get_tcb_desc+0x5e/0x760 [rtlwifi]

This problem was reported at http://thread.gmane.org/gmane.linux.kernel.wireless.general/138645,
but no solution was found at that time.

The problem was also reported at https://bugzilla.kernel.org/show_bug.cgi?id=9744
and this solution was developed and tested there.

The USB driver works with a NULL final argument in the adapter_tx() callback;
however, the PCI drivers need a struct rtl_tcb_desc in that position.

Fixes: 33511b157b ("rtlwifi: add support to send beacon frame.")
Signed-off-by: Luis Felipe Dominguez Vega <lfdominguez@nauta.cu>
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org> [3.19+]
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2015-07-31 09:25:35 +03:00
Hauke Mehrtens
098697dbad b43: fix extpa_gain check for 2GHz
On the 2GHz and on the 5GHz band only the extpa_gain setting from
the 5GHz band was checked. This patch makes it check the property from
the correct band.

Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2015-07-31 09:24:11 +03:00
Mike Looijmans
5d5cd85ff4 rsi: Fix failure to load firmware after memory leak fix and fix the leak
Fixes commit eae79b4f3e ("rsi: fix memory leak in rsi_load_ta_instructions()")
which stopped the driver from functioning.

Firmware data has been allocated using vmalloc(), resulting in memory
that cannot be used for DMA. Hence the firmware was first copied to a
buffer allocated with kmalloc() in the original code. This patch reverts
the commit and only calls "kfree()" to release the buffer after sending
the data. This fixes the memory leak without breaking the driver.

Add a comment to the kmemdup() calls to explain why this is done, and abort
if memory allocation fails.
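
A minimal sketch of the kmemdup() pattern the commit describes (the dma_send
callback is illustrative):

  #include <linux/slab.h>
  #include <linux/types.h>

  static int send_fw_dma_safe(const u8 *fw_data, size_t len,
                              int (*dma_send)(const u8 *buf, size_t len))
  {
          u8 *buf = kmemdup(fw_data, len, GFP_KERNEL);    /* DMA-able copy */
          int ret;

          if (!buf)
                  return -ENOMEM;         /* abort if the allocation fails */

          ret = dma_send(buf, len);
          kfree(buf);                     /* always free the copy: no leak */
          return ret;
  }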

Tested on a Topic Miami-Florida board which contains the rsi SDIO chip.

Also added the same kfree() call to the USB glue driver. This was not
tested on actual hardware though, as I only have the SDIO version.

Fixes: eae79b4f3e ("rsi: fix memory leak in rsi_load_ta_instructions()")
Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl>
Cc: stable@vger.kernel.org
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
2015-07-31 09:22:44 +03:00
Kalle Valo
f7c0af8247 Merge tag 'iwlwifi-for-kalle-2015-07-30' of https://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-fixes
* a fix for the stuck TFD queue mechanism - it was producing
  noisy false alarms.
* a fix for the NIC prepare flow that prevented the driver
  from being able to access the device on certain systems.
* a fix for the scan prority handling which allows the
  regular scan to run even if a scheduled scan is already
  running.
2015-07-31 09:20:12 +03:00
Vladimir Zapolskiy
3e9f798784 ARM: EXYNOS: fix double of_node_put() on error path
The change removes the second of_node_put() call that runs when the
for_each_compatible_node() loop body is not terminated early. This
prevents the object refcounter from going below zero in OF_DYNAMIC
builds.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
2015-07-31 10:12:17 +09:00
Vladimir Zapolskiy
27bbd23fe8 ARM: EXYNOS: Fix potential kfree() of ro memory
The change fixes a bug introduced by 2be2a3ff42: memory allocated
by kstrdup_const() must always be deallocated with kfree_const(),
otherwise there is a risk of kfree'ing ro memory in the power domain
error exit path.

Signed-off-by: Vladimir Zapolskiy <vz@mleia.com>
Cc: <stable@vger.kernel.org>
Fixes: 2be2a3ff42 ("ARM: EXYNOS: register power domain driver from core_initcall")
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
2015-07-31 10:11:25 +09:00
Emmanuel Grumbach
aecdc63d87 iwlwifi: pcie: fix stuck queue detection for sleeping clients
The stuck queue detection mechanism makes it possible to detect
queues that are stuck. For sleeping clients, a queue may rightfully
be stuck: if a poor client implementation stays asleep for
more than 10s, then we don't want to trigger recovery flows
because of that client.
In order to cope with this, I added a mechanism that
monitors the state of the client: when a client goes to
sleep, the timer of its queues is frozen. When it wakes up,
the timer is reset to the right value so that if a client
was awake for more than 10s and the queues are stuck, only
then, the recovery flow will kick in.
This is valid only on non-shared queues: A-MPDU queues.

There was a bug in case we Tx to a sleeping client that has
an empty A-MPDU queue: the timer was armed to now + 10s.
This is bad, but pretty harmless.
The problem is that when the client wakes up, the timer is
modified to be now + remainder. But remainder is 0 since the
queue was empty when that client went to sleep...

Fix this by checking the state of the client before playing
with the timer when we add a packet to an empty queue.

Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2015-07-30 21:38:14 +03:00
Dan Carpenter
1a727c6361 netfilter: nf_conntrack: checking for IS_ERR() instead of NULL
We recently changed this from nf_conntrack_alloc() to nf_ct_tmpl_alloc()
so the error handling needs to be changed to check for NULL instead of
IS_ERR().
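
A minimal sketch of the mismatch (generic allocator, not the actual conntrack
code): the check has to match the allocator's failure convention.

  #include <linux/err.h>
  #include <linux/slab.h>

  static void *tmpl_alloc_checked(size_t size)
  {
          void *tmpl = kzalloc(size, GFP_KERNEL); /* returns NULL on failure */

          if (!tmpl)                      /* right check for kzalloc() */
                  return NULL;

          /* IS_ERR(tmpl) would never fire here: it only matches ERR_PTR()
           * encoded values, which kzalloc() never returns. */
          return tmpl;
  }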

Fixes: 0838aa7fcf ('netfilter: fix netns dependencies with conntrack templates')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-07-30 14:04:19 +02:00
Pablo Neira Ayuso
f0ad462189 netfilter: nf_conntrack: silence warning on falling back to vmalloc()
Since 88eab472ec ("netfilter: conntrack: adjust nf_conntrack_buckets default
value"), the hashtable can easily hit this warning. We got reports from users
that are getting this message in a quite spamming fashion, so better silence
this.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Acked-by: Florian Westphal <fw@strlen.de>
2015-07-30 12:32:55 +02:00
Guenter Roeck
8ef9724bf9 regmap: regcache-rbtree: Clean new present bits on present bitmap resize
When inserting a new register into a block, the present bit map size is
increased using krealloc. krealloc does not clear the additionally
allocated memory, leaving it filled with random values. The result is that
some registers are considered cached even though this is not the case.

Fix the problem by clearing the additionally allocated memory. Also, if
the bitmap size does not increase, do not reallocate the bitmap at all
to reduce overhead.
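
A minimal sketch of the fix's two points, assuming an illustrative helper (not
the actual regcache-rbtree code): skip the reallocation when the size does not
grow, and zero only the bytes krealloc() added.

  #include <linux/slab.h>
  #include <linux/string.h>

  static int grow_present_bitmap(unsigned long **bitmap, size_t old_bytes,
                                 size_t new_bytes)
  {
          unsigned long *tmp;

          if (new_bytes <= old_bytes)
                  return 0;               /* no growth, keep the old bitmap */

          tmp = krealloc(*bitmap, new_bytes, GFP_KERNEL);
          if (!tmp)
                  return -ENOMEM;

          /* krealloc() does not clear the added tail, so do it by hand. */
          memset((char *)tmp + old_bytes, 0, new_bytes - old_bytes);
          *bitmap = tmp;
          return 0;
  }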

Fixes: 3f4ff561bc ("regmap: rbtree: Make cache_present bitmap per node")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
2015-07-29 15:10:13 +01:00
Dan Carpenter
4a8e70f5d0 HID: uclogic: fix limit in uclogic_tablet_enable()
The limit should be ARRAY_SIZE(params) (5 elements) here instead of
sizeof(params) (20 bytes).
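
A small self-contained example of the difference between the two limits:

  #include <stdio.h>

  #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

  int main(void)
  {
          int params[5];

          /* sizeof() is in bytes (20 here), ARRAY_SIZE() in elements (5);
           * looping up to sizeof() would read past the end of the array. */
          printf("bytes: %zu, elements: %zu\n",
                 sizeof(params), ARRAY_SIZE(params));
          return 0;
  }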

Fixes: 08177f40bd ('HID: uclogic: merge hid-huion driver in hid-uclogic')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Nikolai Kondrashov <spbnick@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.com>
2015-07-29 14:05:15 +02:00
Pawel Moll
0927beeca5 perf tools: Fix test build error when bindir contains double slash
When building with a prefix ending with a slash, for example:

	$ make prefix=/usr/local/

one of the perf tests fails to compile due to the BUILD_STR macro mishandling
the bindir_SQ string, which contains two slashes:

	-DBINDIR="BUILD_STR(/usr/local//bin)"

with the following error:

	  CC       tests/attr.o
	tests/attr.c: In function ‘test__attr’:
	tests/attr.c:168:50: error: expected ‘)’ before ‘;’ token
	  snprintf(path_perf, PATH_MAX, "%s/perf", BINDIR);
                                                  ^
	tests/attr.c:176:1: error: expected ‘;’ before ‘}’ token
	 }
	 ^
	tests/attr.c:176:1: error: control reaches end of non-void function [-Werror=return-type]
	 }
	 ^
	cc1: all warnings being treated as errors

This patch works around the problem by "cleaning" the bindir string
using make's abspath function.

Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1438092613-21014-1-git-send-email-pawel.moll@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-07-28 13:03:49 -03:00
Andi Kleen
5497628576 perf stat: Fix transaction length metrics
The transaction length metrics in perf stat -T broke recently.

It would not match the metric correctly and always print K/sec.

This was caused by an incorrect update of the cycles_in_tx statistics.

Update the correct variable.

Also the check for zero division was reversed, which resulted in K/sec
being printed when there were no transactions. Fix that up as well.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1438039491-22091-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2015-07-28 12:05:04 -03:00
Avraham Stern
dc9f69b907 iwlwifi: mvm: Fix regular scan priority
The code checks the total number of iterations to differentiate
between regular scan and scheduled scan. However, regular scan has
a total of one iteration, not zero. As a result, regular scan will
have lower priority than it should have, and in case scheduled
scan is already running when regular scan is requested, regular scan
will be delayed until scheduled scan is aborted.
Fix that by treating a total iteration count of one as the
identifier for a regular scan.

Fixes: 133c8259f8 ("iwlwifi: mvm: rename generic_scan_cmd functions to dwell")
Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2015-07-28 11:36:02 +03:00
Emmanuel Grumbach
c9fdec9f39 iwlwifi: pcie: fix prepare card flow
When the card is not owned by the PCIe bus, we need to
acquire ownership first. This flow is implemented in
iwl_pcie_prepare_card_hw. Because of a hardware bug, we
need to disable link power management before we can
request ownership; otherwise the other user of the device
won't get notified that we are requesting the device, which
will prevent us from acquiring ownership.

Same holds for the down flow where we need to make sure
that any other potential user is notified that the driver
is going down.

CC: <stable@vger.kernel.org> [4.1]
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
2015-07-28 11:21:31 +03:00
Jens Axboe
e162b219ae Merge branch 'stable/for-jens-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-linus
Konrad writes:

"There are three bugs that have been found in the xen-blkfront (and
backend). Two of them have the stable tree CC-ed. They have been found
where a guest is migrating to a host that is missing
'feature-persistent' support (from one that has it enabled). We end up
hitting a BUG() in the driver code."
2015-07-27 11:58:41 -06:00
Peter Zijlstra
00a2916f7f perf: Fix running time accounting
A recent fix to the shadow timestamp inadvertently broke the running time
accounting.

We must not update the running timestamp if we fail to schedule the
event; the event will not have run. This can (and did) result in
negative total runtime because the stopped timestamp was before the
running timestamp (we 'started' but never stopped the event -- because
it never really started we didn't have to stop it either).

Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 72f669c008 ("perf: Update shadow timestamp before add event")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org # 4.1
Cc: Shaohua Li <shli@fb.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-07-27 13:52:19 +02:00
Bob Liu
53bc7dc004 xen-blkback: replace work_pending with work_busy in purge_persistent_gnt()
The BUG_ON() in purge_persistent_gnt() will be triggered when the previous
purge work hasn't finished.

There is a work_pending() check before this BUG_ON, but it doesn't account for
work that is still currently running.
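
A minimal sketch of the difference (illustrative helper, not the actual
xen-blkback code): work_pending() only reports a queued work item, while
work_busy() also reports one whose handler is still running.

  #include <linux/types.h>
  #include <linux/workqueue.h>

  static bool purge_may_start(struct work_struct *purge_work)
  {
          /* Old check: misses a purge that was already dequeued but whose
           * handler has not returned yet. */
          if (work_pending(purge_work))
                  return false;

          /* New check: covers both the pending and the running case. */
          return !(work_busy(purge_work) &
                   (WORK_BUSY_PENDING | WORK_BUSY_RUNNING));
  }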

CC: stable@vger.kernel.org
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2015-07-24 09:09:49 -04:00
Bob Liu
7b0767502b xen-blkfront: don't add indirect pages to list when !feature_persistent
We should consider info->feature_persistent when adding an indirect page to
the list info->indirect_pages, else the BUG_ON() in blkif_free() would be
triggered.
When we are using persistent grants the indirect_pages list
should always be empty because blkfront has pre-allocated enough
persistent pages to fill all requests on the ring.

CC: stable@vger.kernel.org
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2015-07-24 09:09:16 -04:00
Bob Liu
d50babbe30 xen-blkfront: introduce blkfront_gather_backend_features()
There is a bug when migrating from a !feature-persistent host to a
feature-persistent host, because domU still thinks the new host/backend
doesn't support persistent grants. Dmesg shows lines like:
backed has not unmapped grant: 839
backed has not unmapped grant: 773
backed has not unmapped grant: 773
backed has not unmapped grant: 773
backed has not unmapped grant: 839

The fix is to recheck feature-persistent of new backend in blkif_recover().
See: https://lkml.org/lkml/2015/5/25/469

As Roger suggested, we can split the part of blkfront_connect that checks for
optional features, like persistent grants, indirect descriptors and
flush/barrier features, into a separate function and call it from both
blkfront_connect and blkif_recover.

Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2015-07-24 09:07:10 -04:00
Javier Martinez Canillas
fb9caeedaf mfd: Remove MFD_CROS_EC_SPI depends on OF
The ChromeOS EC SPI transport driver has a dependency on OF because it
uses some OF helpers from the <linux/of.h> header. But there isn't a
need for an explicit dependency since the header has stub functions if
CONFIG_OF is not defined.

Also, MFD_CROS_EC_SPI already depends on MFD_CROS_EC which in turn has
a dependency on OF so in practice can't be selected without CONFIG_OF.

Signed-off-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
2015-07-24 09:05:44 +01:00
Javier Martinez Canillas
d12bbcd3ea platform/chrome: Don't make CHROME_PLATFORMS depends on X86 || ARM
The Chrome platform support depends on X86 || ARM because there are
only Chromebooks using those architectures. But only some drivers
depend on a given architecture, and the ones that do already have
a dependency on their specific Kconfig symbol entries.

An option is to add "|| COMPILE_TEST" to the CHROME_PLATFORMS dependency,
but it is more future-proof to remove the dependency and let the drivers
be built on all architectures where possible, to get more build coverage.

Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
2015-07-24 09:05:43 +01:00
Charles Keepax
72e43164fd mfd: arizona: Fix initialisation of the PM runtime
The PM runtime core by default assumes a chip is suspended when runtime
PM is enabled. Currently the arizona driver enables runtime PM when the
chip is fully active and then disables the DCVDD regulator at the end of
arizona_dev_init. This however has several problems: firstly, if we
reach the end of arizona_dev_init, we did not properly follow all the
procedures for shutting down the chip, and most notably we never marked
the chip as cache only, so any writes occurring between then and the next
PM runtime resume will be lost. Secondly, if we are already resumed when
we reach the end of dev_init, then at best we get unbalanced regulator
enable/disables and at worst we lose DCVDD whilst we need it.

Additionally, since the commit 4f0216409f7c ("mfd: arizona: Add better
support for system suspend"), the PM runtime operations may
disable/enable the IRQ, so the IRQs must now be enabled before we call
any PM operations.

This patch adds a call to pm_runtime_set_active to inform the PM core
that the device is starting up active and moves the PM enabling to
around the IRQ initialisation to avoid any PM callbacks happening until
the IRQs are initialised.
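
A minimal sketch of the ordering this describes, with an illustrative probe
helper (not the actual arizona code):

  #include <linux/pm_runtime.h>

  static int example_irq_init(struct device *dev)
  {
          return 0;                       /* stand-in for the real IRQ setup */
  }

  static int example_dev_init(struct device *dev)
  {
          int ret;

          pm_runtime_set_active(dev);     /* the chip starts out powered up */

          ret = example_irq_init(dev);    /* IRQs ready before any PM callback */
          if (ret)
                  return ret;

          pm_runtime_enable(dev);         /* runtime PM callbacks from here on */
          return 0;
  }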

Cc: stable@vger.kernel.org # v3.5+
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
2015-07-24 09:05:28 +01:00
Charles Keepax
111509294b mfd: arizona: Fix race between runtime suspend and IRQs
The function arizona_irq_thread (the threaded handler for the arizona
IRQs) calls pm_runtime_get_sync at the start to ensure that the chip is
active as we handle the IRQ. If the chip is part way through a runtime
suspend when an IRQ arrives the PM core will wait for the suspend to
complete, before resuming. However, since commit 4f0216409f7c
("mfd: arizona: Add better support for system suspend") the runtime
suspend function may call disable_irq, if the chip is going to fully
power off, which will try to wait for any outstanding IRQs to complete.
This results in deadlock as the IRQ thread is waiting for the PM
operation to complete and the PM thread is waiting for the IRQ to
complete.

To avoid this situation we use disable_irq_nosync, which allows the
suspending thread to finish the suspend without waiting for the IRQ to
complete. This is safe because if an IRQ is being processed it can only
be blocked at the pm_runtime_get_sync at the start of the handler
otherwise it wouldn't be possible to suspend.
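
A minimal sketch of the change (illustrative suspend helper, not the actual
arizona code): the _nosync variant marks the line disabled without waiting for
a running handler, which is what breaks the deadlock described above.

  #include <linux/interrupt.h>

  static int example_runtime_suspend(int irq)
  {
          disable_irq_nosync(irq);        /* do not wait for the IRQ thread */

          /* ...power down the chip, mark the register cache as cache-only... */

          return 0;
  }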

Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
2015-07-24 08:51:52 +01:00
Waiman Long
cba77f03f2 locking/pvqspinlock: Fix kernel panic in locking-selftest
Enabling locking-selftest in a VM guest may cause the following
kernel panic:

  kernel BUG at .../kernel/locking/qspinlock_paravirt.h:137!

This is due to the fact that the pvqspinlock unlock function is
expecting either a _Q_LOCKED_VAL or _Q_SLOW_VAL in the lock
byte. This patch prevents that bug report by ignoring it when
debug_locks_silent is set. Otherwise, a warning will be printed
if it contains an unexpected value.

With this patch applied, the kernel locking-selftest completed
without any noise.

Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1436663959-53092-1-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-07-21 10:18:07 +02:00
156 changed files with 1360 additions and 1183 deletions

View File

@@ -17,6 +17,7 @@ Aleksey Gorelov <aleksey_gorelov@phoenix.com>
Al Viro <viro@ftp.linux.org.uk>
Al Viro <viro@zenIV.linux.org.uk>
Andreas Herrmann <aherrman@de.ibm.com>
Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
Andrew Morton <akpm@linux-foundation.org>
Andrew Vasquez <andrew.vasquez@qlogic.com>
Andy Adamson <andros@citi.umich.edu>

View File

@@ -199,6 +199,7 @@ nodes to be present and contain the properties described below.
"qcom,kpss-acc-v1"
"qcom,kpss-acc-v2"
"rockchip,rk3066-smp"
"ste,dbx500-smp"
- cpu-release-addr
Usage: required for systems that have an "enable-method"

View File

@@ -3587,6 +3587,15 @@ S: Maintained
F: drivers/gpu/drm/rockchip/
F: Documentation/devicetree/bindings/video/rockchip*
DRM DRIVERS FOR STI
M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
M: Vincent Abriou <vincent.abriou@st.com>
L: dri-devel@lists.freedesktop.org
T: git http://git.linaro.org/people/benjamin.gaignard/kernel.git
S: Maintained
F: drivers/gpu/drm/sti
F: Documentation/devicetree/bindings/gpu/st,stih4xx.txt
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org

View File

@@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 2
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Hurr durr I'ma sheep
# *DOCUMENTATION*

View File

@@ -116,7 +116,7 @@
ranges = <0 0x2000 0x2000>;
scm_conf: scm_conf@0 {
compatible = "syscon";
compatible = "syscon", "simple-bus";
reg = <0x0 0x1400>;
#address-cells = <1>;
#size-cells = <1>;

View File

@@ -181,10 +181,10 @@
interrupt-names = "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &intc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &intc GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &intc GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
interrupt-map = <0 0 0 1 &gpc GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 2 &gpc GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 3 &gpc GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
<0 0 0 4 &gpc GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clks IMX6QDL_CLK_PCIE_AXI>,
<&clks IMX6QDL_CLK_LVDS1_GATE>,
<&clks IMX6QDL_CLK_PCIE_REF_125M>;

View File

@@ -131,10 +131,17 @@
<GIC_SPI 376 IRQ_TYPE_EDGE_RISING>;
};
};
mdio: mdio@24200f00 {
compatible = "ti,keystone_mdio", "ti,davinci_mdio";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x24200f00 0x100>;
status = "disabled";
clocks = <&clkcpgmac>;
clock-names = "fck";
bus_freq = <2500000>;
};
/include/ "k2e-netcp.dtsi"
};
};
&mdio {
reg = <0x24200f00 0x100>;
};

View File

@@ -98,6 +98,17 @@
#gpio-cells = <2>;
gpio,syscon-dev = <&devctrl 0x25c>;
};
mdio: mdio@02090300 {
compatible = "ti,keystone_mdio", "ti,davinci_mdio";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x02090300 0x100>;
status = "disabled";
clocks = <&clkcpgmac>;
clock-names = "fck";
bus_freq = <2500000>;
};
/include/ "k2hk-netcp.dtsi"
};
};

View File

@@ -29,7 +29,6 @@
};
soc {
/include/ "k2l-clocks.dtsi"
uart2: serial@02348400 {
@@ -79,6 +78,17 @@
#gpio-cells = <2>;
gpio,syscon-dev = <&devctrl 0x24c>;
};
mdio: mdio@26200f00 {
compatible = "ti,keystone_mdio", "ti,davinci_mdio";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x26200f00 0x100>;
status = "disabled";
clocks = <&clkcpgmac>;
clock-names = "fck";
bus_freq = <2500000>;
};
/include/ "k2l-netcp.dtsi"
};
};
@@ -96,7 +106,3 @@
/* Pin muxed. Enabled and configured by Bootloader */
status = "disabled";
};
&mdio {
reg = <0x26200f00 0x100>;
};

View File

@@ -267,17 +267,6 @@
1 0 0x21000A00 0x00000100>;
};
mdio: mdio@02090300 {
compatible = "ti,keystone_mdio", "ti,davinci_mdio";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x02090300 0x100>;
status = "disabled";
clocks = <&clkpa>;
clock-names = "fck";
bus_freq = <2500000>;
};
kirq0: keystone_irq@26202a0 {
compatible = "ti,keystone-irq";
interrupts = <GIC_SPI 4 IRQ_TYPE_EDGE_RISING>;

View File

@@ -51,7 +51,8 @@
};
scm_conf: scm_conf@270 {
compatible = "syscon";
compatible = "syscon",
"simple-bus";
reg = <0x270 0x240>;
#address-cells = <1>;
#size-cells = <1>;

View File

@@ -191,7 +191,8 @@
};
omap4_padconf_global: omap4_padconf_global@5a0 {
compatible = "syscon";
compatible = "syscon",
"simple-bus";
reg = <0x5a0 0x170>;
#address-cells = <1>;
#size-cells = <1>;

View File

@@ -180,7 +180,8 @@
};
omap5_padconf_global: omap5_padconf_global@5a0 {
compatible = "syscon";
compatible = "syscon",
"simple-bus";
reg = <0x5a0 0xec>;
#address-cells = <1>;
#size-cells = <1>;

View File

@@ -15,6 +15,33 @@
#include "skeleton.dtsi"
/ {
cpus {
#address-cells = <1>;
#size-cells = <0>;
enable-method = "ste,dbx500-smp";
cpu-map {
cluster0 {
core0 {
cpu = <&CPU0>;
};
core1 {
cpu = <&CPU1>;
};
};
};
CPU0: cpu@300 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0x300>;
};
CPU1: cpu@301 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0x301>;
};
};
soc {
#address-cells = <1>;
#size-cells = <1>;
@@ -22,32 +49,6 @@
interrupt-parent = <&intc>;
ranges;
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu-map {
cluster0 {
core0 {
cpu = <&CPU0>;
};
core1 {
cpu = <&CPU1>;
};
};
};
CPU0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0>;
};
CPU1: cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <1>;
};
};
ptm@801ae000 {
compatible = "arm,coresight-etm3x", "arm,primecell";
reg = <0x801ae000 0x1000>;

View File

@@ -61,6 +61,7 @@ work_pending:
movlt scno, #(__NR_restart_syscall - __NR_SYSCALL_BASE)
ldmia sp, {r0 - r6} @ have to reload r0 - r6
b local_restart @ ... and off we go
ENDPROC(ret_fast_syscall)
/*
* "slow" syscall return path. "why" tells us if this was a real syscall.

View File

@@ -399,6 +399,9 @@ ENTRY(secondary_startup)
sub lr, r4, r5 @ mmu has been enabled
add r3, r7, lr
ldrd r4, [r3, #0] @ get secondary_data.pgdir
ARM_BE8(eor r4, r4, r5) @ Swap r5 and r4 in BE:
ARM_BE8(eor r5, r4, r5) @ it can be done in 3 steps
ARM_BE8(eor r4, r4, r5) @ without using a temp reg.
ldr r8, [r3, #8] @ get secondary_data.swapper_pg_dir
badr lr, __enable_mmu @ return address
mov r13, r12 @ __secondary_switched address

View File

@@ -296,7 +296,6 @@ static bool tk_is_cntvct(const struct timekeeper *tk)
*/
void update_vsyscall(struct timekeeper *tk)
{
struct timespec xtime_coarse;
struct timespec64 *wtm = &tk->wall_to_monotonic;
if (!cntvct_ok) {
@@ -308,10 +307,10 @@ void update_vsyscall(struct timekeeper *tk)
vdso_write_begin(vdso_data);
xtime_coarse = __current_kernel_time();
vdso_data->tk_is_cntvct = tk_is_cntvct(tk);
vdso_data->xtime_coarse_sec = xtime_coarse.tv_sec;
vdso_data->xtime_coarse_nsec = xtime_coarse.tv_nsec;
vdso_data->xtime_coarse_sec = tk->xtime_sec;
vdso_data->xtime_coarse_nsec = (u32)(tk->tkr_mono.xtime_nsec >>
tk->tkr_mono.shift);
vdso_data->wtm_clock_sec = wtm->tv_sec;
vdso_data->wtm_clock_nsec = wtm->tv_nsec;

View File

@@ -146,9 +146,8 @@ static __init int exynos4_pm_init_power_domain(void)
pd->base = of_iomap(np, 0);
if (!pd->base) {
pr_warn("%s: failed to map memory\n", __func__);
kfree(pd->pd.name);
kfree_const(pd->pd.name);
kfree(pd);
of_node_put(np);
continue;
}

View File

@@ -14,7 +14,7 @@ VDSO_LDFLAGS += -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096
VDSO_LDFLAGS += -nostdlib -shared
VDSO_LDFLAGS += $(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
VDSO_LDFLAGS += $(call cc-ldoption, -Wl$(comma)--build-id)
VDSO_LDFLAGS += $(call cc-option, -fuse-ld=bfd)
VDSO_LDFLAGS += $(call cc-ldoption, -fuse-ld=bfd)
obj-$(CONFIG_VDSO) += vdso.o
extra-$(CONFIG_VDSO) += vdso.lds

View File

@@ -199,16 +199,15 @@ up_fail:
*/
void update_vsyscall(struct timekeeper *tk)
{
struct timespec xtime_coarse;
u32 use_syscall = strcmp(tk->tkr_mono.clock->name, "arch_sys_counter");
++vdso_data->tb_seq_count;
smp_wmb();
xtime_coarse = __current_kernel_time();
vdso_data->use_syscall = use_syscall;
vdso_data->xtime_coarse_sec = xtime_coarse.tv_sec;
vdso_data->xtime_coarse_nsec = xtime_coarse.tv_nsec;
vdso_data->xtime_coarse_sec = tk->xtime_sec;
vdso_data->xtime_coarse_nsec = tk->tkr_mono.xtime_nsec >>
tk->tkr_mono.shift;
vdso_data->wtm_clock_sec = tk->wall_to_monotonic.tv_sec;
vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;

View File

@@ -80,7 +80,7 @@ syscall_trace_entry:
SAVE_STATIC
move s0, t2
move a0, sp
daddiu a1, v0, __NR_64_Linux
move a1, v0
jal syscall_trace_enter
bltz v0, 2f # seccomp failed? Skip syscall

View File

@@ -72,7 +72,7 @@ n32_syscall_trace_entry:
SAVE_STATIC
move s0, t2
move a0, sp
daddiu a1, v0, __NR_N32_Linux
move a1, v0
jal syscall_trace_enter
bltz v0, 2f # seccomp failed? Skip syscall

View File

@@ -140,6 +140,7 @@ sysexit_from_sys_call:
*/
andl $~TS_COMPAT, ASM_THREAD_INFO(TI_status, %rsp, SIZEOF_PTREGS)
movl RIP(%rsp), %ecx /* User %eip */
movq RAX(%rsp), %rax
RESTORE_RSI_RDI
xorl %edx, %edx /* Do not leak kernel information */
xorq %r8, %r8
@@ -219,7 +220,6 @@ sysexit_from_sys_call:
1: setbe %al /* 1 if error, 0 if not */
movzbl %al, %edi /* zero-extend that into %edi */
call __audit_syscall_exit
movq RAX(%rsp), %rax /* reload syscall return value */
movl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT), %edi
DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF
@@ -368,6 +368,7 @@ sysretl_from_sys_call:
RESTORE_RSI_RDI_RDX
movl RIP(%rsp), %ecx
movl EFLAGS(%rsp), %r11d
movq RAX(%rsp), %rax
xorq %r10, %r10
xorq %r9, %r9
xorq %r8, %r8

View File

@@ -57,9 +57,9 @@ struct sigcontext {
unsigned long ip;
unsigned long flags;
unsigned short cs;
unsigned short __pad2; /* Was called gs, but was always zero. */
unsigned short __pad1; /* Was called fs, but was always zero. */
unsigned short ss;
unsigned short gs;
unsigned short fs;
unsigned short __pad0;
unsigned long err;
unsigned long trapno;
unsigned long oldmask;

View File

@@ -177,24 +177,9 @@ struct sigcontext {
__u64 rip;
__u64 eflags; /* RFLAGS */
__u16 cs;
/*
* Prior to 2.5.64 ("[PATCH] x86-64 updates for 2.5.64-bk3"),
* Linux saved and restored fs and gs in these slots. This
* was counterproductive, as fsbase and gsbase were never
* saved, so arch_prctl was presumably unreliable.
*
* If these slots are ever needed for any other purpose, there
* is some risk that very old 64-bit binaries could get
* confused. I doubt that many such binaries still work,
* though, since the same patch in 2.5.64 also removed the
* 64-bit set_thread_area syscall, so it appears that there is
* no TLS API that works in both pre- and post-2.5.64 kernels.
*/
__u16 __pad2; /* Was gs. */
__u16 __pad1; /* Was fs. */
__u16 ss;
__u16 gs;
__u16 fs;
__u16 __pad0;
__u64 err;
__u64 trapno;
__u64 oldmask;

View File

@@ -2534,7 +2534,7 @@ static int intel_pmu_cpu_prepare(int cpu)
if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map) {
cpuc->shared_regs = allocate_shared_regs(cpu);
if (!cpuc->shared_regs)
return NOTIFY_BAD;
goto err;
}
if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
@@ -2542,18 +2542,27 @@ static int intel_pmu_cpu_prepare(int cpu)
cpuc->constraint_list = kzalloc(sz, GFP_KERNEL);
if (!cpuc->constraint_list)
return NOTIFY_BAD;
goto err_shared_regs;
cpuc->excl_cntrs = allocate_excl_cntrs(cpu);
if (!cpuc->excl_cntrs) {
kfree(cpuc->constraint_list);
kfree(cpuc->shared_regs);
return NOTIFY_BAD;
}
if (!cpuc->excl_cntrs)
goto err_constraint_list;
cpuc->excl_thread_id = 0;
}
return NOTIFY_OK;
err_constraint_list:
kfree(cpuc->constraint_list);
cpuc->constraint_list = NULL;
err_shared_regs:
kfree(cpuc->shared_regs);
cpuc->shared_regs = NULL;
err:
return NOTIFY_BAD;
}
static void intel_pmu_cpu_starting(int cpu)
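The error paths above are restructured into a single unwind ladder: each allocation that fails jumps to a label that frees everything allocated before it, so nothing leaks and the cleanup lives in one place. A stand-alone sketch of the idiom, with made-up resource names, is below.

```c
#include <stdlib.h>

struct cpu_ctx {
    void *shared_regs;
    void *constraint_list;
    void *excl_cntrs;
};

/* Returns 0 on success, -1 on failure; on failure nothing is leaked. */
static int ctx_prepare(struct cpu_ctx *c)
{
    c->shared_regs = malloc(64);
    if (!c->shared_regs)
        goto err;

    c->constraint_list = malloc(64);
    if (!c->constraint_list)
        goto err_shared_regs;

    c->excl_cntrs = malloc(64);
    if (!c->excl_cntrs)
        goto err_constraint_list;

    return 0;

err_constraint_list:
    free(c->constraint_list);
    c->constraint_list = NULL;
err_shared_regs:
    free(c->shared_regs);
    c->shared_regs = NULL;
err:
    return -1;
}

int main(void)
{
    struct cpu_ctx c = { 0 };

    if (ctx_prepare(&c))
        return 1;

    free(c.excl_cntrs);
    free(c.constraint_list);
    free(c.shared_regs);
    return 0;
}
```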

View File

@@ -1255,7 +1255,7 @@ static inline void cqm_pick_event_reader(int cpu)
cpumask_set_cpu(cpu, &cqm_cpumask);
}
static void intel_cqm_cpu_prepare(unsigned int cpu)
static void intel_cqm_cpu_starting(unsigned int cpu)
{
struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
struct cpuinfo_x86 *c = &cpu_data(cpu);
@@ -1296,13 +1296,11 @@ static int intel_cqm_cpu_notifier(struct notifier_block *nb,
unsigned int cpu = (unsigned long)hcpu;
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_UP_PREPARE:
intel_cqm_cpu_prepare(cpu);
break;
case CPU_DOWN_PREPARE:
intel_cqm_cpu_exit(cpu);
break;
case CPU_STARTING:
intel_cqm_cpu_starting(cpu);
cqm_pick_event_reader(cpu);
break;
}
@@ -1373,7 +1371,7 @@ static int __init intel_cqm_init(void)
goto out;
for_each_online_cpu(i) {
intel_cqm_cpu_prepare(i);
intel_cqm_cpu_starting(i);
cqm_pick_event_reader(i);
}

View File

@@ -93,8 +93,15 @@ int restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc)
COPY(r15);
#endif /* CONFIG_X86_64 */
#ifdef CONFIG_X86_32
COPY_SEG_CPL3(cs);
COPY_SEG_CPL3(ss);
#else /* !CONFIG_X86_32 */
/* Kernel saves and restores only the CS segment register on signals,
* which is the bare minimum needed to allow mixed 32/64-bit code.
* App's signal handler can save/restore other segments if needed. */
COPY_SEG_CPL3(cs);
#endif /* CONFIG_X86_32 */
get_user_ex(tmpflags, &sc->flags);
regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
@@ -154,9 +161,8 @@ int setup_sigcontext(struct sigcontext __user *sc, void __user *fpstate,
#else /* !CONFIG_X86_32 */
put_user_ex(regs->flags, &sc->flags);
put_user_ex(regs->cs, &sc->cs);
put_user_ex(0, &sc->__pad2);
put_user_ex(0, &sc->__pad1);
put_user_ex(regs->ss, &sc->ss);
put_user_ex(0, &sc->gs);
put_user_ex(0, &sc->fs);
#endif /* CONFIG_X86_32 */
put_user_ex(fpstate, &sc->fpstate);
@@ -451,19 +457,9 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig,
regs->sp = (unsigned long)frame;
/*
* Set up the CS and SS registers to run signal handlers in
* 64-bit mode, even if the handler happens to be interrupting
* 32-bit or 16-bit code.
*
* SS is subtle. In 64-bit mode, we don't need any particular
* SS descriptor, but we do need SS to be valid. It's possible
* that the old SS is entirely bogus -- this can happen if the
* signal we're trying to deliver is #GP or #SS caused by a bad
* SS value.
*/
/* Set up the CS register to run signal handlers in 64-bit mode,
even if the handler happens to be interrupting 32-bit code. */
regs->cs = __USER_CS;
regs->ss = __USER_DS;
return 0;
}

View File

@@ -28,11 +28,11 @@ unsigned long convert_ip_to_linear(struct task_struct *child, struct pt_regs *re
struct desc_struct *desc;
unsigned long base;
seg &= ~7UL;
seg >>= 3;
mutex_lock(&child->mm->context.lock);
if (unlikely(!child->mm->context.ldt ||
(seg >> 3) >= child->mm->context.ldt->size))
seg >= child->mm->context.ldt->size))
addr = -1L; /* bogus selector, access would fault */
else {
desc = &child->mm->context.ldt->entries[seg];
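The bounds check above is fixed by stripping the low bits of the selector before comparing against the LDT size: an x86 segment selector packs the descriptor-table index in bits 3 and up, a GDT/LDT flag in bit 2, and the requested privilege level in bits 0-1. A small illustrative decoder (not kernel code) follows.

```c
#include <stdio.h>

struct selector_fields {
    unsigned index;  /* descriptor index within the GDT or LDT */
    unsigned ti;     /* table indicator: 0 = GDT, 1 = LDT */
    unsigned rpl;    /* requested privilege level */
};

static struct selector_fields decode_selector(unsigned short sel)
{
    struct selector_fields f;

    f.index = sel >> 3;        /* same shift the fix applies before the bounds check */
    f.ti    = (sel >> 2) & 1;
    f.rpl   = sel & 3;
    return f;
}

int main(void)
{
    /* 0x2b is e.g. the 64-bit user data selector: GDT entry 5, RPL 3. */
    struct selector_fields f = decode_selector(0x2b);

    printf("index=%u ti=%u rpl=%u\n", f.index, f.ti, f.rpl);
    return 0;
}
```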

View File

@@ -2105,7 +2105,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (guest_cpuid_has_tsc_adjust(vcpu)) {
if (!msr_info->host_initiated) {
s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
kvm_x86_ops->adjust_tsc_offset(vcpu, adj, true);
adjust_tsc_offset_guest(vcpu, adj);
}
vcpu->arch.ia32_tsc_adjust_msr = data;
}
@@ -6327,6 +6327,7 @@ static void process_smi_save_state_64(struct kvm_vcpu *vcpu, char *buf)
static void process_smi(struct kvm_vcpu *vcpu)
{
struct kvm_segment cs, ds;
struct desc_ptr dt;
char buf[512];
u32 cr0;
@@ -6359,6 +6360,10 @@ static void process_smi(struct kvm_vcpu *vcpu)
kvm_x86_ops->set_cr4(vcpu, 0);
/* Undocumented: IDT limit is set to zero on entry to SMM. */
dt.address = dt.size = 0;
kvm_x86_ops->set_idt(vcpu, &dt);
__kvm_set_dr(vcpu, 7, DR7_FIXED_1);
cs.selector = (vcpu->arch.smbase >> 4) & 0xffff;

View File

@@ -29,7 +29,6 @@
#include <asm/uaccess.h>
#include <asm/traps.h>
#include <asm/desc.h>
#include <asm/user.h>
#include <asm/fpu/internal.h>
@@ -181,7 +180,7 @@ void math_emulate(struct math_emu_info *info)
math_abort(FPU_info, SIGILL);
}
code_descriptor = LDT_DESCRIPTOR(FPU_CS);
code_descriptor = FPU_get_ldt_descriptor(FPU_CS);
if (SEG_D_SIZE(code_descriptor)) {
/* The above test may be wrong, the book is not clear */
/* Segmented 32 bit protected mode */

View File

@@ -16,9 +16,24 @@
#include <linux/kernel.h>
#include <linux/mm.h>
/* s is always from a cpu register, and the cpu does bounds checking
* during register load --> no further bounds checks needed */
#define LDT_DESCRIPTOR(s) (((struct desc_struct *)current->mm->context.ldt)[(s) >> 3])
#include <asm/desc.h>
#include <asm/mmu_context.h>
static inline struct desc_struct FPU_get_ldt_descriptor(unsigned seg)
{
static struct desc_struct zero_desc;
struct desc_struct ret = zero_desc;
#ifdef CONFIG_MODIFY_LDT_SYSCALL
seg >>= 3;
mutex_lock(&current->mm->context.lock);
if (current->mm->context.ldt && seg < current->mm->context.ldt->size)
ret = current->mm->context.ldt->entries[seg];
mutex_unlock(&current->mm->context.lock);
#endif
return ret;
}
#define SEG_D_SIZE(x) ((x).b & (3 << 21))
#define SEG_G_BIT(x) ((x).b & (1 << 23))
#define SEG_GRANULARITY(x) (((x).b & (1 << 23)) ? 4096 : 1)

View File

@@ -20,7 +20,6 @@
#include <linux/stddef.h>
#include <asm/uaccess.h>
#include <asm/desc.h>
#include "fpu_system.h"
#include "exception.h"
@@ -158,7 +157,7 @@ static long pm_address(u_char FPU_modrm, u_char segment,
addr->selector = PM_REG_(segment);
}
descriptor = LDT_DESCRIPTOR(PM_REG_(segment));
descriptor = FPU_get_ldt_descriptor(addr->selector);
base_address = SEG_BASE_ADDR(descriptor);
address = base_address + offset;
limit = base_address

View File

@@ -13,13 +13,13 @@ CFLAGS_mmu.o := $(nostackp)
obj-y := enlighten.o setup.o multicalls.o mmu.o irq.o \
time.o xen-asm.o xen-asm_$(BITS).o \
grant-table.o suspend.o platform-pci-unplug.o \
p2m.o
p2m.o apic.o
obj-$(CONFIG_EVENT_TRACING) += trace.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= spinlock.o
obj-$(CONFIG_XEN_DEBUG_FS) += debugfs.o
obj-$(CONFIG_XEN_DOM0) += apic.o vga.o
obj-$(CONFIG_XEN_DOM0) += vga.o
obj-$(CONFIG_SWIOTLB_XEN) += pci-swiotlb-xen.o
obj-$(CONFIG_XEN_EFI) += efi.o

View File

@@ -101,17 +101,15 @@ struct dom0_vga_console_info;
#ifdef CONFIG_XEN_DOM0
void __init xen_init_vga(const struct dom0_vga_console_info *, size_t size);
void __init xen_init_apic(void);
#else
static inline void __init xen_init_vga(const struct dom0_vga_console_info *info,
size_t size)
{
}
static inline void __init xen_init_apic(void)
{
}
#endif
void __init xen_init_apic(void);
#ifdef CONFIG_XEN_EFI
extern void xen_efi_init(void);
#else

View File

@@ -241,8 +241,8 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
* Description:
* Enables a low level driver to set a hard upper limit,
* max_hw_sectors, on the size of requests. max_hw_sectors is set by
* the device driver based upon the combined capabilities of I/O
* controller and storage device.
* the device driver based upon the capabilities of the I/O
* controller.
*
* max_sectors is a soft limit imposed by the block layer for
* filesystem type requests. This value can be overridden on a

View File

@@ -296,11 +296,20 @@ static int regcache_rbtree_insert_to_block(struct regmap *map,
if (!blk)
return -ENOMEM;
present = krealloc(rbnode->cache_present,
BITS_TO_LONGS(blklen) * sizeof(*present), GFP_KERNEL);
if (!present) {
kfree(blk);
return -ENOMEM;
if (BITS_TO_LONGS(blklen) > BITS_TO_LONGS(rbnode->blklen)) {
present = krealloc(rbnode->cache_present,
BITS_TO_LONGS(blklen) * sizeof(*present),
GFP_KERNEL);
if (!present) {
kfree(blk);
return -ENOMEM;
}
memset(present + BITS_TO_LONGS(rbnode->blklen), 0,
(BITS_TO_LONGS(blklen) - BITS_TO_LONGS(rbnode->blklen))
* sizeof(*present));
} else {
present = rbnode->cache_present;
}
/* insert the register value in the correct place in the rbnode block */
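The change above reallocates the cache_present bitmap only when the block actually grows, and then zeroes just the newly added longs instead of discarding the existing bits. A stand-alone sketch of that grow-and-clear-the-tail pattern, using plain realloc in place of krealloc, is shown below.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>

#define BITS_PER_LONG    (sizeof(long) * CHAR_BIT)
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Grow a bitmap from old_bits to new_bits, zeroing only the new longs.
 * Returns the (possibly moved) buffer, or NULL on allocation failure,
 * in which case the caller still owns the old buffer. */
static unsigned long *bitmap_grow(unsigned long *map,
                                  unsigned old_bits, unsigned new_bits)
{
    size_t old_longs = BITS_TO_LONGS(old_bits);
    size_t new_longs = BITS_TO_LONGS(new_bits);
    unsigned long *tmp;

    if (new_longs <= old_longs)
        return map;     /* nothing to do, like the else branch above */

    tmp = realloc(map, new_longs * sizeof(*tmp));
    if (!tmp)
        return NULL;

    memset(tmp + old_longs, 0, (new_longs - old_longs) * sizeof(*tmp));
    return tmp;
}

int main(void)
{
    unsigned long *map = calloc(BITS_TO_LONGS(32), sizeof(*map));
    unsigned long *grown = bitmap_grow(map, 32, 200);

    if (grown)
        map = grown;
    printf("%s\n", grown ? "grown" : "alloc failed");
    free(map);
    return 0;
}
```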

View File

@@ -369,8 +369,8 @@ static void purge_persistent_gnt(struct xen_blkif *blkif)
return;
}
if (work_pending(&blkif->persistent_purge_work)) {
pr_alert_ratelimited("Scheduled work from previous purge is still pending, cannot purge list\n");
if (work_busy(&blkif->persistent_purge_work)) {
pr_alert_ratelimited("Scheduled work from previous purge is still busy, cannot purge list\n");
return;
}

View File

@@ -179,6 +179,7 @@ static DEFINE_SPINLOCK(minor_lock);
((_segs + SEGS_PER_INDIRECT_FRAME - 1)/SEGS_PER_INDIRECT_FRAME)
static int blkfront_setup_indirect(struct blkfront_info *info);
static int blkfront_gather_backend_features(struct blkfront_info *info);
static int get_id_from_freelist(struct blkfront_info *info)
{
@@ -1128,8 +1129,10 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
* Add the used indirect page back to the list of
* available pages for indirect grefs.
*/
indirect_page = pfn_to_page(s->indirect_grants[i]->pfn);
list_add(&indirect_page->lru, &info->indirect_pages);
if (!info->feature_persistent) {
indirect_page = pfn_to_page(s->indirect_grants[i]->pfn);
list_add(&indirect_page->lru, &info->indirect_pages);
}
s->indirect_grants[i]->gref = GRANT_INVALID_REF;
list_add_tail(&s->indirect_grants[i]->node, &info->grants);
}
@@ -1519,7 +1522,7 @@ static int blkif_recover(struct blkfront_info *info)
info->shadow_free = info->ring.req_prod_pvt;
info->shadow[BLK_RING_SIZE(info)-1].req.u.rw.id = 0x0fffffff;
rc = blkfront_setup_indirect(info);
rc = blkfront_gather_backend_features(info);
if (rc) {
kfree(copy);
return rc;
@@ -1720,20 +1723,13 @@ static void blkfront_setup_discard(struct blkfront_info *info)
static int blkfront_setup_indirect(struct blkfront_info *info)
{
unsigned int indirect_segments, segs;
unsigned int segs;
int err, i;
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-max-indirect-segments", "%u", &indirect_segments,
NULL);
if (err) {
info->max_indirect_segments = 0;
if (info->max_indirect_segments == 0)
segs = BLKIF_MAX_SEGMENTS_PER_REQUEST;
} else {
info->max_indirect_segments = min(indirect_segments,
xen_blkif_max_segments);
else
segs = info->max_indirect_segments;
}
err = fill_grant_buffer(info, (segs + INDIRECT_GREFS(segs)) * BLK_RING_SIZE(info));
if (err)
@@ -1796,6 +1792,68 @@ out_of_memory:
return -ENOMEM;
}
/*
* Gather all backend feature-*
*/
static int blkfront_gather_backend_features(struct blkfront_info *info)
{
int err;
int barrier, flush, discard, persistent;
unsigned int indirect_segments;
info->feature_flush = 0;
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-barrier", "%d", &barrier,
NULL);
/*
* If there's no "feature-barrier" defined, then it means
* we're dealing with a very old backend which writes
* synchronously; nothing to do.
*
* If there are barriers, then we use flush.
*/
if (!err && barrier)
info->feature_flush = REQ_FLUSH | REQ_FUA;
/*
* And if there is "feature-flush-cache" use that above
* barriers.
*/
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-flush-cache", "%d", &flush,
NULL);
if (!err && flush)
info->feature_flush = REQ_FLUSH;
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-discard", "%d", &discard,
NULL);
if (!err && discard)
blkfront_setup_discard(info);
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-persistent", "%u", &persistent,
NULL);
if (err)
info->feature_persistent = 0;
else
info->feature_persistent = persistent;
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-max-indirect-segments", "%u", &indirect_segments,
NULL);
if (err)
info->max_indirect_segments = 0;
else
info->max_indirect_segments = min(indirect_segments,
xen_blkif_max_segments);
return blkfront_setup_indirect(info);
}
/*
* Invoked when the backend is finally 'ready' (and has told produced
* the details about the physical device - #sectors, size, etc).
@@ -1807,7 +1865,6 @@ static void blkfront_connect(struct blkfront_info *info)
unsigned int physical_sector_size;
unsigned int binfo;
int err;
int barrier, flush, discard, persistent;
switch (info->connected) {
case BLKIF_STATE_CONNECTED:
@@ -1864,48 +1921,7 @@ static void blkfront_connect(struct blkfront_info *info)
if (err != 1)
physical_sector_size = sector_size;
info->feature_flush = 0;
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-barrier", "%d", &barrier,
NULL);
/*
* If there's no "feature-barrier" defined, then it means
* we're dealing with a very old backend which writes
* synchronously; nothing to do.
*
* If there are barriers, then we use flush.
*/
if (!err && barrier)
info->feature_flush = REQ_FLUSH | REQ_FUA;
/*
* And if there is "feature-flush-cache" use that above
* barriers.
*/
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-flush-cache", "%d", &flush,
NULL);
if (!err && flush)
info->feature_flush = REQ_FLUSH;
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-discard", "%d", &discard,
NULL);
if (!err && discard)
blkfront_setup_discard(info);
err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
"feature-persistent", "%u", &persistent,
NULL);
if (err)
info->feature_persistent = 0;
else
info->feature_persistent = persistent;
err = blkfront_setup_indirect(info);
err = blkfront_gather_backend_features(info);
if (err) {
xenbus_dev_fatal(info->xbdev, err, "setup_indirect at %s",
info->xbdev->otherend);

View File

@@ -496,10 +496,9 @@ static void zram_meta_free(struct zram_meta *meta, u64 disksize)
kfree(meta);
}
static struct zram_meta *zram_meta_alloc(int device_id, u64 disksize)
static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
{
size_t num_pages;
char pool_name[8];
struct zram_meta *meta = kmalloc(sizeof(*meta), GFP_KERNEL);
if (!meta)
@@ -512,7 +511,6 @@ static struct zram_meta *zram_meta_alloc(int device_id, u64 disksize)
goto out_error;
}
snprintf(pool_name, sizeof(pool_name), "zram%d", device_id);
meta->mem_pool = zs_create_pool(pool_name, GFP_NOIO | __GFP_HIGHMEM);
if (!meta->mem_pool) {
pr_err("Error creating memory pool\n");
@@ -1031,7 +1029,7 @@ static ssize_t disksize_store(struct device *dev,
return -EINVAL;
disksize = PAGE_ALIGN(disksize);
meta = zram_meta_alloc(zram->disk->first_minor, disksize);
meta = zram_meta_alloc(zram->disk->disk_name, disksize);
if (!meta)
return -ENOMEM;

View File

@@ -126,7 +126,7 @@ PARENTS(pxa3xx_ac97_bus) = { "ring_osc_60mhz", "ac97" };
PARENTS(pxa3xx_sbus) = { "ring_osc_60mhz", "system_bus" };
PARENTS(pxa3xx_smemcbus) = { "ring_osc_60mhz", "smemc" };
#define CKEN_AB(bit) ((CKEN_ ## bit > 31) ? &CKENA : &CKENB)
#define CKEN_AB(bit) ((CKEN_ ## bit > 31) ? &CKENB : &CKENA)
#define PXA3XX_CKEN(dev_id, con_id, parents, mult_lp, div_lp, mult_hp, \
div_hp, bit, is_lp, flags) \
PXA_CKEN(dev_id, con_id, bit, parents, mult_lp, div_lp, \

View File

@@ -661,6 +661,9 @@ static void sh_cmt_clocksource_suspend(struct clocksource *cs)
{
struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
if (!ch->cs_enabled)
return;
sh_cmt_stop(ch, FLAG_CLOCKSOURCE);
pm_genpd_syscore_poweroff(&ch->cmt->pdev->dev);
}
@@ -669,6 +672,9 @@ static void sh_cmt_clocksource_resume(struct clocksource *cs)
{
struct sh_cmt_channel *ch = cs_to_sh_cmt(cs);
if (!ch->cs_enabled)
return;
pm_genpd_syscore_poweron(&ch->cmt->pdev->dev);
sh_cmt_start(ch, FLAG_CLOCKSOURCE);
}

View File

@@ -920,7 +920,7 @@ static int ppc4xx_edac_init_csrows(struct mem_ctl_info *mci, u32 mcopt1)
*/
for (row = 0; row < mci->nr_csrows; row++) {
struct csrow_info *csi = &mci->csrows[row];
struct csrow_info *csi = mci->csrows[row];
/*
* Get the configuration settings for this

View File

@@ -374,7 +374,7 @@ static int amdgpu_uvd_cs_msg_decode(uint32_t *msg, unsigned buf_sizes[])
unsigned height_in_mb = ALIGN(height / 16, 2);
unsigned fs_in_mb = width_in_mb * height_in_mb;
unsigned image_size, tmp, min_dpb_size, num_dpb_buffer;
unsigned image_size, tmp, min_dpb_size, num_dpb_buffer, min_ctx_size;
image_size = width * height;
image_size += image_size / 2;
@@ -466,6 +466,8 @@ static int amdgpu_uvd_cs_msg_decode(uint32_t *msg, unsigned buf_sizes[])
num_dpb_buffer = (le32_to_cpu(msg[59]) & 0xff) + 2;
min_dpb_size = image_size * num_dpb_buffer;
min_ctx_size = ((width + 255) / 16) * ((height + 255) / 16)
* 16 * num_dpb_buffer + 52 * 1024;
break;
default:
@@ -486,6 +488,7 @@ static int amdgpu_uvd_cs_msg_decode(uint32_t *msg, unsigned buf_sizes[])
buf_sizes[0x1] = dpb_size;
buf_sizes[0x2] = image_size;
buf_sizes[0x4] = min_ctx_size;
return 0;
}
@@ -628,6 +631,13 @@ static int amdgpu_uvd_cs_pass2(struct amdgpu_uvd_cs_ctx *ctx)
return -EINVAL;
}
} else if (cmd == 0x206) {
if ((end - start) < ctx->buf_sizes[4]) {
DRM_ERROR("buffer (%d) to small (%d / %d)!\n", cmd,
(unsigned)(end - start),
ctx->buf_sizes[4]);
return -EINVAL;
}
} else if ((cmd != 0x100) && (cmd != 0x204)) {
DRM_ERROR("invalid UVD command %X!\n", cmd);
return -EINVAL;
@@ -755,9 +765,10 @@ int amdgpu_uvd_ring_parse_cs(struct amdgpu_cs_parser *parser, uint32_t ib_idx)
struct amdgpu_uvd_cs_ctx ctx = {};
unsigned buf_sizes[] = {
[0x00000000] = 2048,
[0x00000001] = 32 * 1024 * 1024,
[0x00000002] = 2048 * 1152 * 3,
[0x00000001] = 0xFFFFFFFF,
[0x00000002] = 0xFFFFFFFF,
[0x00000003] = 2048,
[0x00000004] = 0xFFFFFFFF,
};
struct amdgpu_ib *ib = &parser->ibs[ib_idx];
int r;

View File

@@ -3135,7 +3135,7 @@ static int gfx_v8_0_cp_compute_resume(struct amdgpu_device *adev)
WREG32(mmCP_MEC_DOORBELL_RANGE_LOWER,
AMDGPU_DOORBELL_KIQ << 2);
WREG32(mmCP_MEC_DOORBELL_RANGE_UPPER,
0x7FFFF << 2);
AMDGPU_DOORBELL_MEC_RING7 << 2);
}
tmp = RREG32(mmCP_HQD_PQ_DOORBELL_CONTROL);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,

View File

@@ -1745,7 +1745,6 @@ static int fimc_probe(struct platform_device *pdev)
spin_lock_init(&ctx->lock);
platform_set_drvdata(pdev, ctx);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
ret = exynos_drm_ippdrv_register(ippdrv);

View File

@@ -593,8 +593,7 @@ static int gsc_src_set_transf(struct device *dev,
gsc_write(cfg, GSC_IN_CON);
ctx->rotation = cfg &
(GSC_IN_ROT_90 | GSC_IN_ROT_270) ? 1 : 0;
ctx->rotation = (cfg & GSC_IN_ROT_90) ? 1 : 0;
*swap = ctx->rotation;
return 0;
@@ -857,8 +856,7 @@ static int gsc_dst_set_transf(struct device *dev,
gsc_write(cfg, GSC_IN_CON);
ctx->rotation = cfg &
(GSC_IN_ROT_90 | GSC_IN_ROT_270) ? 1 : 0;
ctx->rotation = (cfg & GSC_IN_ROT_90) ? 1 : 0;
*swap = ctx->rotation;
return 0;

View File

@@ -1064,6 +1064,7 @@ static int hdmi_get_modes(struct drm_connector *connector)
{
struct hdmi_context *hdata = ctx_from_connector(connector);
struct edid *edid;
int ret;
if (!hdata->ddc_adpt)
return -ENODEV;
@@ -1079,7 +1080,11 @@ static int hdmi_get_modes(struct drm_connector *connector)
drm_mode_connector_update_edid_property(connector, edid);
return drm_add_edid_modes(connector, edid);
ret = drm_add_edid_modes(connector, edid);
kfree(edid);
return ret;
}
static int hdmi_find_phy_conf(struct hdmi_context *hdata, u32 pixel_clock)

View File

@@ -718,6 +718,10 @@ static irqreturn_t mixer_irq_handler(int irq, void *arg)
/* handling VSYNC */
if (val & MXR_INT_STATUS_VSYNC) {
/* vsync interrupt use different bit for read and clear */
val |= MXR_INT_CLEAR_VSYNC;
val &= ~MXR_INT_STATUS_VSYNC;
/* interlace scan need to check shadow register */
if (ctx->interlace) {
base = mixer_reg_read(res, MXR_GRAPHIC_BASE(0));
@@ -743,11 +747,6 @@ static irqreturn_t mixer_irq_handler(int irq, void *arg)
out:
/* clear interrupts */
if (~val & MXR_INT_EN_VSYNC) {
/* vsync interrupt use different bit for read and clear */
val &= ~MXR_INT_EN_VSYNC;
val |= MXR_INT_CLEAR_VSYNC;
}
mixer_reg_write(res, MXR_INT_STATUS, val);
spin_unlock(&res->reg_slock);
@@ -907,8 +906,8 @@ static int mixer_enable_vblank(struct exynos_drm_crtc *crtc)
}
/* enable vsync interrupt */
mixer_reg_writemask(res, MXR_INT_EN, MXR_INT_EN_VSYNC,
MXR_INT_EN_VSYNC);
mixer_reg_writemask(res, MXR_INT_STATUS, ~0, MXR_INT_CLEAR_VSYNC);
mixer_reg_writemask(res, MXR_INT_EN, ~0, MXR_INT_EN_VSYNC);
return 0;
}
@@ -918,7 +917,13 @@ static void mixer_disable_vblank(struct exynos_drm_crtc *crtc)
struct mixer_context *mixer_ctx = crtc->ctx;
struct mixer_resources *res = &mixer_ctx->mixer_res;
if (!mixer_ctx->powered) {
mixer_ctx->int_en &= MXR_INT_EN_VSYNC;
return;
}
/* disable vsync interrupt */
mixer_reg_writemask(res, MXR_INT_STATUS, ~0, MXR_INT_CLEAR_VSYNC);
mixer_reg_writemask(res, MXR_INT_EN, 0, MXR_INT_EN_VSYNC);
}
@@ -1047,6 +1052,8 @@ static void mixer_enable(struct exynos_drm_crtc *crtc)
mixer_reg_writemask(res, MXR_STATUS, ~0, MXR_STATUS_SOFT_RESET);
if (ctx->int_en & MXR_INT_EN_VSYNC)
mixer_reg_writemask(res, MXR_INT_STATUS, ~0, MXR_INT_CLEAR_VSYNC);
mixer_reg_write(res, MXR_INT_EN, ctx->int_en);
mixer_win_reset(ctx);
}
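The mixer change above deals with the fact that the vsync interrupt is reported through one bit (MXR_INT_STATUS_VSYNC) but acknowledged through a different write-one-to-clear bit (MXR_INT_CLEAR_VSYNC), so the handler has to swap the bits before writing the status register back. The sketch below models that with a fake register; the bit positions are invented for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: status is reported in one bit but acknowledged
 * through a different write-1-to-clear bit. */
#define INT_STATUS_VSYNC  (1u << 0)   /* set by "hardware" when vsync fires */
#define INT_CLEAR_VSYNC   (1u << 11)  /* write 1 here to acknowledge it     */

static uint32_t int_status = INT_STATUS_VSYNC; /* pretend hardware latch */

static uint32_t reg_read(void) { return int_status; }

static void reg_write(uint32_t v)
{
    if (v & INT_CLEAR_VSYNC)
        int_status &= ~INT_STATUS_VSYNC;  /* the ack clears the status bit */
}

static void irq_handler(void)
{
    uint32_t val = reg_read();

    if (val & INT_STATUS_VSYNC) {
        /* Acknowledge through the clear bit, not the status bit itself. */
        val &= ~INT_STATUS_VSYNC;
        val |= INT_CLEAR_VSYNC;
        /* ... handle the vblank event here ... */
    }
    reg_write(val);
}

int main(void)
{
    irq_handler();
    printf("status after ack: 0x%x\n", reg_read());
    return 0;
}
```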

View File

@@ -165,31 +165,15 @@ gk104_fifo_context_attach(struct nvkm_object *parent,
return 0;
}
static int
gk104_fifo_chan_kick(struct gk104_fifo_chan *chan)
{
struct nvkm_object *obj = (void *)chan;
struct gk104_fifo_priv *priv = (void *)obj->engine;
nv_wr32(priv, 0x002634, chan->base.chid);
if (!nv_wait(priv, 0x002634, 0x100000, 0x000000)) {
nv_error(priv, "channel %d [%s] kick timeout\n",
chan->base.chid, nvkm_client_name(chan));
return -EBUSY;
}
return 0;
}
static int
gk104_fifo_context_detach(struct nvkm_object *parent, bool suspend,
struct nvkm_object *object)
{
struct nvkm_bar *bar = nvkm_bar(parent);
struct gk104_fifo_priv *priv = (void *)parent->engine;
struct gk104_fifo_base *base = (void *)parent->parent;
struct gk104_fifo_chan *chan = (void *)parent;
u32 addr;
int ret;
switch (nv_engidx(object->engine)) {
case NVDEV_ENGINE_SW : return 0;
@@ -204,9 +188,13 @@ gk104_fifo_context_detach(struct nvkm_object *parent, bool suspend,
return -EINVAL;
}
ret = gk104_fifo_chan_kick(chan);
if (ret && suspend)
return ret;
nv_wr32(priv, 0x002634, chan->base.chid);
if (!nv_wait(priv, 0x002634, 0xffffffff, chan->base.chid)) {
nv_error(priv, "channel %d [%s] kick timeout\n",
chan->base.chid, nvkm_client_name(chan));
if (suspend)
return -EBUSY;
}
if (addr) {
nv_wo32(base, addr + 0x00, 0x00000000);
@@ -331,7 +319,6 @@ gk104_fifo_chan_fini(struct nvkm_object *object, bool suspend)
gk104_fifo_runlist_update(priv, chan->engine);
}
gk104_fifo_chan_kick(chan);
nv_wr32(priv, 0x800000 + (chid * 8), 0x00000000);
return nvkm_fifo_channel_fini(&chan->base, suspend);
}

View File

@@ -2492,7 +2492,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
ret = ttm_eu_reserve_buffers(&ticket, &sw_context->validate_nodes,
true, NULL);
if (unlikely(ret != 0))
goto out_err;
goto out_err_nores;
ret = vmw_validate_buffers(dev_priv, sw_context);
if (unlikely(ret != 0))
@@ -2536,6 +2536,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
vmw_resource_relocations_free(&sw_context->res_relocations);
vmw_fifo_commit(dev_priv, command_size);
mutex_unlock(&dev_priv->binding_mutex);
vmw_query_bo_switch_commit(dev_priv, sw_context);
ret = vmw_execbuf_fence_commands(file_priv, dev_priv,
@@ -2551,7 +2552,6 @@ int vmw_execbuf_process(struct drm_file *file_priv,
DRM_ERROR("Fence submission error. Syncing.\n");
vmw_resource_list_unreserve(&sw_context->resource_list, false);
mutex_unlock(&dev_priv->binding_mutex);
ttm_eu_fence_buffer_objects(&ticket, &sw_context->validate_nodes,
(void *) fence);

View File

@@ -462,12 +462,15 @@ out:
static void hidinput_cleanup_battery(struct hid_device *dev)
{
const struct power_supply_desc *psy_desc;
if (!dev->battery)
return;
psy_desc = dev->battery->desc;
power_supply_unregister(dev->battery);
kfree(dev->battery->desc->name);
kfree(dev->battery->desc);
kfree(psy_desc->name);
kfree(psy_desc);
dev->battery = NULL;
}
#else /* !CONFIG_HID_BATTERY_STRENGTH */
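The cleanup fix above saves the power-supply descriptor pointer before power_supply_unregister() frees the battery object, so the later kfree() calls no longer reach through freed memory. A minimal user-space sketch of the save-before-free pattern, with hypothetical structure names, follows.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct psy_desc { char *name; };
struct battery  { struct psy_desc *desc; };

/* Stand-in for power_supply_unregister(): frees the battery object itself. */
static void battery_unregister(struct battery *b) { free(b); }

static void cleanup_battery(struct battery **bp)
{
    struct battery *b = *bp;
    const struct psy_desc *desc;

    if (!b)
        return;

    desc = b->desc;          /* grab the pointer *before* b is freed */
    battery_unregister(b);   /* after this, b must not be touched    */
    free(desc->name);
    free((void *)desc);
    *bp = NULL;
}

int main(void)
{
    struct battery *b = malloc(sizeof(*b));

    b->desc = malloc(sizeof(*b->desc));
    b->desc->name = strdup("hid-battery");
    cleanup_battery(&b);
    printf("%s\n", b ? "leaked" : "cleaned");
    return 0;
}
```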

View File

@@ -858,7 +858,7 @@ static int uclogic_tablet_enable(struct hid_device *hdev)
for (p = drvdata->rdesc;
p <= drvdata->rdesc + drvdata->rsize - 4;) {
if (p[0] == 0xFE && p[1] == 0xED && p[2] == 0x1D &&
p[3] < sizeof(params)) {
p[3] < ARRAY_SIZE(params)) {
v = params[p[3]];
put_unaligned(cpu_to_le32(v), (s32 *)p);
p += 4;
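The one-character fix above matters because sizeof(params) is the array's size in bytes while ARRAY_SIZE(params) is its element count; bounds-checking an index against the byte size lets out-of-range indices through. The short sketch below shows the difference.

```c
#include <stdio.h>
#include <stdint.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    /* Illustrative table of 32-bit parameters, like the tablet's rdesc table. */
    static const uint32_t params[] = { 0x1234, 0x5678, 0x9abc };

    /* sizeof() counts bytes, ARRAY_SIZE() counts elements.  An index check
     * against sizeof() would wrongly accept indices 3..11 here. */
    printf("sizeof(params)     = %zu bytes\n", sizeof(params));
    printf("ARRAY_SIZE(params) = %zu elements\n", ARRAY_SIZE(params));
    return 0;
}
```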

View File

@@ -1284,6 +1284,39 @@ fail_register_pen_input:
return error;
}
/*
* Not all devices report physical dimensions from HID.
* Compute the default from hardcoded logical dimension
* and resolution before driver overwrites them.
*/
static void wacom_set_default_phy(struct wacom_features *features)
{
if (features->x_resolution) {
features->x_phy = (features->x_max * 100) /
features->x_resolution;
features->y_phy = (features->y_max * 100) /
features->y_resolution;
}
}
static void wacom_calculate_res(struct wacom_features *features)
{
/* set unit to "100th of a mm" for devices not reported by HID */
if (!features->unit) {
features->unit = 0x11;
features->unitExpo = -3;
}
features->x_resolution = wacom_calc_hid_res(features->x_max,
features->x_phy,
features->unit,
features->unitExpo);
features->y_resolution = wacom_calc_hid_res(features->y_max,
features->y_phy,
features->unit,
features->unitExpo);
}
static void wacom_wireless_work(struct work_struct *work)
{
struct wacom *wacom = container_of(work, struct wacom, work);
@@ -1341,6 +1374,8 @@ static void wacom_wireless_work(struct work_struct *work)
if (wacom_wac1->features.type != INTUOSHT &&
wacom_wac1->features.type != BAMBOO_PT)
wacom_wac1->features.device_type |= WACOM_DEVICETYPE_PAD;
wacom_set_default_phy(&wacom_wac1->features);
wacom_calculate_res(&wacom_wac1->features);
snprintf(wacom_wac1->pen_name, WACOM_NAME_MAX, "%s (WL) Pen",
wacom_wac1->features.name);
snprintf(wacom_wac1->pad_name, WACOM_NAME_MAX, "%s (WL) Pad",
@@ -1359,7 +1394,9 @@ static void wacom_wireless_work(struct work_struct *work)
wacom_wac2->features =
*((struct wacom_features *)id->driver_data);
wacom_wac2->features.pktlen = WACOM_PKGLEN_BBTOUCH3;
wacom_set_default_phy(&wacom_wac2->features);
wacom_wac2->features.x_max = wacom_wac2->features.y_max = 4096;
wacom_calculate_res(&wacom_wac2->features);
snprintf(wacom_wac2->touch_name, WACOM_NAME_MAX,
"%s (WL) Finger",wacom_wac2->features.name);
snprintf(wacom_wac2->pad_name, WACOM_NAME_MAX,
@@ -1407,39 +1444,6 @@ void wacom_battery_work(struct work_struct *work)
}
}
/*
* Not all devices report physical dimensions from HID.
* Compute the default from hardcoded logical dimension
* and resolution before driver overwrites them.
*/
static void wacom_set_default_phy(struct wacom_features *features)
{
if (features->x_resolution) {
features->x_phy = (features->x_max * 100) /
features->x_resolution;
features->y_phy = (features->y_max * 100) /
features->y_resolution;
}
}
static void wacom_calculate_res(struct wacom_features *features)
{
/* set unit to "100th of a mm" for devices not reported by HID */
if (!features->unit) {
features->unit = 0x11;
features->unitExpo = -3;
}
features->x_resolution = wacom_calc_hid_res(features->x_max,
features->x_phy,
features->unit,
features->unitExpo);
features->y_resolution = wacom_calc_hid_res(features->y_max,
features->y_phy,
features->unit,
features->unitExpo);
}
static size_t wacom_compute_pktlen(struct hid_device *hdev)
{
struct hid_report_enum *report_enum;

View File

@@ -1471,5 +1471,3 @@ module_exit(mq_exit);
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("mq cache policy");
MODULE_ALIAS("dm-cache-default");

View File

@@ -1789,3 +1789,5 @@ module_exit(smq_exit);
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("smq cache policy");
MODULE_ALIAS("dm-cache-default");

View File

@@ -1293,8 +1293,8 @@ static int __release_metadata_snap(struct dm_pool_metadata *pmd)
return r;
disk_super = dm_block_data(copy);
dm_sm_dec_block(pmd->metadata_sm, le64_to_cpu(disk_super->data_mapping_root));
dm_sm_dec_block(pmd->metadata_sm, le64_to_cpu(disk_super->device_details_root));
dm_btree_del(&pmd->info, le64_to_cpu(disk_super->data_mapping_root));
dm_btree_del(&pmd->details_info, le64_to_cpu(disk_super->device_details_root));
dm_sm_dec_block(pmd->metadata_sm, held_root);
return dm_tm_unlock(pmd->tm, copy);

View File

@@ -138,4 +138,10 @@ int lower_bound(struct btree_node *n, uint64_t key);
extern struct dm_block_validator btree_node_validator;
/*
* Value type for upper levels of multi-level btrees.
*/
extern void init_le64_type(struct dm_transaction_manager *tm,
struct dm_btree_value_type *vt);
#endif /* DM_BTREE_INTERNAL_H */

View File

@@ -544,14 +544,6 @@ static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
return r;
}
static struct dm_btree_value_type le64_type = {
.context = NULL,
.size = sizeof(__le64),
.inc = NULL,
.dec = NULL,
.equal = NULL
};
int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, dm_block_t *new_root)
{
@@ -559,12 +551,14 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
int index = 0, r = 0;
struct shadow_spine spine;
struct btree_node *n;
struct dm_btree_value_type le64_vt;
init_le64_type(info->tm, &le64_vt);
init_shadow_spine(&spine, info);
for (level = 0; level < info->levels; level++) {
r = remove_raw(&spine, info,
(level == last_level ?
&info->value_type : &le64_type),
&info->value_type : &le64_vt),
root, keys[level], (unsigned *)&index);
if (r < 0)
break;
@@ -654,11 +648,13 @@ static int remove_one(struct dm_btree_info *info, dm_block_t root,
int index = 0, r = 0;
struct shadow_spine spine;
struct btree_node *n;
struct dm_btree_value_type le64_vt;
uint64_t k;
init_le64_type(info->tm, &le64_vt);
init_shadow_spine(&spine, info);
for (level = 0; level < last_level; level++) {
r = remove_raw(&spine, info, &le64_type,
r = remove_raw(&spine, info, &le64_vt,
root, keys[level], (unsigned *) &index);
if (r < 0)
goto out;

View File

@@ -249,3 +249,40 @@ int shadow_root(struct shadow_spine *s)
{
return s->root;
}
static void le64_inc(void *context, const void *value_le)
{
struct dm_transaction_manager *tm = context;
__le64 v_le;
memcpy(&v_le, value_le, sizeof(v_le));
dm_tm_inc(tm, le64_to_cpu(v_le));
}
static void le64_dec(void *context, const void *value_le)
{
struct dm_transaction_manager *tm = context;
__le64 v_le;
memcpy(&v_le, value_le, sizeof(v_le));
dm_tm_dec(tm, le64_to_cpu(v_le));
}
static int le64_equal(void *context, const void *value1_le, const void *value2_le)
{
__le64 v1_le, v2_le;
memcpy(&v1_le, value1_le, sizeof(v1_le));
memcpy(&v2_le, value2_le, sizeof(v2_le));
return v1_le == v2_le;
}
void init_le64_type(struct dm_transaction_manager *tm,
struct dm_btree_value_type *vt)
{
vt->context = tm;
vt->size = sizeof(__le64);
vt->inc = le64_inc;
vt->dec = le64_dec;
vt->equal = le64_equal;
}
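The helper above gives the upper levels of multi-level btrees a value type whose inc/dec callbacks adjust transaction-manager reference counts, where the earlier static le64 type left those callbacks NULL and so never took references on the blocks it pointed at. The sketch below shows the general shape of such an ops-table value type, with a toy reference-count array standing in for the transaction manager; all names are illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Illustrative value-type "ops table": a size plus optional callbacks that
 * fire whenever a value is copied into or dropped from a node. */
struct value_type {
    void  *context;
    size_t size;
    void (*inc)(void *context, const void *value);
    void (*dec)(void *context, const void *value);
};

/* Toy reference-count store standing in for the transaction manager. */
static int refcount[16];

static void block_inc(void *context, const void *value)
{
    uint64_t b;

    (void)context;
    memcpy(&b, value, sizeof(b));
    refcount[b]++;
}

static void block_dec(void *context, const void *value)
{
    uint64_t b;

    (void)context;
    memcpy(&b, value, sizeof(b));
    refcount[b]--;
}

static void init_block_ptr_type(struct value_type *vt)
{
    vt->context = NULL;
    vt->size = sizeof(uint64_t);
    vt->inc = block_inc;
    vt->dec = block_dec;
}

int main(void)
{
    struct value_type vt;
    uint64_t child = 7;

    init_block_ptr_type(&vt);
    vt.inc(vt.context, &child);   /* value inserted into an internal node */
    vt.dec(vt.context, &child);   /* value removed again */
    printf("refcount[7] = %d\n", refcount[7]);
    return 0;
}
```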

View File

@@ -667,12 +667,7 @@ static int insert(struct dm_btree_info *info, dm_block_t root,
struct btree_node *n;
struct dm_btree_value_type le64_type;
le64_type.context = NULL;
le64_type.size = sizeof(__le64);
le64_type.inc = NULL;
le64_type.dec = NULL;
le64_type.equal = NULL;
init_le64_type(info->tm, &le64_type);
init_shadow_spine(&spine, info);
for (level = 0; level < (info->levels - 1); level++) {

View File

@@ -2245,6 +2245,9 @@ void omap3_gpmc_save_context(void)
{
int i;
if (!gpmc_base)
return;
gpmc_context.sysconfig = gpmc_read_reg(GPMC_SYSCONFIG);
gpmc_context.irqenable = gpmc_read_reg(GPMC_IRQENABLE);
gpmc_context.timeout_ctrl = gpmc_read_reg(GPMC_TIMEOUT_CONTROL);
@@ -2277,6 +2280,9 @@ void omap3_gpmc_restore_context(void)
{
int i;
if (!gpmc_base)
return;
gpmc_write_reg(GPMC_SYSCONFIG, gpmc_context.sysconfig);
gpmc_write_reg(GPMC_IRQENABLE, gpmc_context.irqenable);
gpmc_write_reg(GPMC_TIMEOUT_CONTROL, gpmc_context.timeout_ctrl);

View File

@@ -115,7 +115,7 @@ config MFD_CROS_EC_I2C
config MFD_CROS_EC_SPI
tristate "ChromeOS Embedded Controller (SPI)"
depends on MFD_CROS_EC && CROS_EC_PROTO && SPI && OF
depends on MFD_CROS_EC && CROS_EC_PROTO && SPI
---help---
If you say Y here, you get support for talking to the ChromeOS EC

View File

@@ -651,7 +651,7 @@ static int arizona_runtime_suspend(struct device *dev)
arizona->has_fully_powered_off = true;
disable_irq(arizona->irq);
disable_irq_nosync(arizona->irq);
arizona_enable_reset(arizona);
regulator_bulk_disable(arizona->num_core_supplies,
arizona->core_supplies);
@@ -1141,10 +1141,6 @@ int arizona_dev_init(struct arizona *arizona)
arizona->pdata.gpio_defaults[i]);
}
pm_runtime_set_autosuspend_delay(arizona->dev, 100);
pm_runtime_use_autosuspend(arizona->dev);
pm_runtime_enable(arizona->dev);
/* Chip default */
if (!arizona->pdata.clk32k_src)
arizona->pdata.clk32k_src = ARIZONA_32KZ_MCLK2;
@@ -1245,11 +1241,17 @@ int arizona_dev_init(struct arizona *arizona)
arizona->pdata.spk_fmt[i]);
}
pm_runtime_set_active(arizona->dev);
pm_runtime_enable(arizona->dev);
/* Set up for interrupts */
ret = arizona_irq_init(arizona);
if (ret != 0)
goto err_reset;
pm_runtime_set_autosuspend_delay(arizona->dev, 100);
pm_runtime_use_autosuspend(arizona->dev);
arizona_request_irq(arizona, ARIZONA_IRQ_CLKGEN_ERR, "CLKGEN error",
arizona_clkgen_err, arizona);
arizona_request_irq(arizona, ARIZONA_IRQ_OVERCLOCKED, "Overclocked",
@@ -1278,10 +1280,6 @@ int arizona_dev_init(struct arizona *arizona)
goto err_irq;
}
#ifdef CONFIG_PM
regulator_disable(arizona->dcvdd);
#endif
return 0;
err_irq:

View File

@@ -786,6 +786,7 @@ static bool bond_should_notify_peers(struct bonding *bond)
slave ? slave->dev->name : "NULL");
if (!slave || !bond->send_peer_notif ||
!netif_carrier_ok(bond->dev) ||
test_bit(__LINK_STATE_LINKWATCH_PENDING, &slave->dev->state))
return false;

View File

@@ -1763,16 +1763,9 @@ vortex_open(struct net_device *dev)
vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
}
if (i != RX_RING_SIZE) {
int j;
pr_emerg("%s: no memory for rx ring\n", dev->name);
for (j = 0; j < i; j++) {
if (vp->rx_skbuff[j]) {
dev_kfree_skb(vp->rx_skbuff[j]);
vp->rx_skbuff[j] = NULL;
}
}
retval = -ENOMEM;
goto err_free_irq;
goto err_free_skb;
}
/* Wrap the ring. */
vp->rx_ring[i-1].next = cpu_to_le32(vp->rx_ring_dma);
@@ -1782,7 +1775,13 @@ vortex_open(struct net_device *dev)
if (!retval)
goto out;
err_free_irq:
err_free_skb:
for (i = 0; i < RX_RING_SIZE; i++) {
if (vp->rx_skbuff[i]) {
dev_kfree_skb(vp->rx_skbuff[i]);
vp->rx_skbuff[i] = NULL;
}
}
free_irq(dev->irq, dev);
err:
if (vortex_debug > 1)

View File

@@ -262,9 +262,9 @@ static u16 bnx2x_free_tx_pkt(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata,
if (likely(skb)) {
(*pkts_compl)++;
(*bytes_compl) += skb->len;
dev_kfree_skb_any(skb);
}
dev_kfree_skb_any(skb);
tx_buf->first_bd = 0;
tx_buf->skb = NULL;

View File

@@ -1718,6 +1718,22 @@ static int bnx2x_nvram_write(struct bnx2x *bp, u32 offset, u8 *data_buf,
offset += sizeof(u32);
data_buf += sizeof(u32);
written_so_far += sizeof(u32);
/* At end of each 4Kb page, release nvram lock to allow MFW
* chance to take it for its own use.
*/
if ((cmd_flags & MCPR_NVM_COMMAND_LAST) &&
(written_so_far < buf_size)) {
DP(BNX2X_MSG_ETHTOOL | BNX2X_MSG_NVM,
"Releasing NVM lock after offset 0x%x\n",
(u32)(offset - sizeof(u32)));
bnx2x_release_nvram_lock(bp);
usleep_range(1000, 2000);
rc = bnx2x_acquire_nvram_lock(bp);
if (rc)
return rc;
}
cmd_flags = 0;
}

View File

@@ -676,6 +676,7 @@ bnad_cq_process(struct bnad *bnad, struct bna_ccb *ccb, int budget)
if (!next_cmpl->valid)
break;
}
packets++;
/* TODO: BNA_CQ_EF_LOCAL ? */
if (unlikely(flags & (BNA_CQ_EF_MAC_ERROR |
@@ -692,7 +693,6 @@ bnad_cq_process(struct bnad *bnad, struct bna_ccb *ccb, int budget)
else
bnad_cq_setup_skb_frags(rcb, skb, sop_ci, nvecs, len);
packets++;
rcb->rxq->rx_packets++;
rcb->rxq->rx_bytes += totlen;
ccb->bytes_per_intr += totlen;

View File

@@ -16,7 +16,6 @@ if NET_VENDOR_CAVIUM
config THUNDER_NIC_PF
tristate "Thunder Physical function driver"
depends on 64BIT
default ARCH_THUNDER
select THUNDER_NIC_BGX
---help---
This driver supports Thunder's NIC physical function.
@@ -29,14 +28,12 @@ config THUNDER_NIC_PF
config THUNDER_NIC_VF
tristate "Thunder Virtual function driver"
depends on 64BIT
default ARCH_THUNDER
---help---
This driver supports Thunder's NIC virtual function
config THUNDER_NIC_BGX
tristate "Thunder MAC interface driver (BGX)"
depends on 64BIT
default ARCH_THUNDER
---help---
This driver supports programming and controlling of MAC
interface from NIC physical function driver.

View File

@@ -2332,10 +2332,11 @@ int t4_setup_debugfs(struct adapter *adap)
EXT_MEM1_SIZE_G(size));
}
} else {
if (i & EXT_MEM_ENABLE_F)
if (i & EXT_MEM_ENABLE_F) {
size = t4_read_reg(adap, MA_EXT_MEMORY_BAR_A);
add_debugfs_mem(adap, "mc", MEM_MC,
EXT_MEM_SIZE_G(size));
}
}
de = debugfs_create_file_size("flash", S_IRUSR, adap->debugfs_root, adap,

View File

@@ -620,6 +620,11 @@ enum be_if_flags {
BE_IF_FLAGS_VLAN_PROMISCUOUS |\
BE_IF_FLAGS_MCAST_PROMISCUOUS)
#define BE_IF_EN_FLAGS (BE_IF_FLAGS_BROADCAST | BE_IF_FLAGS_PASS_L3L4_ERRORS |\
BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_UNTAGGED)
#define BE_IF_ALL_FILT_FLAGS (BE_IF_EN_FLAGS | BE_IF_FLAGS_ALL_PROMISCUOUS)
/* An RX interface is an object with one or more MAC addresses and
* filtering capabilities. */
struct be_cmd_req_if_create {

View File

@@ -273,6 +273,10 @@ static int be_mac_addr_set(struct net_device *netdev, void *p)
if (ether_addr_equal(addr->sa_data, netdev->dev_addr))
return 0;
/* if device is not running, copy MAC to netdev->dev_addr */
if (!netif_running(netdev))
goto done;
/* The PMAC_ADD cmd may fail if the VF doesn't have FILTMGMT
* privilege or if PF did not provision the new MAC address.
* On BE3, this cmd will always fail if the VF doesn't have the
@@ -307,9 +311,9 @@ static int be_mac_addr_set(struct net_device *netdev, void *p)
status = -EPERM;
goto err;
}
memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
dev_info(dev, "MAC address changed to %pM\n", mac);
done:
ether_addr_copy(netdev->dev_addr, addr->sa_data);
dev_info(dev, "MAC address changed to %pM\n", addr->sa_data);
return 0;
err:
dev_warn(dev, "MAC address change to %pM failed\n", addr->sa_data);
@@ -2447,10 +2451,24 @@ static void be_eq_clean(struct be_eq_obj *eqo)
be_eq_notify(eqo->adapter, eqo->q.id, false, true, num, 0);
}
/* Free posted rx buffers that were not used */
static void be_rxq_clean(struct be_rx_obj *rxo)
{
struct be_queue_info *rxq = &rxo->q;
struct be_rx_page_info *page_info;
while (atomic_read(&rxq->used) > 0) {
page_info = get_rx_page_info(rxo);
put_page(page_info->page);
memset(page_info, 0, sizeof(*page_info));
}
BUG_ON(atomic_read(&rxq->used));
rxq->tail = 0;
rxq->head = 0;
}
static void be_rx_cq_clean(struct be_rx_obj *rxo)
{
struct be_rx_page_info *page_info;
struct be_queue_info *rxq = &rxo->q;
struct be_queue_info *rx_cq = &rxo->cq;
struct be_rx_compl_info *rxcp;
struct be_adapter *adapter = rxo->adapter;
@@ -2487,16 +2505,6 @@ static void be_rx_cq_clean(struct be_rx_obj *rxo)
/* After cleanup, leave the CQ in unarmed state */
be_cq_notify(adapter, rx_cq->id, false, 0);
/* Then free posted rx buffers that were not used */
while (atomic_read(&rxq->used) > 0) {
page_info = get_rx_page_info(rxo);
put_page(page_info->page);
memset(page_info, 0, sizeof(*page_info));
}
BUG_ON(atomic_read(&rxq->used));
rxq->tail = 0;
rxq->head = 0;
}
static void be_tx_compl_clean(struct be_adapter *adapter)
@@ -2576,8 +2584,8 @@ static void be_evt_queues_destroy(struct be_adapter *adapter)
be_cmd_q_destroy(adapter, &eqo->q, QTYPE_EQ);
napi_hash_del(&eqo->napi);
netif_napi_del(&eqo->napi);
free_cpumask_var(eqo->affinity_mask);
}
free_cpumask_var(eqo->affinity_mask);
be_queue_free(adapter, &eqo->q);
}
}
@@ -2594,13 +2602,7 @@ static int be_evt_queues_create(struct be_adapter *adapter)
for_all_evt_queues(adapter, eqo, i) {
int numa_node = dev_to_node(&adapter->pdev->dev);
if (!zalloc_cpumask_var(&eqo->affinity_mask, GFP_KERNEL))
return -ENOMEM;
cpumask_set_cpu(cpumask_local_spread(i, numa_node),
eqo->affinity_mask);
netif_napi_add(adapter->netdev, &eqo->napi, be_poll,
BE_NAPI_WEIGHT);
napi_hash_add(&eqo->napi);
aic = &adapter->aic_obj[i];
eqo->adapter = adapter;
eqo->idx = i;
@@ -2616,6 +2618,14 @@ static int be_evt_queues_create(struct be_adapter *adapter)
rc = be_cmd_eq_create(adapter, eqo);
if (rc)
return rc;
if (!zalloc_cpumask_var(&eqo->affinity_mask, GFP_KERNEL))
return -ENOMEM;
cpumask_set_cpu(cpumask_local_spread(i, numa_node),
eqo->affinity_mask);
netif_napi_add(adapter->netdev, &eqo->napi, be_poll,
BE_NAPI_WEIGHT);
napi_hash_add(&eqo->napi);
}
return 0;
}
@@ -3354,13 +3364,54 @@ static void be_rx_qs_destroy(struct be_adapter *adapter)
for_all_rx_queues(adapter, rxo, i) {
q = &rxo->q;
if (q->created) {
/* If RXQs are destroyed while in an "out of buffer"
* state, there is a possibility of an HW stall on
* Lancer. So, post 64 buffers to each queue to relieve
* the "out of buffer" condition.
* Make sure there's space in the RXQ before posting.
*/
if (lancer_chip(adapter)) {
be_rx_cq_clean(rxo);
if (atomic_read(&q->used) == 0)
be_post_rx_frags(rxo, GFP_KERNEL,
MAX_RX_POST);
}
be_cmd_rxq_destroy(adapter, q);
be_rx_cq_clean(rxo);
be_rxq_clean(rxo);
}
be_queue_free(adapter, q);
}
}
static void be_disable_if_filters(struct be_adapter *adapter)
{
be_cmd_pmac_del(adapter, adapter->if_handle,
adapter->pmac_id[0], 0);
be_clear_uc_list(adapter);
/* The IFACE flags are enabled in the open path and cleared
* in the close path. When a VF gets detached from the host and
* assigned to a VM the following happens:
* - VF's IFACE flags get cleared in the detach path
* - IFACE create is issued by the VF in the attach path
* Due to a bug in the BE3/Skyhawk-R FW
* (Lancer FW doesn't have the bug), the IFACE capability flags
* specified along with the IFACE create cmd issued by a VF are not
* honoured by FW. As a consequence, if a *new* driver
* (that enables/disables IFACE flags in open/close)
* is loaded in the host and an *old* driver is * used by a VM/VF,
* the IFACE gets created *without* the needed flags.
* To avoid this, disable RX-filter flags only for Lancer.
*/
if (lancer_chip(adapter)) {
be_cmd_rx_filter(adapter, BE_IF_ALL_FILT_FLAGS, OFF);
adapter->if_flags &= ~BE_IF_ALL_FILT_FLAGS;
}
}
static int be_close(struct net_device *netdev)
{
struct be_adapter *adapter = netdev_priv(netdev);
@@ -3373,6 +3424,8 @@ static int be_close(struct net_device *netdev)
if (!(adapter->flags & BE_FLAGS_SETUP_DONE))
return 0;
be_disable_if_filters(adapter);
be_roce_dev_close(adapter);
if (adapter->flags & BE_FLAGS_NAPI_ENABLED) {
@@ -3392,7 +3445,6 @@ static int be_close(struct net_device *netdev)
be_tx_compl_clean(adapter);
be_rx_qs_destroy(adapter);
be_clear_uc_list(adapter);
for_all_evt_queues(adapter, eqo, i) {
if (msix_enabled(adapter))
@@ -3477,6 +3529,31 @@ static int be_rx_qs_create(struct be_adapter *adapter)
return 0;
}
static int be_enable_if_filters(struct be_adapter *adapter)
{
int status;
status = be_cmd_rx_filter(adapter, BE_IF_EN_FLAGS, ON);
if (status)
return status;
/* For BE3 VFs, the PF programs the initial MAC address */
if (!(BEx_chip(adapter) && be_virtfn(adapter))) {
status = be_cmd_pmac_add(adapter, adapter->netdev->dev_addr,
adapter->if_handle,
&adapter->pmac_id[0], 0);
if (status)
return status;
}
if (adapter->vlans_added)
be_vid_config(adapter);
be_set_rx_mode(adapter->netdev);
return 0;
}
static int be_open(struct net_device *netdev)
{
struct be_adapter *adapter = netdev_priv(netdev);
@@ -3490,6 +3567,10 @@ static int be_open(struct net_device *netdev)
if (status)
goto err;
status = be_enable_if_filters(adapter);
if (status)
goto err;
status = be_irq_register(adapter);
if (status)
goto err;
@@ -3686,16 +3767,6 @@ static void be_cancel_err_detection(struct be_adapter *adapter)
}
}
static void be_mac_clear(struct be_adapter *adapter)
{
if (adapter->pmac_id) {
be_cmd_pmac_del(adapter, adapter->if_handle,
adapter->pmac_id[0], 0);
kfree(adapter->pmac_id);
adapter->pmac_id = NULL;
}
}
#ifdef CONFIG_BE2NET_VXLAN
static void be_disable_vxlan_offloads(struct be_adapter *adapter)
{
@@ -3770,8 +3841,8 @@ static int be_clear(struct be_adapter *adapter)
#ifdef CONFIG_BE2NET_VXLAN
be_disable_vxlan_offloads(adapter);
#endif
/* delete the primary mac along with the uc-mac list */
be_mac_clear(adapter);
kfree(adapter->pmac_id);
adapter->pmac_id = NULL;
be_cmd_if_destroy(adapter, adapter->if_handle, 0);
@@ -3782,25 +3853,11 @@ static int be_clear(struct be_adapter *adapter)
return 0;
}
static int be_if_create(struct be_adapter *adapter, u32 *if_handle,
u32 cap_flags, u32 vf)
{
u32 en_flags;
en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS |
BE_IF_FLAGS_RSS | BE_IF_FLAGS_DEFQ_RSS;
en_flags &= cap_flags;
return be_cmd_if_create(adapter, cap_flags, en_flags, if_handle, vf);
}
static int be_vfs_if_create(struct be_adapter *adapter)
{
struct be_resources res = {0};
u32 cap_flags, en_flags, vf;
struct be_vf_cfg *vf_cfg;
u32 cap_flags, vf;
int status;
/* If a FW profile exists, then cap_flags are updated */
@@ -3821,8 +3878,12 @@ static int be_vfs_if_create(struct be_adapter *adapter)
}
}
status = be_if_create(adapter, &vf_cfg->if_handle,
cap_flags, vf + 1);
en_flags = cap_flags & (BE_IF_FLAGS_UNTAGGED |
BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_MULTICAST |
BE_IF_FLAGS_PASS_L3L4_ERRORS);
status = be_cmd_if_create(adapter, cap_flags, en_flags,
&vf_cfg->if_handle, vf + 1);
if (status)
return status;
}
@@ -4194,15 +4255,8 @@ static int be_mac_setup(struct be_adapter *adapter)
memcpy(adapter->netdev->dev_addr, mac, ETH_ALEN);
memcpy(adapter->netdev->perm_addr, mac, ETH_ALEN);
} else {
/* Maybe the HW was reset; dev_addr must be re-programmed */
memcpy(mac, adapter->netdev->dev_addr, ETH_ALEN);
}
/* For BE3-R VFs, the PF programs the initial MAC address */
if (!(BEx_chip(adapter) && be_virtfn(adapter)))
be_cmd_pmac_add(adapter, mac, adapter->if_handle,
&adapter->pmac_id[0], 0);
return 0;
}
@@ -4342,6 +4396,7 @@ static int be_func_init(struct be_adapter *adapter)
static int be_setup(struct be_adapter *adapter)
{
struct device *dev = &adapter->pdev->dev;
u32 en_flags;
int status;
status = be_func_init(adapter);
@@ -4364,8 +4419,11 @@ static int be_setup(struct be_adapter *adapter)
if (status)
goto err;
status = be_if_create(adapter, &adapter->if_handle,
be_if_cap_flags(adapter), 0);
/* will enable all the needed filter flags in be_open() */
en_flags = BE_IF_FLAGS_RSS | BE_IF_FLAGS_DEFQ_RSS;
en_flags = en_flags & be_if_cap_flags(adapter);
status = be_cmd_if_create(adapter, be_if_cap_flags(adapter), en_flags,
&adapter->if_handle, 0);
if (status)
goto err;
@@ -4391,11 +4449,6 @@ static int be_setup(struct be_adapter *adapter)
dev_err(dev, "Please upgrade firmware to version >= 4.0\n");
}
if (adapter->vlans_added)
be_vid_config(adapter);
be_set_rx_mode(adapter->netdev);
status = be_cmd_set_flow_control(adapter, adapter->tx_fc,
adapter->rx_fc);
if (status)

View File

@@ -3433,6 +3433,7 @@ fec_probe(struct platform_device *pdev)
pm_runtime_set_autosuspend_delay(&pdev->dev, FEC_MDIO_PM_TIMEOUT);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_get_noresume(&pdev->dev);
pm_runtime_set_active(&pdev->dev);
pm_runtime_enable(&pdev->dev);

View File

@@ -586,7 +586,8 @@ static int fs_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
frag = skb_shinfo(skb)->frags;
while (nr_frags) {
CBDC_SC(bdp,
BD_ENET_TX_STATS | BD_ENET_TX_LAST | BD_ENET_TX_TC);
BD_ENET_TX_STATS | BD_ENET_TX_INTR | BD_ENET_TX_LAST |
BD_ENET_TX_TC);
CBDS_SC(bdp, BD_ENET_TX_READY);
if ((CBDR_SC(bdp) & BD_ENET_TX_WRAP) == 0)

View File

@@ -110,7 +110,7 @@ static int do_pd_setup(struct fs_enet_private *fep)
}
#define FEC_NAPI_RX_EVENT_MSK (FEC_ENET_RXF | FEC_ENET_RXB)
#define FEC_NAPI_TX_EVENT_MSK (FEC_ENET_TXF | FEC_ENET_TXB)
#define FEC_NAPI_TX_EVENT_MSK (FEC_ENET_TXF)
#define FEC_RX_EVENT (FEC_ENET_RXF)
#define FEC_TX_EVENT (FEC_ENET_TXF)
#define FEC_ERR_EVENT_MSK (FEC_ENET_HBERR | FEC_ENET_BABR | \

View File

@@ -900,27 +900,6 @@ static int gfar_check_filer_hardware(struct gfar_private *priv)
return 0;
}
static int gfar_comp_asc(const void *a, const void *b)
{
return memcmp(a, b, 4);
}
static int gfar_comp_desc(const void *a, const void *b)
{
return -memcmp(a, b, 4);
}
static void gfar_swap(void *a, void *b, int size)
{
u32 *_a = a;
u32 *_b = b;
swap(_a[0], _b[0]);
swap(_a[1], _b[1]);
swap(_a[2], _b[2]);
swap(_a[3], _b[3]);
}
/* Write a mask to filer cache */
static void gfar_set_mask(u32 mask, struct filer_table *tab)
{
@@ -1270,310 +1249,6 @@ static int gfar_convert_to_filer(struct ethtool_rx_flow_spec *rule,
return 0;
}
/* Copy size filer entries */
static void gfar_copy_filer_entries(struct gfar_filer_entry dst[0],
struct gfar_filer_entry src[0], s32 size)
{
while (size > 0) {
size--;
dst[size].ctrl = src[size].ctrl;
dst[size].prop = src[size].prop;
}
}
/* Delete the contents of the filer-table between start and end
* and collapse them
*/
static int gfar_trim_filer_entries(u32 begin, u32 end, struct filer_table *tab)
{
int length;
if (end > MAX_FILER_CACHE_IDX || end < begin)
return -EINVAL;
end++;
length = end - begin;
/* Copy */
while (end < tab->index) {
tab->fe[begin].ctrl = tab->fe[end].ctrl;
tab->fe[begin++].prop = tab->fe[end++].prop;
}
/* Fill up with don't cares */
while (begin < tab->index) {
tab->fe[begin].ctrl = 0x60;
tab->fe[begin].prop = 0xFFFFFFFF;
begin++;
}
tab->index -= length;
return 0;
}
/* Make space on the wanted location */
static int gfar_expand_filer_entries(u32 begin, u32 length,
struct filer_table *tab)
{
if (length == 0 || length + tab->index > MAX_FILER_CACHE_IDX ||
begin > MAX_FILER_CACHE_IDX)
return -EINVAL;
gfar_copy_filer_entries(&(tab->fe[begin + length]), &(tab->fe[begin]),
tab->index - length + 1);
tab->index += length;
return 0;
}
static int gfar_get_next_cluster_start(int start, struct filer_table *tab)
{
for (; (start < tab->index) && (start < MAX_FILER_CACHE_IDX - 1);
start++) {
if ((tab->fe[start].ctrl & (RQFCR_AND | RQFCR_CLE)) ==
(RQFCR_AND | RQFCR_CLE))
return start;
}
return -1;
}
static int gfar_get_next_cluster_end(int start, struct filer_table *tab)
{
for (; (start < tab->index) && (start < MAX_FILER_CACHE_IDX - 1);
start++) {
if ((tab->fe[start].ctrl & (RQFCR_AND | RQFCR_CLE)) ==
(RQFCR_CLE))
return start;
}
return -1;
}
/* Uses hardwares clustering option to reduce
* the number of filer table entries
*/
static void gfar_cluster_filer(struct filer_table *tab)
{
s32 i = -1, j, iend, jend;
while ((i = gfar_get_next_cluster_start(++i, tab)) != -1) {
j = i;
while ((j = gfar_get_next_cluster_start(++j, tab)) != -1) {
/* The cluster entries self and the previous one
* (a mask) must be identical!
*/
if (tab->fe[i].ctrl != tab->fe[j].ctrl)
break;
if (tab->fe[i].prop != tab->fe[j].prop)
break;
if (tab->fe[i - 1].ctrl != tab->fe[j - 1].ctrl)
break;
if (tab->fe[i - 1].prop != tab->fe[j - 1].prop)
break;
iend = gfar_get_next_cluster_end(i, tab);
jend = gfar_get_next_cluster_end(j, tab);
if (jend == -1 || iend == -1)
break;
/* First we make some free space, where our cluster
* element should be. Then we copy it there and finally
* delete in from its old location.
*/
if (gfar_expand_filer_entries(iend, (jend - j), tab) ==
-EINVAL)
break;
gfar_copy_filer_entries(&(tab->fe[iend + 1]),
&(tab->fe[jend + 1]), jend - j);
if (gfar_trim_filer_entries(jend - 1,
jend + (jend - j),
tab) == -EINVAL)
return;
/* Mask out cluster bit */
tab->fe[iend].ctrl &= ~(RQFCR_CLE);
}
}
}
/* Swaps the masked bits of a1<>a2 and b1<>b2 */
static void gfar_swap_bits(struct gfar_filer_entry *a1,
struct gfar_filer_entry *a2,
struct gfar_filer_entry *b1,
struct gfar_filer_entry *b2, u32 mask)
{
u32 temp[4];
temp[0] = a1->ctrl & mask;
temp[1] = a2->ctrl & mask;
temp[2] = b1->ctrl & mask;
temp[3] = b2->ctrl & mask;
a1->ctrl &= ~mask;
a2->ctrl &= ~mask;
b1->ctrl &= ~mask;
b2->ctrl &= ~mask;
a1->ctrl |= temp[1];
a2->ctrl |= temp[0];
b1->ctrl |= temp[3];
b2->ctrl |= temp[2];
}
/* Generate a list consisting of masks values with their start and
* end of validity and block as indicator for parts belonging
* together (glued by ANDs) in mask_table
*/
static u32 gfar_generate_mask_table(struct gfar_mask_entry *mask_table,
struct filer_table *tab)
{
u32 i, and_index = 0, block_index = 1;
for (i = 0; i < tab->index; i++) {
/* LSByte of control = 0 sets a mask */
if (!(tab->fe[i].ctrl & 0xF)) {
mask_table[and_index].mask = tab->fe[i].prop;
mask_table[and_index].start = i;
mask_table[and_index].block = block_index;
if (and_index >= 1)
mask_table[and_index - 1].end = i - 1;
and_index++;
}
/* cluster starts and ends will be separated because they should
* hold their position
*/
if (tab->fe[i].ctrl & RQFCR_CLE)
block_index++;
/* A not set AND indicates the end of a depended block */
if (!(tab->fe[i].ctrl & RQFCR_AND))
block_index++;
}
mask_table[and_index - 1].end = i - 1;
return and_index;
}
/* Sorts the entries of mask_table by the values of the masks.
* Important: The 0xFF80 flags of the first and last entry of a
* block must hold their position (which queue, CLusterEnable, ReJEct,
* AND)
*/
static void gfar_sort_mask_table(struct gfar_mask_entry *mask_table,
struct filer_table *temp_table, u32 and_index)
{
/* Pointer to compare function (_asc or _desc) */
int (*gfar_comp)(const void *, const void *);
u32 i, size = 0, start = 0, prev = 1;
u32 old_first, old_last, new_first, new_last;
gfar_comp = &gfar_comp_desc;
for (i = 0; i < and_index; i++) {
if (prev != mask_table[i].block) {
old_first = mask_table[start].start + 1;
old_last = mask_table[i - 1].end;
sort(mask_table + start, size,
sizeof(struct gfar_mask_entry),
gfar_comp, &gfar_swap);
/* Toggle order for every block. This makes the
* thing more efficient!
*/
if (gfar_comp == gfar_comp_desc)
gfar_comp = &gfar_comp_asc;
else
gfar_comp = &gfar_comp_desc;
new_first = mask_table[start].start + 1;
new_last = mask_table[i - 1].end;
gfar_swap_bits(&temp_table->fe[new_first],
&temp_table->fe[old_first],
&temp_table->fe[new_last],
&temp_table->fe[old_last],
RQFCR_QUEUE | RQFCR_CLE |
RQFCR_RJE | RQFCR_AND);
start = i;
size = 0;
}
size++;
prev = mask_table[i].block;
}
}
/* Reduces the number of masks needed in the filer table to save entries
* This is done by sorting the masks of a depended block. A depended block is
* identified by gluing ANDs or CLE. The sorting order toggles after every
* block. Of course entries in scope of a mask must change their location with
* it.
*/
static int gfar_optimize_filer_masks(struct filer_table *tab)
{
struct filer_table *temp_table;
struct gfar_mask_entry *mask_table;
u32 and_index = 0, previous_mask = 0, i = 0, j = 0, size = 0;
s32 ret = 0;
/* We need a copy of the filer table because
* we want to change its order
*/
temp_table = kmemdup(tab, sizeof(*temp_table), GFP_KERNEL);
if (temp_table == NULL)
return -ENOMEM;
mask_table = kcalloc(MAX_FILER_CACHE_IDX / 2 + 1,
sizeof(struct gfar_mask_entry), GFP_KERNEL);
if (mask_table == NULL) {
ret = -ENOMEM;
goto end;
}
and_index = gfar_generate_mask_table(mask_table, tab);
gfar_sort_mask_table(mask_table, temp_table, and_index);
/* Now we can copy the data from our duplicated filer table to
* the real one in the order the mask table says
*/
for (i = 0; i < and_index; i++) {
size = mask_table[i].end - mask_table[i].start + 1;
gfar_copy_filer_entries(&(tab->fe[j]),
&(temp_table->fe[mask_table[i].start]), size);
j += size;
}
/* And finally we just have to check for duplicated masks and drop the
* second ones
*/
for (i = 0; i < tab->index && i < MAX_FILER_CACHE_IDX; i++) {
if (tab->fe[i].ctrl == 0x80) {
previous_mask = i++;
break;
}
}
for (; i < tab->index && i < MAX_FILER_CACHE_IDX; i++) {
if (tab->fe[i].ctrl == 0x80) {
if (tab->fe[i].prop == tab->fe[previous_mask].prop) {
/* Two identical ones found!
* So drop the second one!
*/
gfar_trim_filer_entries(i, i, tab);
} else
/* Not identical! */
previous_mask = i;
}
}
kfree(mask_table);
end: kfree(temp_table);
return ret;
}
/* Write the bit-pattern from software's buffer to hardware registers */
static int gfar_write_filer_table(struct gfar_private *priv,
struct filer_table *tab)
@@ -1583,11 +1258,10 @@ static int gfar_write_filer_table(struct gfar_private *priv,
return -EBUSY;
/* Fill regular entries */
for (; i < MAX_FILER_IDX - 1 && (tab->fe[i].ctrl | tab->fe[i].prop);
i++)
for (; i < MAX_FILER_IDX && (tab->fe[i].ctrl | tab->fe[i].prop); i++)
gfar_write_filer(priv, i, tab->fe[i].ctrl, tab->fe[i].prop);
/* Fill the rest with fall-troughs */
for (; i < MAX_FILER_IDX - 1; i++)
for (; i < MAX_FILER_IDX; i++)
gfar_write_filer(priv, i, 0x60, 0xFFFFFFFF);
/* Last entry must be default accept
* because that's what people expect
@@ -1621,7 +1295,6 @@ static int gfar_process_filer_changes(struct gfar_private *priv)
{
struct ethtool_flow_spec_container *j;
struct filer_table *tab;
s32 i = 0;
s32 ret = 0;
/* So index is set to zero, too! */
@@ -1646,17 +1319,6 @@ static int gfar_process_filer_changes(struct gfar_private *priv)
}
}
i = tab->index;
/* Optimizations to save entries */
gfar_cluster_filer(tab);
gfar_optimize_filer_masks(tab);
pr_debug("\tSummary:\n"
"\tData on hardware: %d\n"
"\tCompression rate: %d%%\n",
tab->index, 100 - (100 * tab->index) / i);
/* Write everything to hardware */
ret = gfar_write_filer_table(priv, tab);
if (ret == -EBUSY) {
@@ -1722,13 +1384,14 @@ static int gfar_add_cls(struct gfar_private *priv,
}
process:
priv->rx_list.count++;
ret = gfar_process_filer_changes(priv);
if (ret)
goto clean_list;
priv->rx_list.count++;
return ret;
clean_list:
priv->rx_list.count--;
list_del(&temp->list);
clean_mem:
kfree(temp);

View File

@@ -27,6 +27,8 @@
#include <linux/of_address.h>
#include <linux/phy.h>
#include <linux/clk.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <uapi/linux/ppp_defs.h>
#include <net/ip.h>
#include <net/ipv6.h>
@@ -299,6 +301,7 @@
/* Coalescing */
#define MVPP2_TXDONE_COAL_PKTS_THRESH 15
#define MVPP2_TXDONE_HRTIMER_PERIOD_NS 1000000UL
#define MVPP2_RX_COAL_PKTS 32
#define MVPP2_RX_COAL_USEC 100
@@ -660,6 +663,14 @@ struct mvpp2_pcpu_stats {
u64 tx_bytes;
};
/* Per-CPU port control */
struct mvpp2_port_pcpu {
struct hrtimer tx_done_timer;
bool timer_scheduled;
/* Tasklet for egress finalization */
struct tasklet_struct tx_done_tasklet;
};
struct mvpp2_port {
u8 id;
@@ -679,6 +690,9 @@ struct mvpp2_port {
u32 pending_cause_rx;
struct napi_struct napi;
/* Per-CPU port control */
struct mvpp2_port_pcpu __percpu *pcpu;
/* Flags */
unsigned long flags;
@@ -776,6 +790,9 @@ struct mvpp2_txq_pcpu {
/* Array of transmitted skb */
struct sk_buff **tx_skb;
/* Array of transmitted buffers' physical addresses */
dma_addr_t *tx_buffs;
/* Index of last TX DMA descriptor that was inserted */
int txq_put_index;
@@ -913,8 +930,6 @@ struct mvpp2_bm_pool {
/* Occupied buffers indicator */
atomic_t in_use;
int in_use_thresh;
spinlock_t lock;
};
struct mvpp2_buff_hdr {
@@ -963,9 +978,13 @@ static void mvpp2_txq_inc_get(struct mvpp2_txq_pcpu *txq_pcpu)
}
static void mvpp2_txq_inc_put(struct mvpp2_txq_pcpu *txq_pcpu,
struct sk_buff *skb)
struct sk_buff *skb,
struct mvpp2_tx_desc *tx_desc)
{
txq_pcpu->tx_skb[txq_pcpu->txq_put_index] = skb;
if (skb)
txq_pcpu->tx_buffs[txq_pcpu->txq_put_index] =
tx_desc->buf_phys_addr;
txq_pcpu->txq_put_index++;
if (txq_pcpu->txq_put_index == txq_pcpu->size)
txq_pcpu->txq_put_index = 0;
@@ -3376,7 +3395,6 @@ static int mvpp2_bm_pool_create(struct platform_device *pdev,
bm_pool->pkt_size = 0;
bm_pool->buf_num = 0;
atomic_set(&bm_pool->in_use, 0);
spin_lock_init(&bm_pool->lock);
return 0;
}
@@ -3647,7 +3665,6 @@ static struct mvpp2_bm_pool *
mvpp2_bm_pool_use(struct mvpp2_port *port, int pool, enum mvpp2_bm_type type,
int pkt_size)
{
unsigned long flags = 0;
struct mvpp2_bm_pool *new_pool = &port->priv->bm_pools[pool];
int num;
@@ -3656,8 +3673,6 @@ mvpp2_bm_pool_use(struct mvpp2_port *port, int pool, enum mvpp2_bm_type type,
return NULL;
}
spin_lock_irqsave(&new_pool->lock, flags);
if (new_pool->type == MVPP2_BM_FREE)
new_pool->type = type;
@@ -3686,8 +3701,6 @@ mvpp2_bm_pool_use(struct mvpp2_port *port, int pool, enum mvpp2_bm_type type,
if (num != pkts_num) {
WARN(1, "pool %d: %d of %d allocated\n",
new_pool->id, num, pkts_num);
/* We need to undo the bufs_add() allocations */
spin_unlock_irqrestore(&new_pool->lock, flags);
return NULL;
}
}
@@ -3695,15 +3708,12 @@ mvpp2_bm_pool_use(struct mvpp2_port *port, int pool, enum mvpp2_bm_type type,
mvpp2_bm_pool_bufsize_set(port->priv, new_pool,
MVPP2_RX_BUF_SIZE(new_pool->pkt_size));
spin_unlock_irqrestore(&new_pool->lock, flags);
return new_pool;
}
/* Initialize pools for swf */
static int mvpp2_swf_bm_pool_init(struct mvpp2_port *port)
{
unsigned long flags = 0;
int rxq;
if (!port->pool_long) {
@@ -3714,9 +3724,7 @@ static int mvpp2_swf_bm_pool_init(struct mvpp2_port *port)
if (!port->pool_long)
return -ENOMEM;
spin_lock_irqsave(&port->pool_long->lock, flags);
port->pool_long->port_map |= (1 << port->id);
spin_unlock_irqrestore(&port->pool_long->lock, flags);
for (rxq = 0; rxq < rxq_number; rxq++)
mvpp2_rxq_long_pool_set(port, rxq, port->pool_long->id);
@@ -3730,9 +3738,7 @@ static int mvpp2_swf_bm_pool_init(struct mvpp2_port *port)
if (!port->pool_short)
return -ENOMEM;
spin_lock_irqsave(&port->pool_short->lock, flags);
port->pool_short->port_map |= (1 << port->id);
spin_unlock_irqrestore(&port->pool_short->lock, flags);
for (rxq = 0; rxq < rxq_number; rxq++)
mvpp2_rxq_short_pool_set(port, rxq,
@@ -3806,7 +3812,6 @@ static void mvpp2_interrupts_unmask(void *arg)
mvpp2_write(port->priv, MVPP2_ISR_RX_TX_MASK_REG(port->id),
(MVPP2_CAUSE_MISC_SUM_MASK |
MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK |
MVPP2_CAUSE_RXQ_OCCUP_DESC_ALL_MASK));
}
@@ -4382,23 +4387,6 @@ static void mvpp2_rx_time_coal_set(struct mvpp2_port *port,
rxq->time_coal = usec;
}
/* Set threshold for TX_DONE pkts coalescing */
static void mvpp2_tx_done_pkts_coal_set(void *arg)
{
struct mvpp2_port *port = arg;
int queue;
u32 val;
for (queue = 0; queue < txq_number; queue++) {
struct mvpp2_tx_queue *txq = port->txqs[queue];
val = (txq->done_pkts_coal << MVPP2_TRANSMITTED_THRESH_OFFSET) &
MVPP2_TRANSMITTED_THRESH_MASK;
mvpp2_write(port->priv, MVPP2_TXQ_NUM_REG, txq->id);
mvpp2_write(port->priv, MVPP2_TXQ_THRESH_REG, val);
}
}
/* Free Tx queue skbuffs */
static void mvpp2_txq_bufs_free(struct mvpp2_port *port,
struct mvpp2_tx_queue *txq,
@@ -4407,8 +4395,8 @@ static void mvpp2_txq_bufs_free(struct mvpp2_port *port,
int i;
for (i = 0; i < num; i++) {
struct mvpp2_tx_desc *tx_desc = txq->descs +
txq_pcpu->txq_get_index;
dma_addr_t buf_phys_addr =
txq_pcpu->tx_buffs[txq_pcpu->txq_get_index];
struct sk_buff *skb = txq_pcpu->tx_skb[txq_pcpu->txq_get_index];
mvpp2_txq_inc_get(txq_pcpu);
@@ -4416,8 +4404,8 @@ static void mvpp2_txq_bufs_free(struct mvpp2_port *port,
if (!skb)
continue;
dma_unmap_single(port->dev->dev.parent, tx_desc->buf_phys_addr,
tx_desc->data_size, DMA_TO_DEVICE);
dma_unmap_single(port->dev->dev.parent, buf_phys_addr,
skb_headlen(skb), DMA_TO_DEVICE);
dev_kfree_skb_any(skb);
}
}
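
The put/free changes above record the DMA address of each mapped buffer per ring slot (tx_buffs[]) so that completion can unmap exactly what was mapped, instead of re-reading it from the descriptor. A rough userspace sketch of the same parallel-array ring idea (struct tx_ring, ring_put() and ring_get() are invented names):

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4

struct tx_ring {
        void *skb[RING_SIZE];           /* what to free on completion */
        uint64_t buf_addr[RING_SIZE];   /* what to unmap on completion */
        int put, get;
};

static void ring_put(struct tx_ring *r, void *skb, uint64_t addr)
{
        r->skb[r->put] = skb;
        r->buf_addr[r->put] = addr;     /* recorded per slot, like tx_buffs[] */
        if (++r->put == RING_SIZE)
                r->put = 0;             /* wrap, as txq_put_index does */
}

static void ring_get(struct tx_ring *r, void **skb, uint64_t *addr)
{
        *skb = r->skb[r->get];
        *addr = r->buf_addr[r->get];
        if (++r->get == RING_SIZE)
                r->get = 0;
}

int main(void)
{
        struct tx_ring r = { 0 };
        char pkt_a, pkt_b;              /* stand-ins for skbs */
        void *skb;
        uint64_t addr;

        ring_put(&r, &pkt_a, 0x1000);
        ring_put(&r, &pkt_b, 0x2000);
        ring_get(&r, &skb, &addr);
        printf("completed skb=%p addr=0x%llx\n", skb,
               (unsigned long long)addr);
        return 0;
}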
@@ -4433,7 +4421,7 @@ static inline struct mvpp2_rx_queue *mvpp2_get_rx_queue(struct mvpp2_port *port,
static inline struct mvpp2_tx_queue *mvpp2_get_tx_queue(struct mvpp2_port *port,
u32 cause)
{
int queue = fls(cause >> 16) - 1;
int queue = fls(cause) - 1;
return port->txqs[queue];
}
@@ -4460,6 +4448,29 @@ static void mvpp2_txq_done(struct mvpp2_port *port, struct mvpp2_tx_queue *txq,
netif_tx_wake_queue(nq);
}
static unsigned int mvpp2_tx_done(struct mvpp2_port *port, u32 cause)
{
struct mvpp2_tx_queue *txq;
struct mvpp2_txq_pcpu *txq_pcpu;
unsigned int tx_todo = 0;
while (cause) {
txq = mvpp2_get_tx_queue(port, cause);
if (!txq)
break;
txq_pcpu = this_cpu_ptr(txq->pcpu);
if (txq_pcpu->count) {
mvpp2_txq_done(port, txq, txq_pcpu);
tx_todo += txq_pcpu->count;
}
cause &= ~(1 << txq->log_id);
}
return tx_todo;
}
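
mvpp2_tx_done() walks the TX-done cause bitmap highest bit first and clears each queue's bit once it has been serviced. A minimal standalone sketch of that loop, assuming a GCC/Clang toolchain (fls_local() stands in for the kernel's fls()):

#include <stdio.h>

/* Userspace stand-in for the kernel's fls(): 1-based index of the highest set bit. */
static int fls_local(unsigned int x)
{
        return x ? 32 - __builtin_clz(x) : 0;
}

int main(void)
{
        unsigned int cause = 0x15;      /* pretend queues 0, 2 and 4 have work */

        while (cause) {
                int queue = fls_local(cause) - 1;

                printf("process tx-done for queue %d\n", queue);
                cause &= ~(1u << queue);        /* clear the serviced queue's bit */
        }
        return 0;
}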
/* Rx/Tx queue initialization/cleanup methods */
/* Allocate and initialize descriptors for aggr TXQ */
@@ -4649,12 +4660,13 @@ static int mvpp2_txq_init(struct mvpp2_port *port,
txq_pcpu->tx_skb = kmalloc(txq_pcpu->size *
sizeof(*txq_pcpu->tx_skb),
GFP_KERNEL);
if (!txq_pcpu->tx_skb) {
dma_free_coherent(port->dev->dev.parent,
txq->size * MVPP2_DESC_ALIGNED_SIZE,
txq->descs, txq->descs_phys);
return -ENOMEM;
}
if (!txq_pcpu->tx_skb)
goto error;
txq_pcpu->tx_buffs = kmalloc(txq_pcpu->size *
sizeof(dma_addr_t), GFP_KERNEL);
if (!txq_pcpu->tx_buffs)
goto error;
txq_pcpu->count = 0;
txq_pcpu->reserved_num = 0;
@@ -4663,6 +4675,19 @@ static int mvpp2_txq_init(struct mvpp2_port *port,
}
return 0;
error:
for_each_present_cpu(cpu) {
txq_pcpu = per_cpu_ptr(txq->pcpu, cpu);
kfree(txq_pcpu->tx_skb);
kfree(txq_pcpu->tx_buffs);
}
dma_free_coherent(port->dev->dev.parent,
txq->size * MVPP2_DESC_ALIGNED_SIZE,
txq->descs, txq->descs_phys);
return -ENOMEM;
}
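
The rework above funnels all allocation failures to a single error: label that frees every per-CPU array before releasing the descriptor memory. A minimal userspace sketch of the same unwind pattern (alloc_all(), struct percpu and the sizes are illustrative only):

#include <stdlib.h>

#define NCPU 4

struct percpu {
        void *tx_skb;
        void *tx_buffs;
};

static int alloc_all(struct percpu *pc, void **descs)
{
        *descs = malloc(4096);
        if (!*descs)
                return -1;

        for (int cpu = 0; cpu < NCPU; cpu++) {
                pc[cpu].tx_skb = malloc(64);
                if (!pc[cpu].tx_skb)
                        goto error;
                pc[cpu].tx_buffs = malloc(64);
                if (!pc[cpu].tx_buffs)
                        goto error;
        }
        return 0;

error:
        /* Free whatever was allocated so far; free(NULL) is a no-op. */
        for (int cpu = 0; cpu < NCPU; cpu++) {
                free(pc[cpu].tx_skb);
                free(pc[cpu].tx_buffs);
        }
        free(*descs);
        return -1;
}

int main(void)
{
        struct percpu pc[NCPU] = { 0 };
        void *descs;

        if (alloc_all(pc, &descs))
                return 1;

        /* Normal teardown mirrors the error path. */
        for (int cpu = 0; cpu < NCPU; cpu++) {
                free(pc[cpu].tx_skb);
                free(pc[cpu].tx_buffs);
        }
        free(descs);
        return 0;
}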
/* Free allocated TXQ resources */
@@ -4675,6 +4700,7 @@ static void mvpp2_txq_deinit(struct mvpp2_port *port,
for_each_present_cpu(cpu) {
txq_pcpu = per_cpu_ptr(txq->pcpu, cpu);
kfree(txq_pcpu->tx_skb);
kfree(txq_pcpu->tx_buffs);
}
if (txq->descs)
@@ -4805,7 +4831,6 @@ static int mvpp2_setup_txqs(struct mvpp2_port *port)
goto err_cleanup;
}
on_each_cpu(mvpp2_tx_done_pkts_coal_set, port, 1);
on_each_cpu(mvpp2_txq_sent_counter_clear, port, 1);
return 0;
@@ -4887,6 +4912,49 @@ static void mvpp2_link_event(struct net_device *dev)
}
}
static void mvpp2_timer_set(struct mvpp2_port_pcpu *port_pcpu)
{
ktime_t interval;
if (!port_pcpu->timer_scheduled) {
port_pcpu->timer_scheduled = true;
interval = ktime_set(0, MVPP2_TXDONE_HRTIMER_PERIOD_NS);
hrtimer_start(&port_pcpu->tx_done_timer, interval,
HRTIMER_MODE_REL_PINNED);
}
}
static void mvpp2_tx_proc_cb(unsigned long data)
{
struct net_device *dev = (struct net_device *)data;
struct mvpp2_port *port = netdev_priv(dev);
struct mvpp2_port_pcpu *port_pcpu = this_cpu_ptr(port->pcpu);
unsigned int tx_todo, cause;
if (!netif_running(dev))
return;
port_pcpu->timer_scheduled = false;
/* Process all the Tx queues */
cause = (1 << txq_number) - 1;
tx_todo = mvpp2_tx_done(port, cause);
/* Set the timer in case not all the packets were processed */
if (tx_todo)
mvpp2_timer_set(port_pcpu);
}
static enum hrtimer_restart mvpp2_hr_timer_cb(struct hrtimer *timer)
{
struct mvpp2_port_pcpu *port_pcpu = container_of(timer,
struct mvpp2_port_pcpu,
tx_done_timer);
tasklet_schedule(&port_pcpu->tx_done_tasklet);
return HRTIMER_NORESTART;
}
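
mvpp2_timer_set() arms the hrtimer only when it is not already scheduled, and the tasklet re-arms it while packets remain outstanding. A rough userspace analogue using a POSIX timer, assuming Linux (older glibc needs -lrt); the names and the 1 ms period merely mirror MVPP2_TXDONE_HRTIMER_PERIOD_NS:

#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define PERIOD_NS 1000000L      /* mirrors MVPP2_TXDONE_HRTIMER_PERIOD_NS */

static timer_t tx_done_timer;
static volatile bool timer_scheduled;
static volatile int pending_work = 5;   /* pretend 5 packets still need completion */

static void timer_set(void)
{
        struct itimerspec its = {
                .it_value = { .tv_sec = 0, .tv_nsec = PERIOD_NS },
        };

        if (!timer_scheduled) {
                timer_scheduled = true;
                timer_settime(tx_done_timer, 0, &its, NULL);
        }
}

static void tx_done_cb(union sigval sv)
{
        (void)sv;
        timer_scheduled = false;
        if (pending_work > 0) {
                printf("completed one unit, %d left\n", --pending_work);
                timer_set();            /* re-arm while work remains */
        }
}

int main(void)
{
        struct sigevent sev = {
                .sigev_notify = SIGEV_THREAD,
                .sigev_notify_function = tx_done_cb,
        };

        timer_create(CLOCK_MONOTONIC, &sev, &tx_done_timer);
        timer_set();
        sleep(1);                       /* let the timer drain the work */
        return 0;
}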
/* Main RX/TX processing routines */
/* Display more error info */
@@ -5144,11 +5212,11 @@ static int mvpp2_tx_frag_process(struct mvpp2_port *port, struct sk_buff *skb,
if (i == (skb_shinfo(skb)->nr_frags - 1)) {
/* Last descriptor */
tx_desc->command = MVPP2_TXD_L_DESC;
mvpp2_txq_inc_put(txq_pcpu, skb);
mvpp2_txq_inc_put(txq_pcpu, skb, tx_desc);
} else {
/* Descriptor in the middle: Not First, Not Last */
tx_desc->command = 0;
mvpp2_txq_inc_put(txq_pcpu, NULL);
mvpp2_txq_inc_put(txq_pcpu, NULL, tx_desc);
}
}
@@ -5214,12 +5282,12 @@ static int mvpp2_tx(struct sk_buff *skb, struct net_device *dev)
/* First and Last descriptor */
tx_cmd |= MVPP2_TXD_F_DESC | MVPP2_TXD_L_DESC;
tx_desc->command = tx_cmd;
mvpp2_txq_inc_put(txq_pcpu, skb);
mvpp2_txq_inc_put(txq_pcpu, skb, tx_desc);
} else {
/* First but not Last */
tx_cmd |= MVPP2_TXD_F_DESC | MVPP2_TXD_PADDING_DISABLE;
tx_desc->command = tx_cmd;
mvpp2_txq_inc_put(txq_pcpu, NULL);
mvpp2_txq_inc_put(txq_pcpu, NULL, tx_desc);
/* Continue with other skb fragments */
if (mvpp2_tx_frag_process(port, skb, aggr_txq, txq)) {
@@ -5255,6 +5323,17 @@ out:
dev_kfree_skb_any(skb);
}
/* Finalize TX processing */
if (txq_pcpu->count >= txq->done_pkts_coal)
mvpp2_txq_done(port, txq, txq_pcpu);
/* Set the timer in case not all frags were processed */
if (txq_pcpu->count <= frags && txq_pcpu->count > 0) {
struct mvpp2_port_pcpu *port_pcpu = this_cpu_ptr(port->pcpu);
mvpp2_timer_set(port_pcpu);
}
return NETDEV_TX_OK;
}
@@ -5268,10 +5347,11 @@ static inline void mvpp2_cause_error(struct net_device *dev, int cause)
netdev_err(dev, "tx fifo underrun error\n");
}
static void mvpp2_txq_done_percpu(void *arg)
static int mvpp2_poll(struct napi_struct *napi, int budget)
{
struct mvpp2_port *port = arg;
u32 cause_rx_tx, cause_tx, cause_misc;
u32 cause_rx_tx, cause_rx, cause_misc;
int rx_done = 0;
struct mvpp2_port *port = netdev_priv(napi->dev);
/* Rx/Tx cause register
*
@@ -5285,7 +5365,7 @@ static void mvpp2_txq_done_percpu(void *arg)
*/
cause_rx_tx = mvpp2_read(port->priv,
MVPP2_ISR_RX_TX_CAUSE_REG(port->id));
cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
cause_rx_tx &= ~MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;
cause_misc = cause_rx_tx & MVPP2_CAUSE_MISC_SUM_MASK;
if (cause_misc) {
@@ -5297,26 +5377,6 @@ static void mvpp2_txq_done_percpu(void *arg)
cause_rx_tx & ~MVPP2_CAUSE_MISC_SUM_MASK);
}
/* Release TX descriptors */
if (cause_tx) {
struct mvpp2_tx_queue *txq = mvpp2_get_tx_queue(port, cause_tx);
struct mvpp2_txq_pcpu *txq_pcpu = this_cpu_ptr(txq->pcpu);
if (txq_pcpu->count)
mvpp2_txq_done(port, txq, txq_pcpu);
}
}
static int mvpp2_poll(struct napi_struct *napi, int budget)
{
u32 cause_rx_tx, cause_rx;
int rx_done = 0;
struct mvpp2_port *port = netdev_priv(napi->dev);
on_each_cpu(mvpp2_txq_done_percpu, port, 1);
cause_rx_tx = mvpp2_read(port->priv,
MVPP2_ISR_RX_TX_CAUSE_REG(port->id));
cause_rx = cause_rx_tx & MVPP2_CAUSE_RXQ_OCCUP_DESC_ALL_MASK;
/* Process RX packets */
@@ -5561,6 +5621,8 @@ err_cleanup_rxqs:
static int mvpp2_stop(struct net_device *dev)
{
struct mvpp2_port *port = netdev_priv(dev);
struct mvpp2_port_pcpu *port_pcpu;
int cpu;
mvpp2_stop_dev(port);
mvpp2_phy_disconnect(port);
@@ -5569,6 +5631,13 @@ static int mvpp2_stop(struct net_device *dev)
on_each_cpu(mvpp2_interrupts_mask, port, 1);
free_irq(port->irq, port);
for_each_present_cpu(cpu) {
port_pcpu = per_cpu_ptr(port->pcpu, cpu);
hrtimer_cancel(&port_pcpu->tx_done_timer);
port_pcpu->timer_scheduled = false;
tasklet_kill(&port_pcpu->tx_done_tasklet);
}
mvpp2_cleanup_rxqs(port);
mvpp2_cleanup_txqs(port);
@@ -5784,7 +5853,6 @@ static int mvpp2_ethtool_set_coalesce(struct net_device *dev,
txq->done_pkts_coal = c->tx_max_coalesced_frames;
}
on_each_cpu(mvpp2_tx_done_pkts_coal_set, port, 1);
return 0;
}
@@ -6035,6 +6103,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
{
struct device_node *phy_node;
struct mvpp2_port *port;
struct mvpp2_port_pcpu *port_pcpu;
struct net_device *dev;
struct resource *res;
const char *dt_mac_addr;
@@ -6044,7 +6113,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
int features;
int phy_mode;
int priv_common_regs_num = 2;
int err, i;
int err, i, cpu;
dev = alloc_etherdev_mqs(sizeof(struct mvpp2_port), txq_number,
rxq_number);
@@ -6135,6 +6204,24 @@ static int mvpp2_port_probe(struct platform_device *pdev,
}
mvpp2_port_power_up(port);
port->pcpu = alloc_percpu(struct mvpp2_port_pcpu);
if (!port->pcpu) {
err = -ENOMEM;
goto err_free_txq_pcpu;
}
for_each_present_cpu(cpu) {
port_pcpu = per_cpu_ptr(port->pcpu, cpu);
hrtimer_init(&port_pcpu->tx_done_timer, CLOCK_MONOTONIC,
HRTIMER_MODE_REL_PINNED);
port_pcpu->tx_done_timer.function = mvpp2_hr_timer_cb;
port_pcpu->timer_scheduled = false;
tasklet_init(&port_pcpu->tx_done_tasklet, mvpp2_tx_proc_cb,
(unsigned long)dev);
}
netif_napi_add(dev, &port->napi, mvpp2_poll, NAPI_POLL_WEIGHT);
features = NETIF_F_SG | NETIF_F_IP_CSUM;
dev->features = features | NETIF_F_RXCSUM;
@@ -6144,7 +6231,7 @@ static int mvpp2_port_probe(struct platform_device *pdev,
err = register_netdev(dev);
if (err < 0) {
dev_err(&pdev->dev, "failed to register netdev\n");
goto err_free_txq_pcpu;
goto err_free_port_pcpu;
}
netdev_info(dev, "Using %s mac address %pM\n", mac_from, dev->dev_addr);
@@ -6153,6 +6240,8 @@ static int mvpp2_port_probe(struct platform_device *pdev,
priv->port_list[id] = port;
return 0;
err_free_port_pcpu:
free_percpu(port->pcpu);
err_free_txq_pcpu:
for (i = 0; i < txq_number; i++)
free_percpu(port->txqs[i]->pcpu);
@@ -6171,6 +6260,7 @@ static void mvpp2_port_remove(struct mvpp2_port *port)
int i;
unregister_netdev(port->dev);
free_percpu(port->pcpu);
free_percpu(port->stats);
for (i = 0; i < txq_number; i++)
free_percpu(port->txqs[i]->pcpu);

View File

@@ -391,6 +391,8 @@ static int handle_hca_cap(struct mlx5_core_dev *dev)
/* disable cmdif checksum */
MLX5_SET(cmd_hca_cap, set_hca_cap, cmdif_checksum, 0);
MLX5_SET(cmd_hca_cap, set_hca_cap, log_uar_page_sz, PAGE_SHIFT - 12);
err = set_caps(dev, set_ctx, set_sz);
query_ex:

View File

@@ -4875,10 +4875,12 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
case RTL_GIGA_MAC_VER_46:
case RTL_GIGA_MAC_VER_47:
case RTL_GIGA_MAC_VER_48:
RTL_W32(RxConfig, RX128_INT_EN | RX_DMA_BURST | RX_EARLY_OFF);
break;
case RTL_GIGA_MAC_VER_49:
case RTL_GIGA_MAC_VER_50:
case RTL_GIGA_MAC_VER_51:
RTL_W32(RxConfig, RX128_INT_EN | RX_DMA_BURST | RX_EARLY_OFF);
RTL_W32(RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST | RX_EARLY_OFF);
break;
default:
RTL_W32(RxConfig, RX128_INT_EN | RX_DMA_BURST);

View File

@@ -4821,6 +4821,7 @@ static void rocker_remove_ports(const struct rocker *rocker)
rocker_port_ig_tbl(rocker_port, SWITCHDEV_TRANS_NONE,
ROCKER_OP_FLAG_REMOVE);
unregister_netdev(rocker_port->dev);
free_netdev(rocker_port->dev);
}
kfree(rocker->ports);
}

View File

@@ -42,7 +42,7 @@
#define NSS_COMMON_CLK_DIV_MASK 0x7f
#define NSS_COMMON_CLK_SRC_CTRL 0x14
#define NSS_COMMON_CLK_SRC_CTRL_OFFSET(x) (1 << x)
#define NSS_COMMON_CLK_SRC_CTRL_OFFSET(x) (x)
/* Mode is coded on 1 bit but is different depending on the MAC ID:
* MAC0: QSGMII=0 RGMII=1
* MAC1: QSGMII=0 SGMII=0 RGMII=1
@@ -291,7 +291,7 @@ static void *ipq806x_gmac_setup(struct platform_device *pdev)
/* Configure the clock src according to the mode */
regmap_read(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, &val);
val &= ~NSS_COMMON_CLK_SRC_CTRL_OFFSET(gmac->id);
val &= ~(1 << NSS_COMMON_CLK_SRC_CTRL_OFFSET(gmac->id));
switch (gmac->phy_mode) {
case PHY_INTERFACE_MODE_RGMII:
val |= NSS_COMMON_CLK_SRC_CTRL_RGMII(gmac->id) <<

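The macro fix above makes NSS_COMMON_CLK_SRC_CTRL_OFFSET() return a bit position rather than a pre-shifted mask, so the caller's explicit shift no longer double-shifts. A tiny sketch showing the difference for a hypothetical MAC id:

#include <stdio.h>

#define OLD_OFFSET(x) (1 << (x))        /* before the fix: already a mask */
#define NEW_OFFSET(x) (x)               /* after the fix: a bit position  */

int main(void)
{
        int id = 1;                     /* hypothetical MAC id */

        /* The caller computes "1 << OFFSET(id)"; with the old macro that
         * is a double shift and touches the wrong bit.
         */
        printf("old: bit %d, mask 0x%x\n", OLD_OFFSET(id), 1u << OLD_OFFSET(id));
        printf("new: bit %d, mask 0x%x\n", NEW_OFFSET(id), 1u << NEW_OFFSET(id));
        return 0;
}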
View File

@@ -85,7 +85,6 @@ struct netcp_intf {
struct list_head rxhook_list_head;
unsigned int rx_queue_id;
void *rx_fdq[KNAV_DMA_FDQ_PER_CHAN];
u32 rx_buffer_sizes[KNAV_DMA_FDQ_PER_CHAN];
struct napi_struct rx_napi;
struct napi_struct tx_napi;

View File

@@ -34,6 +34,7 @@
#define NETCP_SOP_OFFSET (NET_IP_ALIGN + NET_SKB_PAD)
#define NETCP_NAPI_WEIGHT 64
#define NETCP_TX_TIMEOUT (5 * HZ)
#define NETCP_PACKET_SIZE (ETH_FRAME_LEN + ETH_FCS_LEN)
#define NETCP_MIN_PACKET_SIZE ETH_ZLEN
#define NETCP_MAX_MCAST_ADDR 16
@@ -804,30 +805,28 @@ static void netcp_allocate_rx_buf(struct netcp_intf *netcp, int fdq)
if (likely(fdq == 0)) {
unsigned int primary_buf_len;
/* Allocate a primary receive queue entry */
buf_len = netcp->rx_buffer_sizes[0] + NETCP_SOP_OFFSET;
buf_len = NETCP_PACKET_SIZE + NETCP_SOP_OFFSET;
primary_buf_len = SKB_DATA_ALIGN(buf_len) +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
if (primary_buf_len <= PAGE_SIZE) {
bufptr = netdev_alloc_frag(primary_buf_len);
pad[1] = primary_buf_len;
} else {
bufptr = kmalloc(primary_buf_len, GFP_ATOMIC |
GFP_DMA32 | __GFP_COLD);
pad[1] = 0;
}
bufptr = netdev_alloc_frag(primary_buf_len);
pad[1] = primary_buf_len;
if (unlikely(!bufptr)) {
dev_warn_ratelimited(netcp->ndev_dev, "Primary RX buffer alloc failed\n");
dev_warn_ratelimited(netcp->ndev_dev,
"Primary RX buffer alloc failed\n");
goto fail;
}
dma = dma_map_single(netcp->dev, bufptr, buf_len,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(netcp->dev, dma)))
goto fail;
pad[0] = (u32)bufptr;
} else {
/* Allocate a secondary receive queue entry */
page = alloc_page(GFP_ATOMIC | GFP_DMA32 | __GFP_COLD);
page = alloc_page(GFP_ATOMIC | GFP_DMA | __GFP_COLD);
if (unlikely(!page)) {
dev_warn_ratelimited(netcp->ndev_dev, "Secondary page alloc failed\n");
goto fail;
@@ -1010,7 +1009,7 @@ netcp_tx_map_skb(struct sk_buff *skb, struct netcp_intf *netcp)
/* Map the linear buffer */
dma_addr = dma_map_single(dev, skb->data, pkt_len, DMA_TO_DEVICE);
if (unlikely(!dma_addr)) {
if (unlikely(dma_mapping_error(dev, dma_addr))) {
dev_err(netcp->ndev_dev, "Failed to map skb buffer\n");
return NULL;
}
@@ -1546,8 +1545,8 @@ static int netcp_setup_navigator_resources(struct net_device *ndev)
knav_queue_disable_notify(netcp->rx_queue);
/* open Rx FDQs */
for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN &&
netcp->rx_queue_depths[i] && netcp->rx_buffer_sizes[i]; ++i) {
for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN && netcp->rx_queue_depths[i];
++i) {
snprintf(name, sizeof(name), "rx-fdq-%s-%d", ndev->name, i);
netcp->rx_fdq[i] = knav_queue_open(name, KNAV_QUEUE_GP, 0);
if (IS_ERR_OR_NULL(netcp->rx_fdq[i])) {
@@ -1941,14 +1940,6 @@ static int netcp_create_interface(struct netcp_device *netcp_device,
netcp->rx_queue_depths[0] = 128;
}
ret = of_property_read_u32_array(node_interface, "rx-buffer-size",
netcp->rx_buffer_sizes,
KNAV_DMA_FDQ_PER_CHAN);
if (ret) {
dev_err(dev, "missing \"rx-buffer-size\" parameter\n");
netcp->rx_buffer_sizes[0] = 1536;
}
ret = of_property_read_u32_array(node_interface, "rx-pool", temp, 2);
if (ret < 0) {
dev_err(dev, "missing \"rx-pool\" parameter\n");

View File

@@ -728,11 +728,12 @@ static int mkiss_open(struct tty_struct *tty)
dev->type = ARPHRD_AX25;
/* Perform the low-level AX25 initialization. */
if ((err = ax_open(ax->dev))) {
err = ax_open(ax->dev);
if (err)
goto out_free_netdev;
}
if (register_netdev(dev))
err = register_netdev(dev);
if (err)
goto out_free_buffers;
/* after register_netdev() - because else printk smashes the kernel */

View File

@@ -102,6 +102,12 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
netdev_dbg(ndev, "%s: %d byte payload received\n", __func__, len);
if (len < 0) {
ndev->stats.rx_errors++;
ndev->stats.rx_length_errors++;
goto enqueue_again;
}
skb_put(skb, len);
skb->protocol = eth_type_trans(skb, ndev);
skb->ip_summed = CHECKSUM_NONE;
@@ -121,6 +127,7 @@ static void ntb_netdev_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
return;
}
enqueue_again:
rc = ntb_transport_rx_enqueue(qp, skb, skb->data, ndev->mtu + ETH_HLEN);
if (rc) {
dev_kfree_skb(skb);
@@ -184,7 +191,7 @@ static int ntb_netdev_open(struct net_device *ndev)
rc = ntb_transport_rx_enqueue(dev->qp, skb, skb->data,
ndev->mtu + ETH_HLEN);
if (rc == -EINVAL) {
if (rc) {
dev_kfree_skb(skb);
goto err;
}

View File

@@ -1756,9 +1756,9 @@ static int virtnet_probe(struct virtio_device *vdev)
/* Do we support "hardware" checksums? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_CSUM)) {
/* This opens up the world of extra features. */
dev->hw_features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;
dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG;
if (csum)
dev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;
dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;
if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) {
dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO

View File

@@ -589,7 +589,8 @@ static int cosa_probe(int base, int irq, int dma)
chan->netdev->base_addr = chan->cosa->datareg;
chan->netdev->irq = chan->cosa->irq;
chan->netdev->dma = chan->cosa->dma;
if (register_hdlc_device(chan->netdev)) {
err = register_hdlc_device(chan->netdev);
if (err) {
netdev_warn(chan->netdev,
"register_hdlc_device() failed\n");
free_netdev(chan->netdev);

View File

@@ -3728,7 +3728,7 @@ const u32 *b43_nphy_get_tx_gain_table(struct b43_wldev *dev)
switch (phy->rev) {
case 6:
case 5:
if (sprom->fem.ghz5.extpa_gain == 3)
if (sprom->fem.ghz2.extpa_gain == 3)
return b43_ntab_tx_gain_epa_rev3_hi_pwr_2g;
/* fall through */
case 4:

View File

@@ -1023,7 +1023,7 @@ static void iwl_mvm_scan_umac_dwell(struct iwl_mvm *mvm,
cmd->scan_priority =
iwl_mvm_scan_priority(mvm, IWL_SCAN_PRIORITY_EXT_6);
if (iwl_mvm_scan_total_iterations(params) == 0)
if (iwl_mvm_scan_total_iterations(params) == 1)
cmd->ooc_priority =
iwl_mvm_scan_priority(mvm, IWL_SCAN_PRIORITY_EXT_6);
else

View File

@@ -478,10 +478,16 @@ static void iwl_pcie_apm_stop(struct iwl_trans *trans, bool op_mode_leave)
if (trans->cfg->device_family == IWL_DEVICE_FAMILY_7000)
iwl_set_bits_prph(trans, APMG_PCIDEV_STT_REG,
APMG_PCIDEV_STT_VAL_WAKE_ME);
else if (trans->cfg->device_family == IWL_DEVICE_FAMILY_8000)
else if (trans->cfg->device_family == IWL_DEVICE_FAMILY_8000) {
iwl_set_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG,
CSR_RESET_LINK_PWR_MGMT_DISABLED);
iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG,
CSR_HW_IF_CONFIG_REG_PREPARE |
CSR_HW_IF_CONFIG_REG_ENABLE_PME);
mdelay(1);
iwl_clear_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG,
CSR_RESET_LINK_PWR_MGMT_DISABLED);
}
mdelay(5);
}
@@ -575,6 +581,10 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
if (ret >= 0)
return 0;
iwl_set_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG,
CSR_RESET_LINK_PWR_MGMT_DISABLED);
msleep(1);
for (iter = 0; iter < 10; iter++) {
/* If HW is not ready, prepare the conditions to check again */
iwl_set_bit(trans, CSR_HW_IF_CONFIG_REG,
@@ -582,8 +592,10 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
do {
ret = iwl_pcie_set_hw_ready(trans);
if (ret >= 0)
return 0;
if (ret >= 0) {
ret = 0;
goto out;
}
usleep_range(200, 1000);
t += 200;
@@ -593,6 +605,10 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
IWL_ERR(trans, "Couldn't prepare the card\n");
out:
iwl_clear_bit(trans, CSR_DBG_LINK_PWR_MGMT_REG,
CSR_RESET_LINK_PWR_MGMT_DISABLED);
return ret;
}

View File

@@ -1875,8 +1875,19 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
/* start timer if queue currently empty */
if (q->read_ptr == q->write_ptr) {
if (txq->wd_timeout)
mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
if (txq->wd_timeout) {
/*
* If the TXQ is active, set the timer; if not, store the
* timeout in the remainder so that the timer will be armed
* with the right value when the station wakes up.
*/
if (!txq->frozen)
mod_timer(&txq->stuck_timer,
jiffies + txq->wd_timeout);
else
txq->frozen_expiry_remainder = txq->wd_timeout;
}
IWL_DEBUG_RPM(trans, "Q: %d first tx - take ref\n", q->id);
iwl_trans_pcie_ref(trans);
}

View File

@@ -172,6 +172,7 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
(struct rsi_91x_sdiodev *)adapter->rsi_dev;
u32 len;
u32 num_blocks;
const u8 *fw;
const struct firmware *fw_entry = NULL;
u32 block_size = dev->tx_blk_size;
int status = 0;
@@ -200,6 +201,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
return status;
}
/* Copy firmware into DMA-accessible memory */
fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
if (!fw)
return -ENOMEM;
len = fw_entry->size;
if (len % 4)
@@ -210,7 +215,8 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
rsi_dbg(INIT_ZONE, "%s: Instruction size:%d\n", __func__, len);
rsi_dbg(INIT_ZONE, "%s: num blocks: %d\n", __func__, num_blocks);
status = rsi_copy_to_card(common, fw_entry->data, len, num_blocks);
status = rsi_copy_to_card(common, fw, len, num_blocks);
kfree(fw);
release_firmware(fw_entry);
return status;
}

View File

@@ -146,7 +146,10 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
return status;
}
/* Copy firmware into DMA-accessible memory */
fw = kmemdup(fw_entry->data, fw_entry->size, GFP_KERNEL);
if (!fw)
return -ENOMEM;
len = fw_entry->size;
if (len % 4)
@@ -158,6 +161,7 @@ static int rsi_load_ta_instructions(struct rsi_common *common)
rsi_dbg(INIT_ZONE, "%s: num blocks: %d\n", __func__, num_blocks);
status = rsi_copy_to_card(common, fw, len, num_blocks);
kfree(fw);
release_firmware(fw_entry);
return status;
}

View File

@@ -1015,9 +1015,12 @@ static void send_beacon_frame(struct ieee80211_hw *hw,
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct sk_buff *skb = ieee80211_beacon_get(hw, vif);
struct rtl_tcb_desc tcb_desc;
if (skb)
rtlpriv->intf_ops->adapter_tx(hw, NULL, skb, NULL);
if (skb) {
memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc));
rtlpriv->intf_ops->adapter_tx(hw, NULL, skb, &tcb_desc);
}
}
static void rtl_op_bss_info_changed(struct ieee80211_hw *hw,

View File

@@ -385,6 +385,7 @@ module_param_named(debug, rtl8723be_mod_params.debug, int, 0444);
module_param_named(ips, rtl8723be_mod_params.inactiveps, bool, 0444);
module_param_named(swlps, rtl8723be_mod_params.swctrl_lps, bool, 0444);
module_param_named(fwlps, rtl8723be_mod_params.fwctrl_lps, bool, 0444);
module_param_named(msi, rtl8723be_mod_params.msi_support, bool, 0444);
module_param_named(disable_watchdog, rtl8723be_mod_params.disable_watchdog,
bool, 0444);
MODULE_PARM_DESC(swenc, "Set to 1 for software crypto (default 0)\n");

View File

@@ -61,6 +61,12 @@ void xenvif_skb_zerocopy_prepare(struct xenvif_queue *queue,
void xenvif_skb_zerocopy_complete(struct xenvif_queue *queue)
{
atomic_dec(&queue->inflight_packets);
/* Wake the dealloc thread _after_ decrementing inflight_packets so
* that if kthread_stop() has already been called, the dealloc thread
* does not wait forever with nothing to wake it.
*/
wake_up(&queue->dealloc_wq);
}
int xenvif_schedulable(struct xenvif *vif)

View File

@@ -810,23 +810,17 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *queue,
struct sk_buff *skb,
struct xen_netif_tx_request *txp,
struct gnttab_map_grant_ref *gop)
struct gnttab_map_grant_ref *gop,
unsigned int frag_overflow,
struct sk_buff *nskb)
{
struct skb_shared_info *shinfo = skb_shinfo(skb);
skb_frag_t *frags = shinfo->frags;
u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
int start;
pending_ring_idx_t index;
unsigned int nr_slots, frag_overflow = 0;
unsigned int nr_slots;
/* At this point shinfo->nr_frags is in fact the number of
* slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
*/
if (shinfo->nr_frags > MAX_SKB_FRAGS) {
frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
BUG_ON(frag_overflow > MAX_SKB_FRAGS);
shinfo->nr_frags = MAX_SKB_FRAGS;
}
nr_slots = shinfo->nr_frags;
/* Skip first skb fragment if it is on same page as header fragment. */
@@ -841,13 +835,6 @@ static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif_queue *que
}
if (frag_overflow) {
struct sk_buff *nskb = xenvif_alloc_skb(0);
if (unlikely(nskb == NULL)) {
if (net_ratelimit())
netdev_err(queue->vif->dev,
"Can't allocate the frag_list skb.\n");
return NULL;
}
shinfo = skb_shinfo(nskb);
frags = shinfo->frags;
@@ -1175,9 +1162,10 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
unsigned *copy_ops,
unsigned *map_ops)
{
struct gnttab_map_grant_ref *gop = queue->tx_map_ops, *request_gop;
struct sk_buff *skb;
struct gnttab_map_grant_ref *gop = queue->tx_map_ops;
struct sk_buff *skb, *nskb;
int ret;
unsigned int frag_overflow;
while (skb_queue_len(&queue->tx_queue) < budget) {
struct xen_netif_tx_request txreq;
@@ -1265,6 +1253,29 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
break;
}
skb_shinfo(skb)->nr_frags = ret;
if (data_len < txreq.size)
skb_shinfo(skb)->nr_frags++;
/* At this point shinfo->nr_frags is in fact the number of
* slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
*/
frag_overflow = 0;
nskb = NULL;
if (skb_shinfo(skb)->nr_frags > MAX_SKB_FRAGS) {
frag_overflow = skb_shinfo(skb)->nr_frags - MAX_SKB_FRAGS;
BUG_ON(frag_overflow > MAX_SKB_FRAGS);
skb_shinfo(skb)->nr_frags = MAX_SKB_FRAGS;
nskb = xenvif_alloc_skb(0);
if (unlikely(nskb == NULL)) {
kfree_skb(skb);
xenvif_tx_err(queue, &txreq, idx);
if (net_ratelimit())
netdev_err(queue->vif->dev,
"Can't allocate the frag_list skb.\n");
break;
}
}
if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
struct xen_netif_extra_info *gso;
gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
@@ -1272,6 +1283,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
/* Failure in xenvif_set_skb_gso is fatal. */
kfree_skb(skb);
kfree_skb(nskb);
break;
}
}
@@ -1294,9 +1306,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
(*copy_ops)++;
skb_shinfo(skb)->nr_frags = ret;
if (data_len < txreq.size) {
skb_shinfo(skb)->nr_frags++;
frag_set_pending_idx(&skb_shinfo(skb)->frags[0],
pending_idx);
xenvif_tx_create_map_op(queue, pending_idx, &txreq, gop);
@@ -1310,13 +1320,8 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
queue->pending_cons++;
request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
if (request_gop == NULL) {
kfree_skb(skb);
xenvif_tx_err(queue, &txreq, idx);
break;
}
gop = request_gop;
gop = xenvif_get_requests(queue, skb, txfrags, gop,
frag_overflow, nskb);
__skb_queue_tail(&queue->tx_queue, skb);
@@ -1536,7 +1541,6 @@ void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
smp_wmb();
queue->dealloc_prod++;
} while (ubuf);
wake_up(&queue->dealloc_wq);
spin_unlock_irqrestore(&queue->callback_lock, flags);
if (likely(zerocopy_success))

View File

@@ -114,7 +114,7 @@ int ntb_register_device(struct ntb_dev *ntb)
ntb->dev.bus = &ntb_bus;
ntb->dev.parent = &ntb->pdev->dev;
ntb->dev.release = ntb_dev_release;
dev_set_name(&ntb->dev, pci_name(ntb->pdev));
dev_set_name(&ntb->dev, "%s", pci_name(ntb->pdev));
ntb->ctx = NULL;
ntb->ctx_ops = NULL;

View File

@@ -142,10 +142,11 @@ struct ntb_transport_qp {
void (*rx_handler)(struct ntb_transport_qp *qp, void *qp_data,
void *data, int len);
struct list_head rx_post_q;
struct list_head rx_pend_q;
struct list_head rx_free_q;
spinlock_t ntb_rx_pend_q_lock;
spinlock_t ntb_rx_free_q_lock;
/* ntb_rx_q_lock: synchronize access to rx_XXXX_q */
spinlock_t ntb_rx_q_lock;
void *rx_buff;
unsigned int rx_index;
unsigned int rx_max_entry;
@@ -211,6 +212,8 @@ struct ntb_transport_ctx {
bool link_is_up;
struct delayed_work link_work;
struct work_struct link_cleanup;
struct dentry *debugfs_node_dir;
};
enum {
@@ -436,13 +439,17 @@ static ssize_t debugfs_read(struct file *filp, char __user *ubuf, size_t count,
char *buf;
ssize_t ret, out_offset, out_count;
qp = filp->private_data;
if (!qp || !qp->link_is_up)
return 0;
out_count = 1000;
buf = kmalloc(out_count, GFP_KERNEL);
if (!buf)
return -ENOMEM;
qp = filp->private_data;
out_offset = 0;
out_offset += snprintf(buf + out_offset, out_count - out_offset,
"NTB QP stats\n");
@@ -534,6 +541,27 @@ out:
return entry;
}
static struct ntb_queue_entry *ntb_list_mv(spinlock_t *lock,
struct list_head *list,
struct list_head *to_list)
{
struct ntb_queue_entry *entry;
unsigned long flags;
spin_lock_irqsave(lock, flags);
if (list_empty(list)) {
entry = NULL;
} else {
entry = list_first_entry(list, struct ntb_queue_entry, entry);
list_move_tail(&entry->entry, to_list);
}
spin_unlock_irqrestore(lock, flags);
return entry;
}
static int ntb_transport_setup_qp_mw(struct ntb_transport_ctx *nt,
unsigned int qp_num)
{
@@ -601,13 +629,16 @@ static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw)
}
static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw,
unsigned int size)
resource_size_t size)
{
struct ntb_transport_mw *mw = &nt->mw_vec[num_mw];
struct pci_dev *pdev = nt->ndev->pdev;
unsigned int xlat_size, buff_size;
size_t xlat_size, buff_size;
int rc;
if (!size)
return -EINVAL;
xlat_size = round_up(size, mw->xlat_align_size);
buff_size = round_up(size, mw->xlat_align);
@@ -627,7 +658,7 @@ static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw,
if (!mw->virt_addr) {
mw->xlat_size = 0;
mw->buff_size = 0;
dev_err(&pdev->dev, "Unable to alloc MW buff of size %d\n",
dev_err(&pdev->dev, "Unable to alloc MW buff of size %zu\n",
buff_size);
return -ENOMEM;
}
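
ntb_set_mw() now takes a resource_size_t, rejects a zero size, and rounds the request up to the window's alignment constraints, using size_t and %zu throughout. A small sketch of the round-up arithmetic (round_up() is reimplemented here because the kernel macro is not available in userspace; the alignment values are made up):

#include <stdio.h>

/* Userspace stand-in for the kernel's round_up(). */
#define round_up(x, a) ((((x) + (a) - 1) / (a)) * (a))

int main(void)
{
        size_t size = 1000;
        size_t xlat_align = 4096, xlat_align_size = 1024;
        size_t xlat_size, buff_size;

        if (!size)
                return 1;       /* mirrors the new -EINVAL check */

        xlat_size = round_up(size, xlat_align_size);
        buff_size = round_up(size, xlat_align);
        printf("xlat_size=%zu buff_size=%zu\n", xlat_size, buff_size);
        return 0;
}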
@@ -867,6 +898,8 @@ static void ntb_qp_link_work(struct work_struct *work)
if (qp->event_handler)
qp->event_handler(qp->cb_data, qp->link_is_up);
tasklet_schedule(&qp->rxc_db_work);
} else if (nt->link_is_up)
schedule_delayed_work(&qp->link_work,
msecs_to_jiffies(NTB_LINK_DOWN_TIMEOUT));
@@ -923,12 +956,12 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
qp->tx_max_frame = min(transport_mtu, tx_size / 2);
qp->tx_max_entry = tx_size / qp->tx_max_frame;
if (nt_debugfs_dir) {
if (nt->debugfs_node_dir) {
char debugfs_name[4];
snprintf(debugfs_name, 4, "qp%d", qp_num);
qp->debugfs_dir = debugfs_create_dir(debugfs_name,
nt_debugfs_dir);
nt->debugfs_node_dir);
qp->debugfs_stats = debugfs_create_file("stats", S_IRUSR,
qp->debugfs_dir, qp,
@@ -941,10 +974,10 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
INIT_DELAYED_WORK(&qp->link_work, ntb_qp_link_work);
INIT_WORK(&qp->link_cleanup, ntb_qp_link_cleanup_work);
spin_lock_init(&qp->ntb_rx_pend_q_lock);
spin_lock_init(&qp->ntb_rx_free_q_lock);
spin_lock_init(&qp->ntb_rx_q_lock);
spin_lock_init(&qp->ntb_tx_free_q_lock);
INIT_LIST_HEAD(&qp->rx_post_q);
INIT_LIST_HEAD(&qp->rx_pend_q);
INIT_LIST_HEAD(&qp->rx_free_q);
INIT_LIST_HEAD(&qp->tx_free_q);
@@ -1031,6 +1064,12 @@ static int ntb_transport_probe(struct ntb_client *self, struct ntb_dev *ndev)
goto err2;
}
if (nt_debugfs_dir) {
nt->debugfs_node_dir =
debugfs_create_dir(pci_name(ndev->pdev),
nt_debugfs_dir);
}
for (i = 0; i < qp_count; i++) {
rc = ntb_transport_init_queue(nt, i);
if (rc)
@@ -1107,22 +1146,47 @@ static void ntb_transport_free(struct ntb_client *self, struct ntb_dev *ndev)
kfree(nt);
}
static void ntb_complete_rxc(struct ntb_transport_qp *qp)
{
struct ntb_queue_entry *entry;
void *cb_data;
unsigned int len;
unsigned long irqflags;
spin_lock_irqsave(&qp->ntb_rx_q_lock, irqflags);
while (!list_empty(&qp->rx_post_q)) {
entry = list_first_entry(&qp->rx_post_q,
struct ntb_queue_entry, entry);
if (!(entry->flags & DESC_DONE_FLAG))
break;
entry->rx_hdr->flags = 0;
iowrite32(entry->index, &qp->rx_info->entry);
cb_data = entry->cb_data;
len = entry->len;
list_move_tail(&entry->entry, &qp->rx_free_q);
spin_unlock_irqrestore(&qp->ntb_rx_q_lock, irqflags);
if (qp->rx_handler && qp->client_ready)
qp->rx_handler(qp, qp->cb_data, cb_data, len);
spin_lock_irqsave(&qp->ntb_rx_q_lock, irqflags);
}
spin_unlock_irqrestore(&qp->ntb_rx_q_lock, irqflags);
}
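
ntb_complete_rxc() releases entries from rx_post_q strictly in order, handing a buffer back to the peer only once its DESC_DONE_FLAG is set, even though DMA completions may finish out of order. An array-based userspace sketch of that "drain the contiguous done prefix" idea (all names invented):

#include <stdbool.h>
#include <stdio.h>

#define NENTRIES 4

struct rx_entry {
        bool done;
        int index;
};

/* Release only the leading run of completed entries, in ring order. */
static int drain_in_order(struct rx_entry *q, int head, int n)
{
        while (head < n && q[head].done) {
                printf("return buffer %d to the peer\n", q[head].index);
                head++;
        }
        return head;
}

int main(void)
{
        struct rx_entry q[NENTRIES] = {
                { .index = 0 }, { .index = 1 }, { .index = 2 }, { .index = 3 },
        };
        int head = 0;

        q[1].done = true;       /* a completion arrives out of order */
        head = drain_in_order(q, head, NENTRIES);       /* releases nothing yet */

        q[0].done = true;
        head = drain_in_order(q, head, NENTRIES);       /* releases 0 and 1 */

        printf("head=%d\n", head);
        return 0;
}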
static void ntb_rx_copy_callback(void *data)
{
struct ntb_queue_entry *entry = data;
struct ntb_transport_qp *qp = entry->qp;
void *cb_data = entry->cb_data;
unsigned int len = entry->len;
struct ntb_payload_header *hdr = entry->rx_hdr;
hdr->flags = 0;
entry->flags |= DESC_DONE_FLAG;
iowrite32(entry->index, &qp->rx_info->entry);
ntb_list_add(&qp->ntb_rx_free_q_lock, &entry->entry, &qp->rx_free_q);
if (qp->rx_handler && qp->client_ready)
qp->rx_handler(qp, qp->cb_data, cb_data, len);
ntb_complete_rxc(entry->qp);
}
static void ntb_memcpy_rx(struct ntb_queue_entry *entry, void *offset)
@@ -1138,19 +1202,18 @@ static void ntb_memcpy_rx(struct ntb_queue_entry *entry, void *offset)
ntb_rx_copy_callback(entry);
}
static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset,
size_t len)
static void ntb_async_rx(struct ntb_queue_entry *entry, void *offset)
{
struct dma_async_tx_descriptor *txd;
struct ntb_transport_qp *qp = entry->qp;
struct dma_chan *chan = qp->dma_chan;
struct dma_device *device;
size_t pay_off, buff_off;
size_t pay_off, buff_off, len;
struct dmaengine_unmap_data *unmap;
dma_cookie_t cookie;
void *buf = entry->buf;
entry->len = len;
len = entry->len;
if (!chan)
goto err;
@@ -1226,7 +1289,6 @@ static int ntb_process_rxc(struct ntb_transport_qp *qp)
struct ntb_payload_header *hdr;
struct ntb_queue_entry *entry;
void *offset;
int rc;
offset = qp->rx_buff + qp->rx_max_frame * qp->rx_index;
hdr = offset + qp->rx_max_frame - sizeof(struct ntb_payload_header);
@@ -1255,65 +1317,43 @@ static int ntb_process_rxc(struct ntb_transport_qp *qp)
return -EIO;
}
entry = ntb_list_rm(&qp->ntb_rx_pend_q_lock, &qp->rx_pend_q);
entry = ntb_list_mv(&qp->ntb_rx_q_lock, &qp->rx_pend_q, &qp->rx_post_q);
if (!entry) {
dev_dbg(&qp->ndev->pdev->dev, "no receive buffer\n");
qp->rx_err_no_buf++;
rc = -ENOMEM;
goto err;
return -EAGAIN;
}
entry->rx_hdr = hdr;
entry->index = qp->rx_index;
if (hdr->len > entry->len) {
dev_dbg(&qp->ndev->pdev->dev,
"receive buffer overflow! Wanted %d got %d\n",
hdr->len, entry->len);
qp->rx_err_oflow++;
rc = -EIO;
goto err;
entry->len = -EIO;
entry->flags |= DESC_DONE_FLAG;
ntb_complete_rxc(qp);
} else {
dev_dbg(&qp->ndev->pdev->dev,
"RX OK index %u ver %u size %d into buf size %d\n",
qp->rx_index, hdr->ver, hdr->len, entry->len);
qp->rx_bytes += hdr->len;
qp->rx_pkts++;
entry->len = hdr->len;
ntb_async_rx(entry, offset);
}
dev_dbg(&qp->ndev->pdev->dev,
"RX OK index %u ver %u size %d into buf size %d\n",
qp->rx_index, hdr->ver, hdr->len, entry->len);
qp->rx_bytes += hdr->len;
qp->rx_pkts++;
entry->index = qp->rx_index;
entry->rx_hdr = hdr;
ntb_async_rx(entry, offset, hdr->len);
qp->rx_index++;
qp->rx_index %= qp->rx_max_entry;
return 0;
err:
/* FIXME: if this synchronous update of the rx_index gets ahead of the
* asynchronous ntb_rx_copy_callback of the previous entry, there are three
* scenarios:
*
* 1) The peer might miss this update, but observe the update
* from the memcpy completion callback. In this case, the buffer will
* not be freed on the peer to be reused for a different packet. The
* successful rx of a later packet would clear the condition, but the
* condition could persist if several rx fail in a row.
*
* 2) The peer may observe this update before the asynchronous copy of
* prior packets is completed. The peer may overwrite the buffers of
* the prior packets before they are copied.
*
* 3) Both: the peer may observe the update, and then observe the index
* decrement by the asynchronous completion callback. Who knows what
* badness that will cause.
*/
hdr->flags = 0;
iowrite32(qp->rx_index, &qp->rx_info->entry);
return rc;
}
static void ntb_transport_rxc_db(unsigned long data)
@@ -1333,7 +1373,7 @@ static void ntb_transport_rxc_db(unsigned long data)
break;
}
if (qp->dma_chan)
if (i && qp->dma_chan)
dma_async_issue_pending(qp->dma_chan);
if (i == qp->rx_max_entry) {
@@ -1609,7 +1649,7 @@ ntb_transport_create_queue(void *data, struct device *client_dev,
goto err1;
entry->qp = qp;
ntb_list_add(&qp->ntb_rx_free_q_lock, &entry->entry,
ntb_list_add(&qp->ntb_rx_q_lock, &entry->entry,
&qp->rx_free_q);
}
@@ -1634,7 +1674,7 @@ err2:
while ((entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q)))
kfree(entry);
err1:
while ((entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q)))
while ((entry = ntb_list_rm(&qp->ntb_rx_q_lock, &qp->rx_free_q)))
kfree(entry);
if (qp->dma_chan)
dma_release_channel(qp->dma_chan);
@@ -1652,7 +1692,6 @@ EXPORT_SYMBOL_GPL(ntb_transport_create_queue);
*/
void ntb_transport_free_queue(struct ntb_transport_qp *qp)
{
struct ntb_transport_ctx *nt = qp->transport;
struct pci_dev *pdev;
struct ntb_queue_entry *entry;
u64 qp_bit;
@@ -1689,18 +1728,23 @@ void ntb_transport_free_queue(struct ntb_transport_qp *qp)
qp->tx_handler = NULL;
qp->event_handler = NULL;
while ((entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q)))
while ((entry = ntb_list_rm(&qp->ntb_rx_q_lock, &qp->rx_free_q)))
kfree(entry);
while ((entry = ntb_list_rm(&qp->ntb_rx_pend_q_lock, &qp->rx_pend_q))) {
dev_warn(&pdev->dev, "Freeing item from a non-empty queue\n");
while ((entry = ntb_list_rm(&qp->ntb_rx_q_lock, &qp->rx_pend_q))) {
dev_warn(&pdev->dev, "Freeing item from non-empty rx_pend_q\n");
kfree(entry);
}
while ((entry = ntb_list_rm(&qp->ntb_rx_q_lock, &qp->rx_post_q))) {
dev_warn(&pdev->dev, "Freeing item from non-empty rx_post_q\n");
kfree(entry);
}
while ((entry = ntb_list_rm(&qp->ntb_tx_free_q_lock, &qp->tx_free_q)))
kfree(entry);
nt->qp_bitmap_free |= qp_bit;
qp->transport->qp_bitmap_free |= qp_bit;
dev_info(&pdev->dev, "NTB Transport QP %d freed\n", qp->qp_num);
}
@@ -1724,14 +1768,14 @@ void *ntb_transport_rx_remove(struct ntb_transport_qp *qp, unsigned int *len)
if (!qp || qp->client_ready)
return NULL;
entry = ntb_list_rm(&qp->ntb_rx_pend_q_lock, &qp->rx_pend_q);
entry = ntb_list_rm(&qp->ntb_rx_q_lock, &qp->rx_pend_q);
if (!entry)
return NULL;
buf = entry->cb_data;
*len = entry->len;
ntb_list_add(&qp->ntb_rx_free_q_lock, &entry->entry, &qp->rx_free_q);
ntb_list_add(&qp->ntb_rx_q_lock, &entry->entry, &qp->rx_free_q);
return buf;
}
@@ -1757,15 +1801,18 @@ int ntb_transport_rx_enqueue(struct ntb_transport_qp *qp, void *cb, void *data,
if (!qp)
return -EINVAL;
entry = ntb_list_rm(&qp->ntb_rx_free_q_lock, &qp->rx_free_q);
entry = ntb_list_rm(&qp->ntb_rx_q_lock, &qp->rx_free_q);
if (!entry)
return -ENOMEM;
entry->cb_data = cb;
entry->buf = data;
entry->len = len;
entry->flags = 0;
ntb_list_add(&qp->ntb_rx_pend_q_lock, &entry->entry, &qp->rx_pend_q);
ntb_list_add(&qp->ntb_rx_q_lock, &entry->entry, &qp->rx_pend_q);
tasklet_schedule(&qp->rxc_db_work);
return 0;
}

Some files were not shown because too many files have changed in this diff.