commit 412145947adfca60a4b5b4893fbae82dffa25edd upstream.
There isn't much else I can do with these. I can find no hardware for any
of them and no users. The code is broken.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 39acbc12affcaa23ef1d887ba3d197baca8e6e47 upstream.
In this patch:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=16dc42e018c2868211b4928f20a957c0c216126c
a check was added for the case where another driver has already claimed the
same device on the same bus. But the returned error code was wrong: to
modprobe, -EEXIST means that _this_ driver is already installed, so it
doesn't produce the needed error message when _another_ driver tries to
register for the same device. Returning -EBUSY fixes the problem.
Signed-off-by: Stas Sergeev <stsp@aknet.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 5c36fe3d87b3f0c85894a49193c66096a3d6b26f upstream.
As found in <http://bugs.debian.org/550010>, hfsplus is using type u32
rather than sector_t for some sector number calculations.
In particular, hfsplus_get_block() does:
u32 ablock, dblock, mask;
...
map_bh(bh_result, sb, (dblock << HFSPLUS_SB(sb).fs_shift) + HFSPLUS_SB(sb).blockoffset + (iblock & mask));
I am not confident that I can find and fix all cases where a sector number
may be truncated. For now, avoid data loss by refusing to mount HFS+
volumes with more than 2^32 sectors (2TB).
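A minimal userspace sketch of the truncation (illustrative values only, not hfsplus code): the same shift done in a u32 wraps once the result exceeds 32 bits, while a 64-bit type keeps the full sector number. Refusing to mount volumes beyond 2^32 sectors simply keeps the calculations out of the range where the wrap can happen.
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* illustrative: allocation block 0x30000000 with fs_shift == 3
	 * (8 sectors per allocation block) */
	uint32_t dblock32 = 0x30000000;
	uint64_t dblock64 = 0x30000000;
	int fs_shift = 3;

	/* u32 arithmetic wraps: prints 0x80000000 instead of 0x180000000 */
	printf("u32 sector: 0x%x\n", dblock32 << fs_shift);
	printf("u64 sector: 0x%llx\n",
	       (unsigned long long)(dblock64 << fs_shift));
	return 0;
}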
[akpm@linux-foundation.org: fix 32 and 64-bit issues]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Eric Sesterhenn <snakebyte@gmx.de>
Cc: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit aefba418bfecd1985a08f50a95bd854a119f0153 upstream.
Commit ef7562b7f28319e6dd1f85dc1af87df2a7a84832 ("dpt_i2o: Fix up
copy*user") had a silly typo: EINVAL should be -EINVAL.
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ef7562b7f28319e6dd1f85dc1af87df2a7a84832 upstream.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit c68d2b1594548cda7f6dbac6a4d9d30a9b01558c upstream.
The IBM Saturn serial card has only one port. Without that fixup,
the kernel thinks it has two, which confuses userland setup and
admin tools as well.
[akpm@linux-foundation.org: fix pci-ids.h layout]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Alan Cox <alan@linux.intel.com>
Cc: Michael Reed <mreed10@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit b05ca7385a2848abdc72051f832722641daed8b0 upstream.
If migrate_prep() fails, the 'new' variable is leaked. This patch fixes it.
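A hedged sketch of the shape of the fix inside do_mbind() (the labels and surrounding code are assumptions, not the literal patch): route the migrate_prep() failure through the cleanup that drops the freshly allocated policy instead of returning straight away.
	new = mpol_new(mode, mode_flags, nmask);
	if (IS_ERR(new))
		return PTR_ERR(new);

	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
		err = migrate_prep();
		if (err)
			goto mpol_out;	/* was "return err;", leaking 'new' */
	}
	/* ... mbind_range(), migration ... */
mpol_out:
	mpol_put(new);
	return err;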
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ab8a3e14e6f8e567560f664bbd29aefb306a274e upstream.
If mbind() receives an invalid address, do_mbind leaks a page. The
following test program detects this leak.
This patch fixes it.
migrate_efault.c
=======================================
#include <numaif.h>
#include <numa.h>
#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>

static unsigned long pagesize;

static void* make_hole_mapping(void)
{
	void* addr;

	addr = mmap(NULL, pagesize*3, PROT_READ|PROT_WRITE,
		    MAP_ANON|MAP_PRIVATE, 0, 0);
	if (addr == MAP_FAILED)
		return NULL;

	/* make page populate */
	memset(addr, 0, pagesize*3);

	/* make memory hole */
	munmap(addr+pagesize, pagesize);

	return addr;
}

int main(int argc, char** argv)
{
	void* addr;
	int ch;
	int node;
	struct bitmask *nmask = numa_allocate_nodemask();
	int err;
	int node_set = 0;

	while ((ch = getopt(argc, argv, "n:")) != -1){
		switch (ch){
		case 'n':
			node = strtol(optarg, NULL, 0);
			numa_bitmask_setbit(nmask, node);
			node_set = 1;
			break;
		default:
			;
		}
	}
	argc -= optind;
	argv += optind;

	if (!node_set)
		numa_bitmask_setbit(nmask, 0);

	pagesize = getpagesize();
	addr = make_hole_mapping();

	err = mbind(addr, pagesize*3, MPOL_BIND, nmask->maskp, nmask->size, MPOL_MF_MOVE_ALL);
	if (err)
		perror("mbind ");

	return 0;
}
=======================================
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 575c9ed7798218dc923f319c0d78f0c25ca506b9 upstream.
I've not touched the other stuff here but the word "locking" comes to mind.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit df96eee679ba28c98cf722fa7c9f4286ee1ed0bd upstream.
Use unsigned integer chunk size.
The maximum chunk size is 512kB; there will never be a need for a 4GB chunk
size, so the number can be 32-bit. This fixes a compiler failure on 32-bit
systems with large block devices.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3f2412dc85260e5aae7ebb03bf50d5b1407e3083 upstream.
If we are creating a snapshot with a memory-stored exception store, fail
if the user didn't specify a chunk size. A zero chunk size would probably
crash a lot of places in the rest of the snapshot code.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Jonathan Brassow <jbrassow@redhat.com>
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 4c6fff445d7aa753957856278d4d93bcad6e2c14 upstream.
This patch locks the snapshot when returning status. It fixes a race
in which it could return an invalid number of free chunks if someone
was modifying the snapshot at the same time.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0e8c4e4e3ebb15756ddc4170a88149a2cd323cfe upstream.
Properly close the device if failing because of an invalid chunk size.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit f88fb981183e71daf40bbd84bc8251bbf7b59e19 upstream.
Multiple instances of dec_pending() can run concurrently so a lock is
needed when it saves the first error code.
I have never experienced an actual problem without the locking and just
found this during code inspection while implementing the barrier support
patch for request-based dm.
This patch adds the locking.
I've done compile, boot and basic I/O testing.
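A hedged sketch of the locking being described (the lock's name, placement, and exact condition are assumptions): serialize the "record the first error" step so concurrent dec_pending() calls can't race on io->error.
	if (error) {
		unsigned long flags;

		/* first error wins; take the lock so two completions cannot
		 * interleave the read-modify-write of io->error */
		spin_lock_irqsave(&io->md->endio_lock, flags);
		if (!io->error)
			io->error = error;
		spin_unlock_irqrestore(&io->md->endio_lock, flags);
	}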
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 03022c54b9725026c0370a810168975c387ad04c upstream.
Add missing del_gendisk() to error path when creation of workqueue fails.
Otherwise there is a resource leak and the following warning is shown:
WARNING: at fs/sysfs/dir.c:487 sysfs_add_one+0xc5/0x160()
sysfs: cannot create duplicate filename '/devices/virtual/block/dm-0'
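A hedged sketch of the error path in question (label names illustrative): the gendisk has already been added by the time workqueue creation can fail, so the unwind needs a del_gendisk() before the rest of the teardown.
	add_disk(md->disk);

	md->wq = create_singlethread_workqueue("kdmflush");
	if (!md->wq)
		goto bad_thread;

	return md;

bad_thread:
	del_gendisk(md->disk);	/* the missing cleanup */
	/* ... free queue, mempools, etc. ... */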
Signed-off-by: Zdenek Kabelac <zkabelac@redhat.com>
Reviewed-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit bca915aae803cf01fde4461fc9c093cf5a86d7fc upstream.
mips:
drivers/md/dm-log-userspace-base.c: In function `userspace_ctr':
drivers/md/dm-log-userspace-base.c:159: warning: cast from pointer to integer of different size
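The warning comes from casting a 32-bit pointer directly to a 64-bit integer. A hedged illustration of the usual idiom (the actual patch may derive the id differently, e.g. from a counter): cast through unsigned long so the integer width matches the pointer width on both 32- and 64-bit targets.
	/* lc is the per-log context pointer used to build a unique id */
	lc->luid = (u64)(unsigned long)lc;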
Cc: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 6d45d93ead319423099b82a4efd775bc0f159121 upstream.
Avoid a race causing corruption when snapshots of the same origin have
different chunk sizes by sorting the internal list of snapshots by chunk
size, largest first.
https://bugzilla.redhat.com/show_bug.cgi?id=182659
For example, let's have two snapshots with different chunk sizes. The
first snapshot (1) has small chunk size and the second snapshot (2) has
large chunk size. Let's have chunks A, B, C in these snapshots:
snapshot1: ====A==== ====B====
snapshot2: ==========C==========
(Chunk size is a power of 2. Chunks are aligned.)
A write to the origin at a position within A and C comes along. It
triggers reallocation of A, then reallocation of C and links them
together using A as the 'primary' exception.
Then another write to the origin comes along at a position within B and
C. It creates pending exception for B. C already has a reallocation in
progress and it already has a primary exception (A), so nothing is done
to it: B and C are not linked.
If the reallocation of B finishes before the reallocation of C, there is
no link to the pending exception for C, so B does not know to wait for it;
the second write is dispatched to the origin and causes data corruption in
chunk C in snapshot2.
To avoid this situation, we maintain snapshots sorted in descending
order of chunk size. This leads to a guaranteed ordering on the links
between the pending exceptions and avoids the problem explained above -
both A and B now get linked to C.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 034a186d29dbcef099e57ab23ec39440596be911 upstream.
While initializing the snapshot module, if we fail to register
the snapshot target then we must back-out the exception store
module initialization.
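A hedged sketch of the init/unwind ordering (labels illustrative): once dm_exception_store_init() has succeeded, a failure to register the snapshot target must call dm_exception_store_exit() on the way out.
	r = dm_exception_store_init();
	if (r) {
		DMERR("Failed to initialize exception stores");
		return r;
	}

	r = dm_register_target(&snapshot_target);
	if (r < 0) {
		DMERR("snapshot target register failed %d", r);
		goto bad_register_snapshot_target;
	}
	/* ... register the origin target, init caches ... */
	return 0;

bad_register_snapshot_target:
	dm_exception_store_exit();	/* the missing back-out */
	return r;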
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 5f5eeff4c93256ee93435a3bf08cf18c45e9a994 upstream.
Apparently some Toshiba Protege M300 laptops identify themselves as
"Portable PC" in DMI so we need to add that to the DMI table as
well. We need DMI data so we can automatically lower Synaptics
reporting rate from 80 to 40 pps to avoid over-taxing their
keyboard controllers.
Tested-by: Rod Davison <roddavison@gmail.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit 03717e3d12b625268848414e39beda25e4515692 ]
After successfully registering the misc device the driver iounmaps the
hardware registers and kfrees the device data structure. Ouch!
This was introduced with commit e42311d75 (riowatchdog: Convert to
pure OF driver) and went unnoticed for more than a year :)
Return success instead of dropping into the error cleanup code path.
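A hedged sketch of the probe tail (everything apart from misc_register() is illustrative): the success case has to return 0 instead of falling straight into the cleanup that unmaps the registers and frees the device data.
	err = misc_register(&riowd_miscdev);
	if (err)
		goto out_iounmap;

	dev_set_drvdata(&op->dev, p);
	return 0;		/* previously missing: fell through below */

out_iounmap:
	of_iounmap(&op->resource[0], p->regs, resource_size(&op->resource[0]));
	kfree(p);
	return err;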
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit 09d3f3f0e02c8a900d076c302c5c02227f33572d ]
Many years ago when this driver was written, it had a use, but these
days it's nothing but trouble and distributions should not enable it
in any situation.
Pretty much every console device a sparc machine could see has a
bonafide real driver, making the PROM console hack unnecessary.
If any new device shows up, we should write a driver instead of
depending upon this crutch to save us. We've been able to take care
of this even when no chip documentation exists (sunxvr500, sunxvr2500)
so there are no excuses.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit c58543c869606532c2382f027d6466f4672ea756 ]
With lots of virtual devices it's easy to generate a lot of
events and chew up the kernel IRQ stack.
Reported-by: hyl <heyongli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
revert commit 58a09b38cfcd700b796ea07ae3d2e0efbb28b561
("[libata] ahci: Restore SB600 SATA controller 64 bit DMA")
Upstream commit 58a09b38cfcd700b796ea07ae3d2e0efbb28b561 does
nearly the same thing but this patch is simplified for 2.6.31
Disables 64-bit DMA for _all_ boards, unlike 2.6.32 which adds a
whitelist. (The whitelist function requires a fairly large patch
that touches unrelated code.)
Doesn't revert the DMI part as other backported patches might need
the exported symbol.
Applies to 2.6.31.4
Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 11df6dddcbc38affb7473aad3d962baf8414a947 upstream.
The requeue_pi path doesn't use unqueue_me() (and the racy lock_ptr ==
NULL test) nor does it use the wake_list of futex_wake(), which were
the reason for commit 41890f2 (futex: Handle spurious wake up).
See the debugging discussion on LKML Message-ID: <4AD4080C.20703@us.ibm.com>
The changes in this fix to the wait_requeue_pi path were considered a
likely unnecessary, but harmless, safety net. But it turns out that
because, for unknown $@#!*( reasons, EWOULDBLOCK is defined as EAGAIN,
we built an endless loop in the code path which correctly returns
EWOULDBLOCK.
Spurious wakeups in the wait_requeue_pi code path are unlikely, so we take
the easy solution and return EWOULDBLOCK^WEAGAIN to user space and let
it deal with the spurious wakeup.
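The root of the loop is easy to see from userspace; on Linux the two constants are the same value, so the "correct" EWOULDBLOCK return was indistinguishable from the retry code:
#include <errno.h>
#include <stdio.h>

int main(void)
{
	/* on Linux both print the same number (11 on x86) */
	printf("EAGAIN      = %d\n", EAGAIN);
	printf("EWOULDBLOCK = %d\n", EWOULDBLOCK);
	return 0;
}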
Cc: Darren Hart <dvhltc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: John Stultz <johnstul@linux.vnet.ibm.com>
Cc: Dinakar Guniguntala <dino@in.ibm.com>
LKML-Reference: <4AE23C74.1090502@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 89061d3d58e1f0742139605dc6a7950aa1ecc019 upstream.
When requeuing tasks from one futex to another, the reference held
by the requeued task to the original futex location needs to be
dropped eventually.
Dropping the reference may ultimately lead to a call to
"iput_final" and subsequently call into filesystem- specific code -
which may be non-atomic.
It is therefore safer to defer this drop operation until after the
futex_hash_bucket spinlock has been dropped.
Originally-From: Helge Bahmann <hcb@chaoticmind.net>
Signed-off-by: Darren Hart <dvhltc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Dinakar Guniguntala <dino@in.ibm.com>
Cc: John Stultz <johnstul@linux.vnet.ibm.com>
Cc: Sven-Thorsten Dietrich <sdietrich@novell.com>
Cc: John Kacur <jkacur@redhat.com>
LKML-Reference: <4AD7A298.5040802@us.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 2bc872036e1c5948b5b02942810bbdd8dbdb9812 upstream.
If userspace tries to perform a requeue_pi on a non-requeue_pi waiter,
it will find the futex_q->requeue_pi_key to be NULL and OOPS.
Check for NULL in match_futex() instead of doing explicit NULL pointer
checks on all call sites. While match_futex(NULL, NULL) returning
false is a little odd, it's still correct as we expect valid key
references.
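A hedged sketch of the resulting helper (close to, but not guaranteed to be, the literal upstream hunk): the NULL test sits inside match_futex() so no call site can OOPS on a non-requeue_pi waiter's NULL requeue_pi_key.
static inline int
match_futex(union futex_key *key1, union futex_key *key2)
{
	return (key1 && key2
		&& key1->both.word == key2->both.word
		&& key1->both.ptr == key2->both.ptr
		&& key1->both.offset == key2->both.offset);
}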
Signed-off-by: Darren Hart <dvhltc@us.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@elte.hu>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Dinakar Guniguntala <dino@in.ibm.com>
CC: John Stultz <johnstul@us.ibm.com>
LKML-Reference: <4AD60687.10306@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit d58e6576b0deec6f0b9ff8450fe282da18c50883 upstream.
The futex code does not handle spurious wake up in futex_wait and
futex_wait_requeue_pi.
The code assumes that any wake up which was not caused by futex_wake /
requeue or by a timeout was caused by a signal wake up and returns one
of the syscall restart error codes.
In case of a spurious wake up the signal delivery code which deals
with the restart error codes is not invoked and we return that error
code to user space. That causes applications which actually check the
return codes to fail. Blaise reported that on preempt-rt a python test
program ran into an exception trap. -rt exposed that due to a built-in
spurious wake up accelerator :)
Solve this by checking signal_pending(current) in the wake up path and
handle the spurious wake up case w/o returning to user space.
Reported-by: Blaise Gassend <blaise@willowgarage.com>
Debugged-by: Darren Hart <dvhltc@us.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 1fdbd48c242db996107f72ae4140ffe8163e26a8 upstream.
If the Linux kernel detects a C1E-capable AMD processor (K8 RevF and
higher), it will access a certain MSR on every attempt to go to halt.
Explicitly handle this read and return 0 to let KVM run a Linux guest
with the native AMD host CPU propagated to the guest.
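A hedged sketch of the MSR-read handling (the case label is the C1E interrupt-pending MSR; its exact name and the surrounding switch are assumptions): report 0 instead of injecting a fault for the unknown MSR.
	case MSR_K8_INT_PENDING_MSG:
		/* AMD C1E: a Linux guest reads this on every halt attempt;
		 * returning 0 lets it proceed to halt normally */
		data = 0;
		break;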
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ace1546487a0fe4634e3251067f8a32cb2cdc099 upstream.
hrtimer->base can be temporarily NULL due to racing hrtimer_start.
See switch_hrtimer_base/lock_hrtimer_base.
Use hrtimer_get_remaining which is robust against it.
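A hedged sketch of the substitution (variable names illustrative): compute the remaining time through the helper, which locks the timer base before touching it, rather than dereferencing hrtimer->base directly.
	/* racy: timer->base can be NULL while switch_hrtimer_base() runs */
	/* remaining = ktime_sub(hrtimer_get_expires(timer),
	 *                       timer->base->get_time()); */

	/* safe: hrtimer_get_remaining() takes the base lock internally */
	ktime_t remaining = hrtimer_get_remaining(timer);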
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 4223a4a155f245d41c350ed9eba4fc32e965c4da upstream.
Fix a (small) memory leak in one of the error paths of the NFS mount
options parsing code.
Regression introduced in 2.6.30 by commit a67d18f (NFS: load the
rpc/rdma transport module automatically).
Reported-by: Yinghai Lu <yinghai@kernel.org>
Reported-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 6489e3262e6b188a1a009b65e8a94b7aa17645b7 upstream.
prereset doesn't bring the link online if hardreset is about to happen,
and nv_hardreset() may skip if conditions are not right, so softreset may
be entered with a non-working link if the system firmware didn't bring it
up before entering OS code, which can happen during resume.
This patch makes nv_hardreset() bring up the link if it's skipping the
reset.
This bug was reported by frodone@gmail.com in the following bug entry.
http://bugzilla.kernel.org/show_bug.cgi?id=14329
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: frodone@gmail.com
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 4f7c2874995ac48a4622755b8bd159eb2fb6d8f4 upstream.
Commit 842faa6c1a1d6faddf3377948e5cf214812c6c90 fixed error handling
during attach by not committing detected device class to dev->class
while attaching a new device. However, this change missed the PMP
class check in the configuration loop, causing a new PMP device to go
through ata_dev_configure() as if it were an ATA or ATAPI device.
As a PMP device doesn't have regular IDENTIFY data, this makes
ata_dev_configure() try to configure a PMP device using invalid
data. For the most part, it wasn't too harmful and went unnoticed, but
it ends up clearing dev->flags, which may have ATA_DFLAG_AN set by
sata_pmp_attach(). This means that SATA_PMP_FEAT_NOTIFY ends up being
disabled on PMPs, and on PMPs which honor the flag this breaks hotplug
support.
This problem was discovered and reported by Ethan Hsiao.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Ethan Hsiao <ethanhsiao@jmicron.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit f4b31db92d163df8a639f5a8c8633bdeb6e8432d upstream.
When an internal command fails, it should be failed directly without
invoking EH. In the original implementation, this was accomplished by
letting internal commands bypass failure handling in ata_qc_complete().
However, later changes added post-successful-completion handling to
that code path and the success path is no longer adequate as an internal
command failure path. One of the visible problems is that an internal
command failure due to timeout or other freeze conditions would
spuriously trigger WARN_ON_ONCE() in the success path.
This patch updates the failure path such that internal command failure
handling is contained there.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 15b812f1d0a5ca8f5efe7f5882f468af10682ca8 upstream.
As reported in
http://bugzilla.kernel.org/show_bug.cgi?id=13940
on some systems when ACPI is enabled, ACPI clears some BARs for some
devices without reason, and the kernel then needs to allocate resources
for them. It then apparently hits some undocumented resource conflict,
resulting in non-working devices.
Try to increase the alignment to get a safer range for unassigned devices.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
commit ad3960243e55320d74195fb85c975e0a8cc4466c upstream.
This patch fixes a null pointer exception in pipe_rdwr_open() which
generates the stack trace:
> Unable to handle kernel NULL pointer dereference at 0000000000000028 RIP:
> [<ffffffff802899a5>] pipe_rdwr_open+0x35/0x70
> [<ffffffff8028125c>] __dentry_open+0x13c/0x230
> [<ffffffff8028143d>] do_filp_open+0x2d/0x40
> [<ffffffff802814aa>] do_sys_open+0x5a/0x100
> [<ffffffff8021faf3>] sysenter_do_call+0x1b/0x67
The failure mode is triggered by an attempt to open an anonymous
pipe via /proc/pid/fd/* as exemplified by this script:
=============================================================
while : ; do
	{ echo y ; sleep 1 ; } | { while read ; do echo z$REPLY; done ; } &
	PID=$!
	OUT=$(ps -efl | grep 'sleep 1' | grep -v grep |
		{ read PID REST ; echo $PID; } )
	OUT="${OUT%% *}"
	DELAY=$((RANDOM * 1000 / 32768))
	usleep $((DELAY * 1000 + RANDOM % 1000 ))
	echo n > /proc/$OUT/fd/1 # Trigger defect
done
=============================================================
Note that the failure window is quite small and I could only
reliably reproduce the defect by inserting a small delay
in pipe_rdwr_open(). For example:
static int
pipe_rdwr_open(struct inode *inode, struct file *filp)
{
	msleep(100);
	mutex_lock(&inode->i_mutex);
Although the defect was observed in pipe_rdwr_open(), I think it
makes sense to replicate the change through all the pipe_*_open()
functions.
The core of the change is to verify that inode->i_pipe has not
been released before attempting to manipulate it. If inode->i_pipe
is no longer present, return ENOENT to indicate so.
The comment about potentially using atomic_t for i_pipe->readers
and i_pipe->writers has also been removed because it is no longer
relevant in this context. The inode->i_mutex lock must be used so
that inode->i_pipe can be dealt with correctly.
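A hedged sketch of the fixed open routine (very close to the change described above, but treat it as illustrative): take i_mutex, bail out with -ENOENT if i_pipe is already gone, and only then bump the reader/writer counts.
static int
pipe_rdwr_open(struct inode *inode, struct file *filp)
{
	int ret = -ENOENT;

	mutex_lock(&inode->i_mutex);
	if (inode->i_pipe) {		/* may already have been released */
		inode->i_pipe->readers++;
		inode->i_pipe->writers++;
		ret = 0;
	}
	mutex_unlock(&inode->i_mutex);

	return ret;
}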
Signed-off-by: Earl Chew <earl_chew@agilent.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit c8e33141911bf8fe87dc6c92793b9a59b2be0130 upstream.
The locking logic in this function is extremely subtle, and it broke
when we started doing potentially concurrent 'flush_to_ldisc()' calls in
commit e043e42bdb66885b3ac10d27a01ccb9972e2b0a3 ("pty: avoid forcing
'low_latency' tty flag").
The code in flush_to_ldisc() used to set 'tty->buf.head' to NULL, with
the intention that this would then cause any other concurrent calls to
not do anything (locking note: we have to drop the buf.lock over the
call to ->receive_buf that can block, which is why we can have
concurrency here at all in the first place).
It also used to set the TTY_FLUSHING bit, which would then cause any
concurrent 'tty_buffer_flush()' to not free all the tty buffers and
clear 'tty->buf.tail'. And with 'buf.head' being NULL, and 'buf.tail'
being non-NULL, new data would never touch 'buf.head'.
Does that sound a bit too subtle? It was. If another concurrent call to
'flush_to_ldisc()' were to come in, the NULL buf.head would indeed cause
it to not process the buffer list, but it would still clear TTY_FLUSHING
afterwards, making the buffer protection against 'tty_buffer_flush()' no
longer work.
So this clears it all up. We depend purely on TTY_FLUSHING for handling
re-entrancy, and stop playing games with the buffer list entirely. In
fact, the buffer list handling is now robust enough that we could
probably stop doing the whole "protect against 'tty_buffer_flush()'"
thing entirely.
However, Alan also points out that we would probably be better off
simplifying the locking even further, and just take the tty ldisc_mutex
around all the buffer flushing calls. That seems like a good idea, but
in the meantime this is a conceptually minimal fix (with the patch
itself being bigger than required just to clean the code up and make it
readable).
This fixes keyboard trouble under X:
http://bugzilla.kernel.org/show_bug.cgi?id=14388
Reported-and-tested-by: Frédéric Meunier <fredlwm@gmail.com>
Reported-and-tested-by: Boyan <btanastasov@yahoo.co.uk>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Paul Fulghum <paulkf@microgate.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit fbc44bf7177dfd61381da55405550b693943a432 upstream.
When receiving data frames, we can send them only to
the interface they belong to based on transmitting
station (this doesn't work for probe requests). Also,
don't try to handle other frames for AP_VLAN at all
since those interfaces should only receive data.
Additionally, the transmit side must check that the
station we're sending a frame to is actually on the
interface we're transmitting on, and not transmit
packets to functions that live on other interfaces,
so validate that as well.
Another bug fix is needed in sta_info.c where in the
VLAN case when adding/removing stations we overwrite
the sdata variable we still need.
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 2facba769d7f9e563cf706de709074a2d20f1bba upstream.
The address stored in the next link address is a word address but when
reading the OTP blocks, a byte address is used. Also if the blocks are
full and the last link pointer is not zero, then none of the blocks are
valid so return an error.
The algorithm is simple: valid blocks have a next address, and that
address's contents are zero.
Using the wrong address for the next link address gets arbitrary data,
obviously. In the cases seen, the first block is considered valid when it is not.
If the block has in fact been invalidated there may be old data, no data,
bad data, or partial data; there is no way of telling. Without this patch
it is possible that a device with valid OTP data is unable to work.
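A hedged sketch of the address handling (variable names are assumptions): the stored link pointer counts 16-bit words, so it must be scaled to a byte offset before the next OTP block is read.
	/* link pointer is a word address; OTP reads use byte addresses */
	next_link_addr = le16_to_cpu(link_value) * sizeof(u16);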
Signed-off-by: Jay Sternberg <jay.e.sternberg@intel.com>
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit b8430e1b82b7e514d76a88eb70a7d8831d50df1e upstream.
usb-storage: Workaround devices with bogus sense size
Some devices, such as Huawei E169, advertise more than the standard
amount of sense data, causing us to set US_FL_SANE_SENSE, assuming
they support it. However, they subsequently fail the request sense
with that size.
This works around it generically. When a sense request fails due to
a device returning an error, US_FL_SANE_SENSE was set, and that sense
request used a larger sense size, we retry with a smaller size before
giving up.
Based on an original patch by Ben Efros <ben@pc-doctor.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0af49167b1e5ba154e90d2c454bf4624ee47df80 upstream.
This fixes a panic which is triggered when the hardware "disappears" from
beneath the driver, i.e. when wireless is toggled off via Fn-F2 on various
EeePC models.
Ref. bug report http://bugzilla.kernel.org/show_bug.cgi?id=13390
panic http://bugzilla.kernel.org/attachment.cgi?id=21928
Signed-off-by: Darren Salt <linux@youmustbejoking.demon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 83db93f4de2d9ae441a491d1dc61c2204f0195de upstream.
sysfs_notify_dirent is a simple atomic operation that can be used to
alert user-space that new data can be read from a sysfs attribute.
Unfortunately it cannot currently be called from non-process context
because of its use of spin_lock which is sometimes taken with
interrupts enabled.
So change all lockers of sysfs_open_dirent_lock to disable interrupts,
thus making sysfs_notify_dirent safe to be called from non-process
context (as drivers/md does in md_safemode_timeout).
sysfs_get_open_dirent is (documented as being) only called from
process context, so it uses spin_lock_irq. Other places
use spin_lock_irqsave.
The usage for sysfs_notify_dirent in md_safemode_timeout was
introduced in 2.6.28, so this patch is suitable for that and more
recent kernels.
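A hedged sketch of sysfs_notify_dirent() after the change (field names per the sysfs code of that era; treat the body as illustrative): with the irqsave variant it becomes safe to call from timer or interrupt context such as md_safemode_timeout().
void sysfs_notify_dirent(struct sysfs_dirent *sd)
{
	struct sysfs_open_dirent *od;
	unsigned long flags;

	spin_lock_irqsave(&sysfs_open_dirent_lock, flags);

	od = sd->s_attr.open;
	if (od) {
		atomic_inc(&od->event);
		wake_up_interruptible(&od->poll);
	}

	spin_unlock_irqrestore(&sysfs_open_dirent_lock, flags);
}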
Reported-by: Joel Andres Granados <jgranado@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit d8e180dcd5bbbab9cd3ff2e779efcf70692ef541 upstream.
When process accounting is enabled, every exiting process writes a log to
the account file. In addition, every once in a while one of the exiting
processes checks whether there's enough free space for the log.
SELinux policy may or may not allow the exiting process to stat the fs.
So unsuspecting processes start generating AVC denials just because
someone enabled process accounting.
For these filesystem operations, the exiting process's credentials should
be temporarily switched to that of the process which enabled accounting,
because it's really that process which wanted to have the accounting
information logged.
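A hedged sketch of the credential switch (override_creds()/revert_creds() are real kernel APIs; exactly where they wrap the accounting work is an assumption): perform the free-space check and log write with the credentials of whoever enabled accounting.
	const struct cred *orig_cred;

	/* act as the user who turned accounting on, not the exiting task */
	orig_cred = override_creds(file->f_cred);

	/* ... free-space check / log write against the accounting file ... */

	revert_creds(orig_cred);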
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 18c4078489fe064cc0ed08be3381cf2f26657f5f upstream.
The client->driver pointer can be NULL when i2c-device probing fails
in i2c_new_device(). This patch adds the NULL checks for client->driver
and returns an error instead of blindly assuming driver availability.
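A hedged sketch of the guard (the surrounding call is illustrative): verify that the i2c core actually bound a driver before dereferencing client->driver.
	client = i2c_new_device(adapter, &info);
	if (!client || !client->driver) {
		/* probing failed -- no driver to call into */
		err = -ENODEV;
		goto fail;
	}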
Reported-by: Tim Shepard <shep@alum.mit.edu>
Cc: Jean Delvare <khali@linux-fr.org>
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 18669eabde2ff5fc446e72e043f0539059763438 upstream.
When an ACPI resource conflict is detected, error messages are already
printed by ACPI. There's no point in causing the driver core to print
more error messages, so return one of the error codes for which no
message is printed.
This fixes bug #14293:
http://bugzilla.kernel.org/show_bug.cgi?id=14293
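A hedged sketch of the pattern (driver-agnostic): when ACPI reports the conflict it has already logged it, so return -ENODEV, one of the codes for which the driver core prints nothing further.
	if (acpi_check_resource_conflict(&res))
		return -ENODEV;	/* ACPI already printed the details */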
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 6f6b35e133fe4313277b30fc1a7ea313875ea6c9 upstream.
If i2c device probing fails, then there is no driver to dereference
after calling i2c_new_device(). Stop assuming that probing will always
succeed, to avoid NULL pointer dereferences. We have easier access to
the driver anyway.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Tim Shepard <shep@alum.mit.edu>
Cc: Colin Leroy <colin@colino.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 05576a1e38e2d06dece32974c5218528d3fbc6e2 upstream.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit a825e00c98a2ee37eb2a0ad93b352e79d2bc1593 upstream.
There appears to have been a mixup in the max supported jumbo frame size
between the 82574 and 82583, which ended up disabling jumbo frames on the
82574. This patch swaps the two values to resolve the issue.
This patch fixes http://bugzilla.kernel.org/show_bug.cgi?id=14261
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
(cherry picked from commit 30efa3f76813b17445bc5a2e443ae9731518566b)
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
(cherry picked from commit 0179065b13b354cc0b940e7a632a65ec0448beff)
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|