[ Upstream commit 85821c906cf3563a00a3d98fa380a2581a7a5ff1 ]
As there's no point in adding a fixed fudge value (originally 5
seconds), honor the user settings only. We also remove the driver's
dead get_rport_dev_loss_tmo() callback
(qla2x00_get_rport_loss_tmo()).
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit 5f3a9a207f1fccde476dd31b4c63ead2967d934f ]
Signed-off-by: Seokmann Ju <seokmann.ju@qlogic.com>
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7b27718bdb1b70166383dec91391df5534d449ee upstream
Yesterday I tried to reactivate my old 486 box and wanted to install a
current Linux with the latest kernel on it. But it turned out that the
latest kernel does not boot because the machine crashes early in the
setup code.
After some debugging it turned out that the problem is the query_ist()
function. If the interrupt behind that function is called, the machine
simply locks up. It looks like a BIOS bug. Looking for a workaround for
this problem I wrote the attached patch. It checks for the CPUID
instruction and, if it is not implemented, does not call the speedstep
BIOS function. As far as I know, speedstep has only been available since
some Pentium at the earliest.
Alan Cox observed that it's available since the Pentium II, so cpuid
levels 4 and 5 can be excluded altogether.
H. Peter Anvin cleaned up the code some more:
> Right in concept, but I dislike the implementation (duplication of the
> CPU detect code we already have). Could you try this patch and see if
> it works for you?
With a small modification to fix a build error, the resulting kernel
boots on my machine.
Signed-off-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
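[Illustrative sketch only, not the actual patch: the classic way to test for CPUID on 486-class hardware is to try to toggle the ID bit (bit 21) in EFLAGS; only if that bit is changeable is it safe to go on to CPUID-dependent work such as the speedstep BIOS query. The 32-bit inline assembly and the query_ist_guarded() wrapper below are illustrative, not kernel functions.]

static int has_cpuid(void)
{
    unsigned long f1, f2;

    /* Try to flip the ID bit (bit 21) in EFLAGS; CPUs that predate CPUID
     * cannot change it, so the two reads come back identical. */
    asm volatile("pushfl\n\t"
                 "pushfl\n\t"
                 "popl %0\n\t"
                 "movl %0, %1\n\t"
                 "xorl %2, %0\n\t"
                 "pushl %0\n\t"
                 "popfl\n\t"
                 "pushfl\n\t"
                 "popl %0\n\t"
                 "popfl\n\t"
                 : "=&r" (f1), "=&r" (f2)
                 : "ir" (0x00200000));
    return ((f1 ^ f2) & 0x00200000) != 0;
}

static void query_ist_guarded(void)
{
    if (!has_cpuid())
        return;     /* 486 or older: skip the speedstep BIOS interrupt */
    /* query_ist();   safe to issue the SpeedStep query here */
}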
|
|
commit 7bc069c6bc4ede519a7116be1b9e149a1dbf787a upstream
The masked difference is what needs to be compared against 1, rather
than the difference of masked values (which can be negative).
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
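[Worked example with made-up values of why the two expressions differ once the counters wrap:]

#include <stdio.h>

int main(void)
{
    unsigned int head = 0x00, tail = 0xff;  /* head has wrapped past tail */
    unsigned int mask = 0xff;

    /* Difference of masked values: "negative", i.e. a huge unsigned number. */
    unsigned int sub_of_masked = (head & mask) - (tail & mask);

    /* Masked difference: wraparound handled, can be compared against 1. */
    unsigned int masked_diff = (head - tail) & mask;

    printf("sub_of_masked=%u masked_diff=%u\n", sub_of_masked, masked_diff);
    /* prints: sub_of_masked=4294967041 masked_diff=1 */
    return 0;
}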
|
|
commit 04e1e0cccade330ab3715ce59234f7e3b087e246 upstream.
Signed-off-by: Steve French <sfrench@us.ibm.com>
Cc: Eugene Teo <eteo@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Already in Linus' tree:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=b25b791b13aaa336b56c4f9bd417ff126363f80b
Fix a NULL pointer dereference that happened when calling
i2c_new_probed_device on one of the addresses for which we use byte
reads instead of quick write for detection purpose (that is: 0x30-0x37
and 0x50-0x5f).
Signed-off-by: Hans Verkuil <hverkuil@xs4all.nl>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 252815b0cfe711001eff0327872209986b36d490 upstream
Fix a range check in netfilter IP NAT for SNMP to always use a big enough size
variable that the compiler won't moan about comparing it to ULONG_MAX/8 on a
64-bit platform.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Eugene Teo <eteo@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 77332894c21165404496c56763d7df6c15c4bb09 upstream
The magic write to register 0x82 will often cause the PCI config space on
my 8168 (PCI ID 10ec:8168, revision 2, mounted in an LG P300 laptop)
to be filled with ones during driver load, thus breaking NIC
operation until reboot. If it does not happen on the first driver load it
can easily be reproduced by unloading and loading the driver a few
times.
The magic write was added long ago by this commit:
Author: François Romieu <romieu@fr.zoreil.com>
Date: Sat Jan 10 06:00:46 2004 -0500
[netdrvr r8169] Merge of changes done by Realtek to rtl8169_init_one():
- phy capability settings allows lower or equal capability as suggested
in Realtek's changes;
- I/O voodoo;
- no need to s/mdio_write/RTL8169_WRITE_GMII_REG/;
- s/rtl8169_hw_PHY_config/rtl8169_hw_phy_config/;
- rtl8169_hw_phy_config(): ad-hoc struct "phy_magic" to limit duplication
of code (yep, the u16 -> int conversions should work as expected);
- variable renames and whitespace changes ignored.
As the 8168 wasn't supported by that version this patch simply removes
the bogus write from mac versions <= RTL_GIGA_MAC_VER_06.
[The change above makes sense for the 8101/8102 too -- Ueimor]
Signed-off-by: Marcus Sundberg <marcus@ingate.com>
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Cc: Karsten Keil <kkeil@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Commit efc491814308f89d5ef6c4fe19ae4552a67d4132 upstream
radeon: misc corrections
I have a new PCI-E radeon RV380 series card (PCI device ID 5b64) that
hangs in my sparc64 boxes when the init scripts set the font. The problem
goes away if I disable acceleration.
I haven't figured out that bug yet, but along the way I found some
corrections to make based upon some auditing.
1) The RB2D_DC_FLUSH_ALL value used by the kernel fb driver
and the XORG video driver differ. I've made the kernel
match what XORG is using.
2) In radeonfb_engine_reset() we have top-level code structure
that roughly looks like:
	if (family is 300, 350, or V350)
		do this;
	else
		do that;
	...
	if (family is NOT 300, OR
	    family is NOT 350, OR
	    family is NOT V350)
		do another thing;
this last conditional makes no sense, is always true,
and obviously was likely meant to be "family is NOT
300, 350, or V350". So I've made the code match the
intent.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
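[Worked example of why the original conditional is always true; the family values are stand-ins:]

#include <stdio.h>

enum { F300, F350, FV350, FOTHER };

int main(void)
{
    for (int f = F300; f <= FOTHER; f++) {
        /* Original test: no value can equal all three constants at once,
         * so this is true for every family, including 300/350/V350. */
        int buggy = (f != F300 || f != F350 || f != FV350);

        /* Intended test: family is NOT one of 300, 350, V350. */
        int intended = !(f == F300 || f == F350 || f == FV350);

        printf("family %d: buggy=%d intended=%d\n", f, buggy, intended);
    }
    return 0;
}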
|
|
commit b6d8adf477439e7086224bc9674c6b6638780783 upstream.
Include limits.h to get a definition of PATH_MAX.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7c1fed03b9fa32d4323d5caa6a9c7dcdd7eba767 upstream
My copying of linux/init.h didn't go far enough. The definition of
__used singled out gcc minor version 3, but didn't care what the major
version was. This broke when unit-at-a-time was added and gcc started
throwing out initcalls.
This results in an early boot crash when ptrace tries to initialize a
process with an empty, uninitialized register set.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
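[Illustrative reconstruction of the kind of check described above, not the exact header contents:]

/* Keying only off the minor version means gcc 4.0-4.2 (minor < 3) take the
 * pre-3.3 branch, initcalls are not marked as used, and unit-at-a-time is
 * free to throw them away. */
#if __GNUC_MINOR__ >= 3                 /* buggy: ignores __GNUC__ */
# define __used __attribute__((__used__))
#else
# define __used                         /* nothing keeps the symbol alive */
#endif

/* Checking the full version does the right thing for gcc 3.3+ and 4.x. */
#if __GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 3)
# define __used_fixed __attribute__((__used__))
#else
# define __used_fixed
#endif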
|
|
commit 4f81c5350b44bcc501ab6f8a089b16d064b4d2f6 upstream
There are various constraints on the use of unit-at-a-time:
- i386 uses no-unit-at-a-time for pre-4.0 (not 4.3)
- x86_64 uses unit-at-a-time always
Uli reported a crash on x86_64 with gcc 4.1.2 with unit-at-a-time,
resulting in commit c0a18111e571138747a98af18b3a2124df56a0d1
Ingo reported a gcc internal error with gcc 4.3 with no-unit-at-a-time,
resulting in 22eecde2f9034764a3fd095eecfa3adfb8ec9a98
Benny Halevy is seeing extern inlines not resolved with gcc 4.3 with
no-unit-at-a-time
This patch reintroduces unit-at-a-time for gcc >= 4.0, bringing back the
possibility of Uli's crash. If that happens, we'll debug it.
I started seeing both the internal compiler errors and unresolved
inlines on Fedora 9. This patch fixes both problems, without so far
reintroducing the crash reported by Uli.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: Benny Halevy <bhalevy@panasas.com>
Cc: Adrian Bunk <bunk@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit f1ef9167ca4494a8c6d71d0031c73e9c8841eadd upstream
Fedora broke PTRACE_SYSEMU again, and UML crashes as a result when it
doesn't need to. This patch makes the PTRACE_SYSEMU check fail gracefully
and makes UML fall back to PTRACE_SYSCALL.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3d5ede6f776bdb1483bcd086f79c3bf41fed3865 upstream
We lost the marking of SIGWINCH as being OK to receive during stub
execution, causing a panic should that happen.
Cc: Benedict Verheyen <benedict.verheyen@gmail.com>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 8bfd04b974689f700bbd053ad6e66b0a95fb80c9 upstream
x86_64 defines either memcpy or __memcpy depending on the gcc version, and
it looks like UML needs to follow that in its exporting.
Cc: Gabriel C <nix.or.die@googlemail.com>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3e3b48e5198544dd90e27265a70c1a834139e025 upstream
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 40fb16a360d9c6459afee91dc793c1e3374feb94 upstream
This patch makes os_get_task_size locate the bottom of the address space,
as well as the top. This is for systems which put a lower limit on mmap
addresses. It works by manually scanning pages from zero onwards until a
valid page is found.
Because the bottom of the address space may not be zero, it's not
sufficient to assume the top of the address space is the size of the
address space. The size is the difference between the top address and
bottom address.
[jdike@addtoit.com: changed the name to reflect that this function is
supposed to return the top of the process address space, not its size and
changed the return value to reflect that. Also some minor formatting
changes]
Signed-off-by: Tom Spink <tspink@gmail.com>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 06e1e4ffbd1932e288839b3140cda6b8141eb684 upstream
Protect against the host's time going backwards (eg, ntp activity on
the host) by keeping track of the time at the last tick; if that is
greater than the current time, keep time stopped until the host catches
up.
Cc: Nix <nix@esperi.org.uk>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
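[Sketch of the idea only; the names are illustrative, not the UML functions:]

/* Remember the host time seen at the last tick and never report anything
 * older than it; if the host clock jumps backwards (e.g. ntpd stepping it),
 * time simply stands still until the host catches up again. */
static long long last_tick_ns;

long long monotonic_host_time(long long host_now_ns)
{
    if (host_now_ns < last_tick_ns)
        return last_tick_ns;    /* host went backwards: freeze time */
    last_tick_ns = host_now_ns;
    return host_now_ns;
}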
|
|
commit 296cd66f7f6e130fe08e6880ecb13c3fc615a8db upstream
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit fe2cc53ee013a4d4d0317d418e7019fe6533a5a8 upstream
Alarm delivery could be noticeably late in the !CONFIG_NOHZ case because lost
ticks weren't being taken into account. This is now treated more carefully,
with the time between ticks being calculated and the appropriate number of
ticks delivered to the timekeeping system.
Cc: Nix <nix@esperi.org.uk>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
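[Sketch of the accounting described above; the names and the HZ value are illustrative:]

#define NSEC_PER_TICK (1000000000LL / 100)   /* assuming HZ=100 for the sketch */

static long long last_delivered_ns;

/* Called from the (possibly late) timer interrupt: work out how many whole
 * tick periods have elapsed and hand each one to the timekeeping code, so
 * alarms are not delivered late just because ticks were lost. */
void deliver_pending_ticks(long long now_ns)
{
    while (now_ns - last_delivered_ns >= NSEC_PER_TICK) {
        last_delivered_ns += NSEC_PER_TICK;
        /* do_timer(1);  one tick for the timekeeping system */
    }
}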
|
|
commit 60a2988aea701a6424809a5432bf068667aac177 upstream
The top of physical memory should be below the initial process stack, not the
top of the address space, at least for as long as the stack isn't known to the
kernel VM system and appropriately reserved.
Cc: "Christopher S. Aker" <caker@theshore.net>
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit cfd28f6695d0fc047478480791a21bdd4967f98e upstream
UML's supposed nanosecond clock interacts badly with NTP when NTP
decides that the clock has drifted ahead and needs to be slowed down.
Slowing down the clock is done by decrementing the cycle-to-nanosecond
multiplier, which is 1. Decrementing that gives you 0 and time is
stopped.
This is fixed by switching to a microsecond clock, with a multiplier
of 1000.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
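[The arithmetic behind the fix; the conversion below is the standard clocksource formula, the numbers are examples:]

/* Timekeeping converts counter cycles to nanoseconds as
 *     ns = (cycles * mult) >> shift
 * With a nanosecond counter, mult is 1 (shift 0); when NTP slows the clock
 * by decrementing mult, 1 becomes 0 and time stops dead.  With a
 * microsecond counter, mult is 1000, so NTP can nudge it to 999 or 1001
 * without stopping anything. */
unsigned long long cyc2ns(unsigned long long cycles,
                          unsigned int mult, unsigned int shift)
{
    return (cycles * mult) >> shift;
}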
|
|
commit 43f5b3085fdd27c4edf535d938b2cb0ccead4f75 upstream
Reintroduce uml_kmalloc for the benefit of UML libc code. The
previous tactic of declaring __kmalloc so it could be called directly
from the libc side of the house turned out to be getting too intimate
with slab, and it doesn't work with slob.
So, the uml_kmalloc wrapper is back. It calls kmalloc or whatever
that translates into, and libc code calls it.
kfree is left alone since that still works, leaving a somewhat
inconsistent API.
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
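[A minimal sketch of the wrapper described above; the GFP flag choice is an assumption, not taken from the patch:]

#include <linux/slab.h>

/* Kernel-side helper: libc-side UML code calls this instead of reaching
 * into slab/slob internals, so it no longer matters which allocator the
 * kernel was configured with. */
void *uml_kmalloc(int size)
{
    return kmalloc(size, GFP_KERNEL);
}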
|
|
[ Upstream commit 0a4949c4414af2eb91414bcd8e2a8ac3706f7dde ]
That's the userland thread register, so we should never try to change
it like this.
Based upon glibc bug nptl/6577 and suggestions by Jakub Jelinek.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit d72609e17fd93bb2f7e0f7e1bdc70b6d20e43843 ]
Correct sparc64's implementation of FUTEX_OP_ANDN to do a
bitwise negate of the oparg parameter before applying the
AND operation. All other archs that support FUTEX_OP_ANDN
either negate oparg explicitly (frv, ia64, mips, sh, x86),
or do so indirectly by using an and-not instruction (powerpc).
Since sparc64 has and-not, I chose to use that solution.
I've not found any use of FUTEX_OP_ANDN in glibc so the
impact of this bug is probably minor. But other user-space
components may try to use it so it should still get fixed.
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
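[Worked example of the ANDN semantics; the values are arbitrary:]

#include <stdio.h>

int main(void)
{
    unsigned int oldval = 0xff, oparg = 0x0f;

    /* FUTEX_OP_ANDN means "store oldval AND NOT oparg". */
    unsigned int buggy = oldval & oparg;    /* plain AND:  0x0f, wrong */
    unsigned int fixed = oldval & ~oparg;   /* and-not:    0xf0, right */

    printf("buggy=0x%02x fixed=0x%02x\n", buggy, fixed);
    return 0;
}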
|
|
[ Upstream commit 77e2f14f71d68d05945f1d30ca55b5194d6ab1ce ]
SCTP used ip6_xmit() to send fragments after receiving an ICMP
packet-too-big message. But when the packet is sent with ip6_xmit(),
skb->local_df is not initialized, so when the skb enters ip6_fragment()
the following code will discard it.
ip6_fragment(...)
{
if (!skb->local_df) {
...
return -EMSGSIZE;
}
...
}
SCTP does the following steps:
1. send packet ip6_xmit(skb, ipfragok=0)
2. received ICMP packet too big message
3. if PMTUD_ENABLE: ip6_xmit(skb, ipfragok=1)
This patch fixes the problem by setting local_df if ipfragok is true.
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit 697f8d0348a652593d195a13dd1067d9df911a82 ]
The rationale is:
* use u32 consistently
* no need to do LCG on values from (better) get_random_bytes
* use more data from get_random_bytes for secondary seeding
* don't reduce state space on srandom32()
* enforce state variable initialization restrictions
Note: the second paper has a version of random32() with even longer period
and a version of random64() if needed.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3e8a0a559c66ee9e7468195691a56fefc3589740 upstream
Thanks to Eugene Teo for reporting this problem.
Signed-off-by: Eugene Teo <eugeneteo@kernel.sg>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 5c742b45dd5fbbb6cf74d3378341704f4b23c5e8 upstream
In the old acer_acpi, I discovered that on some of the newer AMW0 laptops
that support the WMID methods, the WMID methods don't work properly for
setting the wireless and bluetooth values.
So for the AMW0 V2 laptops, we want to use both the 'old' AMW0 and the
'new' WMID methods for setting wireless & bluetooth to guarantee we always
enable it.
This was fixed in acer_acpi some time ago, but I forgot to port the patch
over to acer-wmi when it was merged.
(Without this patch, early AMW0 V2 laptops such as the Aspire 5040 won't
work with acer-wmi, whereas they did with the old acer_acpi).
AK: fix compilation
Signed-off-by: Carlos Corbacho <carlos@strangeworlds.co.uk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 2c731afb0d4ba16018b400c75665fbdb8feb2175 upstream
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ad661334b8ae421154b121ee6ad3b56807adbf11 upstream
While looking at network named pipe support in cifs, I noticed that
David Howells' iget patch:
iget: stop CIFS from using iget() and read_inode()
broke mounts to IPC$ (the interprocess communication share) and didn't
handle the error case (when getting info on the root inode fails).
Thanks to Gunter who noted a typo in a debug line in the original
version of this patch.
CC: David Howells <dhowells@redhat.com>
CC: Gunter Kukkukk <linux@kukkukk.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 938bb03d188a1e688fb0bcae49788f540193e80a upstream
Aesthetic regards aside, commit e8e7b9eb11c34ee18bde8b7011af41938d1ad667
still leaves a bug in the error message, because it uses the unconverted
big-endian value for printk.
Fix this by using a local variable in machine byte order. The result is
correct, more readable, and also produces slightly shorter code on i386.
Signed-off-by: Petr Tesarik <ptesarik@suse.cz>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org>
Acked-by: Borislav Petkov <petkovbb@gmail.com>
[bart: __u32 -> u32]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
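[Userspace illustration of the point being made; be32toh()/htobe32() stand in for the kernel's be32_to_cpu():]

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t raw = htobe32(123456);   /* big-endian, as read from the medium */

    /* Printing the raw value is only right on a big-endian machine. */
    printf("unconverted: %u\n", (unsigned int)raw);

    /* Convert once into a local variable in machine byte order, print that. */
    uint32_t lba = be32toh(raw);
    printf("converted:   %u\n", (unsigned int)lba);   /* 123456 everywhere */
    return 0;
}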
|
|
commit 8ab19ea36c5c5340ff598e4d15fc084eb65671dc upstream
There is a slight chance for a deadlock in the estimator code. We can't call
del_timer_sync() while holding our lock, as the timer might be active and
spinning for the lock on another cpu. Work around this issue by using
try_to_del_timer_sync() and releasing the lock. We could actually delete the
timer outside of our lock, as the add and kill functions are only ever called
from userspace via [gs]etsockopt() and are serialized by a mutex, but better
make this explicit.
Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
Acked-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
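[Sketch of the pattern described above; the struct and lock names are illustrative, not the exact ipvs code:]

struct my_estimator {                 /* stand-in for the real estimator */
    struct timer_list timer;
    /* ... */
};
static DEFINE_SPINLOCK(est_lock);

/* del_timer_sync() could spin forever if the timer handler is itself
 * waiting for est_lock, so retry with try_to_del_timer_sync() and drop
 * the lock between attempts. */
static void kill_estimator_timer(struct my_estimator *est)
{
    spin_lock_bh(&est_lock);
    while (try_to_del_timer_sync(&est->timer) < 0) {
        spin_unlock_bh(&est_lock);
        cpu_relax();
        spin_lock_bh(&est_lock);
    }
    /* ... unlink est from the estimator list ... */
    spin_unlock_bh(&est_lock);
}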
|
|
commit 5ede40f87957c6ededf9284c8339722a97b9dfb6 upstream
I broke an error path with d03c21ec0be7787ff6b75dcf56c0e96209ccbfbd,
sorry about that.
The machine will crash if the i2c_attach_client() or maven_init_client()
calls fail, although nobody has yet reported this happening.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Acked-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Cc: Petr Vandrovec <VANDROVE@vc.cvut.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit a477097d9c37c1cf289c7f0257dffcfa42d50197 upstream
Halesh says:
Please find below the test case provided to test mlock.
Test Case :
===========================
#include <sys/resource.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <errno.h>
#include <stdlib.h>
int main(void)
{
	int fd, ret, i = 0;
	char *addr, *addr1 = NULL;
	unsigned int page_size;
	struct rlimit rlim;

	if (0 != geteuid())
	{
		printf("Execute this pgm as root\n");
		exit(1);
	}

	/* create a file */
	if ((fd = open("mmap_test.c", O_RDWR|O_CREAT, 0755)) == -1)
	{
		printf("cant create test file\n");
		exit(1);
	}

	page_size = sysconf(_SC_PAGE_SIZE);

	/* set the MEMLOCK limit */
	rlim.rlim_cur = 2000;
	rlim.rlim_max = 2000;
	if ((ret = setrlimit(RLIMIT_MEMLOCK, &rlim)) != 0)
	{
		printf("Cant change limit values\n");
		exit(1);
	}

	addr = 0;
	while (1)
	{
		/* map a page into memory each time */
		if ((addr = (char *) mmap(addr, page_size, PROT_READ |
			PROT_WRITE, MAP_SHARED, fd, 0)) == MAP_FAILED)
		{
			printf("cant do mmap on file\n");
			exit(1);
		}
		if (0 == i)
			addr1 = addr;
		i++;
		errno = 0;
		/* lock the mapped memory pagewise */
		if ((ret = mlock((char *)addr, 1500)) == -1)
		{
			printf("errno value is %d\n", errno);
			printf("cant lock maped region\n");
			exit(1);
		}
		addr = addr + page_size;
	}
}
======================================================
This testcase results in an mlock() failure with errno 14, that is EFAULT,
but it is nowhere specified that mlock() will return EFAULT. When I
tested the same on older kernels like 2.6.18, I got the correct result,
i.e. errno 12 (ENOMEM).
I think that in the mlock(2) source code, setting errno to ENOMEM has been
missed in do_mlock() on mlock_fixup() failure.
SUSv3 requires the following behavior from mlock(2).
[ENOMEM]
Some or all of the address range specified by the addr and
len arguments does not correspond to valid mapped pages
in the address space of the process.
[EAGAIN]
Some or all of the memory identified by the operation could not
be locked when the call was made.
This rule isn't so nice and is slightly strange, but many people think
POSIX/SUS compliance is important.
Reported-by: Halesh Sadashiv <halesh.sadashiv@ap.sony.com>
Tested-by: Halesh Sadashiv <halesh.sadashiv@ap.sony.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 54da1174922cddd4be83d5a364b2e0fdd693f513 upstream
do_schedule_next_timer() sets info->si_overrun = timr->it_overrun_last,
which discards the already accumulated overruns.
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Mark McLoughlin <markmc@redhat.com>
Cc: Oliver Pinter <oliver.pntr@gmail.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ba661292a2bc6ddd305a212b0526e5dc22195fe7 upstream
The bug was reported and analysed by Mark McLoughlin <markmc@redhat.com>,
the patch is based on his and Roland's suggestions.
posix_timer_event() always rewrites the pre-allocated siginfo before sending
the signal. Most of the written info is the same all the time, but memset(0)
is very wrong. If ->sigq is queued we can race with collect_signal() which
can fail to find this siginfo looking at .si_signo, or copy_siginfo() can
copy the wrong .si_code/si_tid/etc.
In short, sys_timer_settime() can in fact stop the active timer, or the user
can receive the siginfo with the wrong .si_xxx values.
Move "memset(->info, 0)" from posix_timer_event() to alloc_posix_timer(),
change send_sigqueue() to set .si_overrun = 0 when ->sigq is not queued.
It would be nice to move the whole sigq->info initialization from send to
create path, but this is not easy to do without uglifying timer_create()
further.
As Roland rightly pointed out, we need more cleanups/fixes here, see the
"FIXME" comment in the patch. Hopefully this patch makes sense anyway, and
it can mask the worst implications.
Reported-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Mark McLoughlin <markmc@redhat.com>
Cc: Oliver Pinter <oliver.pntr@gmail.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 969830b2fedf8336c41d6195f49d250b1e166ff8 upstream
Some chips appear to have the 2D engine hang during screen redraw,
typically in a sequence of copyarea operations. This appears to be
solved by adding a flush of the engine destination pixel cache
and waiting for the engine to be idle before issuing the accel
operation. The performance impact seems to be fairly small.
Here is a trace on an RV370 (PCI device ID 0x5b64), it records the
RBBM_STATUS register, then the source x/y, destination x/y, and
width/height used for the copy:
----------------------------------------
radeonfb_prim_copyarea: STATUS[00000140] src[210:70] dst[210:60] wh[a0:10]
radeonfb_prim_copyarea: STATUS[00000140] src[2b8:70] dst[2b8:60] wh[88:10]
radeonfb_prim_copyarea: STATUS[00000140] src[348:70] dst[348:60] wh[40:10]
radeonfb_prim_copyarea: STATUS[80020140] src[390:70] dst[390:60] wh[88:10]
radeonfb_prim_copyarea: STATUS[8002613f] src[40:80] dst[40:70] wh[28:10]
radeonfb_prim_copyarea: STATUS[80026139] src[a8:80] dst[a8:70] wh[38:10]
radeonfb_prim_copyarea: STATUS[80026133] src[e8:80] dst[e8:70] wh[80:10]
radeonfb_prim_copyarea: STATUS[8002612d] src[170:80] dst[170:70] wh[30:10]
radeonfb_prim_copyarea: STATUS[80026127] src[1a8:80] dst[1a8:70] wh[8:10]
radeonfb_prim_copyarea: STATUS[80026121] src[1b8:80] dst[1b8:70] wh[88:10]
radeonfb_prim_copyarea: STATUS[8002611b] src[248:80] dst[248:70] wh[68:10]
----------------------------------------
When things are going fine the copies complete before the next ROP is
even issued, but all of a sudden the 2D unit becomes active (bit 17 in
RBBM_STATUS) and the FIFO retry (bit 13) and FIFO pipeline busy (bit
14) are set as well. The FIFO begins to backup until it becomes full.
What happens next is the radeon_fifo_wait() times out, and we access
the chip illegally leading to a bus error which usually wedges the
box. None of this makes it to the console screen, of course :-)
radeon_fifo_wait() should be modified to reset the accelerator when
this timeout happens instead of programming the chip anyway.
----------------------------------------
radeonfb: FIFO Timeout !
ERROR(0): Cheetah error trap taken afsr[0010080005000000] afar[000007f900800e40] TL1(0)
ERROR(0): TPC[595114] TNPC[595118] O7[459788] TSTATE[11009601]
ERROR(0): TPC<radeonfb_copyarea+0xfc/0x248>
ERROR(0): M_SYND(0), E_SYND(0), Privileged
ERROR(0): Highest priority error (0000080000000000) "Bus error response from system bus"
ERROR(0): D-cache idx[0] tag[0000000000000000] utag[0000000000000000] stag[0000000000000000]
ERROR(0): D-cache data0[0000000000000000] data1[0000000000000000] data2[0000000000000000] data3[0000000000000000]
ERROR(0): I-cache idx[0] tag[0000000000000000] utag[0000000000000000] stag[0000000000000000] u[0000000000000000] l[00\
ERROR(0): I-cache INSN0[0000000000000000] INSN1[0000000000000000] INSN2[0000000000000000] INSN3[0000000000000000]
ERROR(0): I-cache INSN4[0000000000000000] INSN5[0000000000000000] INSN6[0000000000000000] INSN7[0000000000000000]
ERROR(0): E-cache idx[800e40] tag[000000000e049f4c]
ERROR(0): E-cache data0[fffff8127d300180] data1[00000000004b5384] data2[0000000000000000] data3[0000000000000000]
Kernel panic - not syncing: Irrecoverable deferred error trap.
----------------------------------------
Another quirk is that these copyarea calls will not happen until the
first drivers/char/vt.c:redraw_screen() occurs. This will only happen
if you 1) VC switch or 2) run "consolechars" or 3) unblank the screen.
This seems to happen because until a redraw_screen() the screen scrolling
method used by fbcon is not finalized yet. I've seen this with other fb
drivers too.
So if all you do is boot straight into X you will never see this bug on
the relevant chips.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
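[Sketch of the workaround using radeonfb-style register macros; treat the exact bit masks as illustrative:]

/* Before queueing another accel operation: flush the 2D destination pixel
 * cache, then wait for the GUI engine to go idle instead of programming
 * the chip while it is still busy. */
static void engine_flush_and_idle(struct radeonfb_info *rinfo)
{
    int i;

    OUTREG(RB2D_DSTCACHE_CTLSTAT,
           INREG(RB2D_DSTCACHE_CTLSTAT) | RB2D_DC_FLUSH_ALL);

    for (i = 0; i < 2000000; i++) {
        if (!(INREG(RBBM_STATUS) & (1U << 31)))   /* GUI_ACTIVE clear */
            return;
    }
    /* Timed out: reset the accelerator rather than touching it anyway. */
}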
|
|
commit 32194450330be327f3b25bf6b66298bd122599e9 upstream
In relay's current read implementation, if the buffer is completely full
but hasn't triggered the buffer-full condition (i.e. the last write
didn't cross the subbuffer boundary) and the last subbuffer is exactly
full, the subbuffer accounting code erroneously finds nothing available.
This patch fixes the problem.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ad337591f4fd20de6a0ca03d6715267a5c1d2b16 upstream
It seems cdrwtool in the udftools has been unusable on "modern" kernels
for some time. A Google search reveals many people with the same issue
but no solution (cdrwtool fails to format the disk). After spending some
time tracking down the issue, it comes down to the following:
The udftools still use the older CDROM_SEND_PACKET interface to send
things like FORMAT_UNIT through to the drive. They should really be
updated, but that's another story. Since most distros are using libata
now, the cd or dvd burner appears as a SCSI device, and we wind up in
block/scsi_ioctl.c. Here, the code tries to take the "struct
cdrom_generic_command" and translate it and stuff it into a "struct
sg_io_hdr" structure so it can pass it to the modern sg_io() routine
instead. Unfortunately, there is one error, or rather an omission in the
translation. The timeout that is passed in via the "struct
cdrom_generic_command" is in HZ=100 units, and this is modified and
correctly converted to jiffies by use of clock_t_to_jiffies(). However,
a little further down, this cgc.timeout value in jiffies is simply
copied into the sg_io_hdr timeout, which should be in milliseconds.
Since most modern x86 kernels seem to be getting built with HZ=250, the
timeout that is passed to sg_io and eventually converted to the
timeout_per_command member of the scsi_cmnd structure is now four times
too small. Since cdrwtool tries to set the timeout to one hour for the
FORMAT_UNIT command, and it takes about 20 minutes to format a 4x CDRW,
the SCSI error-handler kicks in after the FORMAT_UNIT completes because
it took longer than the incorrectly-calculated timeout.
[jejb: fix up whitespace]
Signed-off-by: Tim Wright <timw@splhi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
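[Worked example of the unit mix-up; the one-hour timeout and HZ=250 follow the description above:]

#include <stdio.h>

int main(void)
{
    const long USER_HZ = 100;   /* units of cgc.timeout from userspace */
    const long HZ = 250;        /* kernel tick rate in this example */

    long timeout_clock_t = 3600 * USER_HZ;                  /* one hour */
    long timeout_jiffies = timeout_clock_t * HZ / USER_HZ;  /* clock_t_to_jiffies() */

    long buggy_ms = timeout_jiffies;                  /* bug: jiffies copied as ms */
    long fixed_ms = timeout_jiffies * 1000 / HZ;      /* jiffies_to_msecs() */

    printf("requested %ld s, buggy %ld s, fixed %ld s\n",
           3600L, buggy_ms / 1000, fixed_ms / 1000);  /* 3600, 900, 3600 */
    return 0;
}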
|
|
commit dd07428b44944b42f699408fe31af47977f1e733 upstream
Add PCI device IDs for new adapter models.
Signed-off-by: HighPoint Linux Team <linux@highpoint-tech.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e8bac9e0647dd04c83fd0bfe7cdfe2f6dfb100d0 upstream
The class_device->device conversion is causing an oops in revalidate
because it's assuming that the device_for_each_child iterator will only
return struct scsi_device children. The conversion made all former
class_devices children of the device as well, so this assumption is
broken. Fix it.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
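[Sketch of the kind of check the fix needs; the callback name is illustrative, scsi_is_sdev_device() is the existing predicate for "this struct device is a scsi_device":]

static int revalidate_one_child(struct device *dev, void *data)
{
    /* Former class_devices are now children of the target too; only act
     * on real SCSI devices. */
    if (!scsi_is_sdev_device(dev))
        return 0;

    /* ... revalidate to_scsi_device(dev) ... */
    return 0;
}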
|
|
commit 671a99c8eb2f1dde08ac5538d8cd912047c61ddf upstream
There are a few kerneloops.org reports like this one:
http://www.kerneloops.org/search.php?search=ses_match_to_enclosure
That seems to imply we're running off the end of the VPD inquiry data
(although at 512 bytes, it should be long enough for just about
anything). We should be using correctly sized buffers anyway, so put
those in and hope this oops goes away.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit a00c3cadc2bf50b3c925acdb3d0e5789b1650498 upstream
The patch adds support for Luminary Micro Stellaris Evaluation/Development
Kits (FTDI 2232C based).
The PIDs were missing.
Successfully tested with a Stellaris LM3S8962 Evaluation kit.
Signed-off-by: Frederik Kriewitz <frederik@kriewitz.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit b5894a500127fce1db1309db5f9ca8b77a2ac266 upstream
USB product id registration for the ELV HS485 USB adapter (www.elv.de) to
their home automation bus system. Applies to 2.6.26.
Signed-off-by: Andre Schenk <andre@melior.s.bawue.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 8c809681ba0289afd0ed7bbb63679a0568dd441d upstream
USB ID 4348:5523 is handled by the ch341 driver. Remove it from the
pl2303 driver.
Reverts 002e8f2c80c6be76bb312940bc278fc10b2b2487.
Signed-off-by: Tollef Fog Heen <tfheen@err.no>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0282b7f2a874e72c18fcd5a112ccf67f71ba7f5c upstream
This patch (as1121) fixes a bug in the USB serial core. When a device
is unregistered, the core will give back its minors -- even if the
device hasn't been assigned any!
The patch reserves the highest minor value (255) to mean that no minor
was assigned. It also removes some dead code and does a small style
fixup.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
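[Sketch of the reservation described above; the constant name follows the description and may not match the source:]

#define SERIAL_TTY_NO_MINOR 255   /* reserved value: "no minor was assigned" */

static void release_minors(struct usb_serial *serial)
{
    if (serial->minor == SERIAL_TTY_NO_MINOR)
        return;               /* device never got any minors; nothing to give back */
    /* ... return the minor range to the pool ... */
}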
|
|
commit 368ee6469c327364ea10082a348f91c1f5ba47f7 upstream
This patch (as1115) adds unusual_devs entries with the IGNORE_RESIDUE
flag for the iRiver T10 and the Simple Tech/Datafab CF+SM card
reader. Apparently these devices provide reasonable residue values
for READ and WRITE operations, but not for others like INQUIRY or READ
CAPACITY.
This fixes the iRiver T10 problem reported in Bugzilla #11125.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit b9a097f26e55968cbc52e30a4a2e73d32d7604ce upstream
usb-storage: quirk around v1.11 firmware on Nikon D40
https://bugzilla.redhat.com/show_bug.cgi?id=454028
Just as in earlier firmware versions, we need to perform this
quirk for the latest version too.
Speculatively do the entry for the D80 too, as they seem to
have the same firmware problems historically.
Signed-off-by: Dave Jones <davej@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|