author     Eric Sandeen <sandeen@redhat.com>  2007-01-05 16:36:36 -0800
committer  Linus Torvalds <torvalds@woody.osdl.org>  2007-01-05 23:55:23 -0800
commit     be6aab0e9fa6d3c6d75aa1e38ac972d8b4ee82b8
tree       6601373d683326f034fcce292953673a522db111
parent     2723f9603a8f8bb2cd8c7b581f7c94b8d75e3837
[PATCH] fix memory corruption from misinterpreted bad_inode_ops return values
CVE-2006-5753 covers a case where an inode can be marked bad, switching
its ops to bad_inode_ops, which are all wired up as:
static int return_EIO(void)
{
	return -EIO;
}

#define EIO_ERROR ((void *) (return_EIO))

static struct inode_operations bad_inode_ops =
{
	.create = EIO_ERROR,
	...etc...
The problem here is that the (void *) cast hides the type mismatch, so the
32-bit int return value is never promoted to the caller's wider return
type: for ops such as listxattr, which return more than 32 bits, -EIO
comes back as a large positive 64-bit number, i.e. 0x00000000fffffffb
instead of the sign-extended 0xfffffffffffffffb.
This goes particularly badly when that return value is then used as the
number of bytes to copy into, say, a user's buffer...
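For illustration, a minimal user-space sketch of the failure mode
(hypothetical stand-alone code, not part of the patch; calling through a
mismatched function-pointer type is undefined behavior in ISO C, so this
only shows what typically happens under the x86-64 LP64 ABI):

#include <stdio.h>
#include <errno.h>
#include <sys/types.h>

/* The callee returns a 32-bit int... */
static int return_EIO(void)
{
	return -EIO;
}

int main(void)
{
	/* ...but the caller invokes it expecting a 64-bit ssize_t,
	   mimicking the (void *) cast in bad_inode_ops.  The upper
	   32 bits of the result are not sign-extended, so -EIO
	   (0xfffffffb) typically comes back as 0x00000000fffffffb. */
	ssize_t (*listxattr)(void) = (ssize_t (*)(void)) (void *) return_EIO;
	ssize_t ret = listxattr();

	if (ret > 0)	/* a negative errno mistaken for a byte count */
		printf("would copy %zd bytes into the buffer!\n", ret);
	return 0;
}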
I originally had coded up the fix by creating a return_EIO_<TYPE> macro
for each return type, like this:
static int return_EIO_int(void)
{
	return -EIO;
}

#define EIO_ERROR_INT ((void *) (return_EIO_int))

static struct inode_operations bad_inode_ops =
{
	.create = EIO_ERROR_INT,
	...etc...
but Al felt that it was probably better to create an EIO-returner for each
actual op signature (matching the whole signature, not just the return
type, avoids calling through an incompatible function-pointer type at
all).  Since so few ops share a signature, I just went ahead and created
an EIO function for each individual file and inode op that returns a
value.
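As a sketch of the resulting shape (prototypes follow the 2.6.20-era VFS
API and are illustrative; the actual patch in fs/bad_inode.c defines the
complete set):

static int bad_inode_create(struct inode *dir, struct dentry *dentry,
			    int mode, struct nameidata *nd)
{
	return -EIO;
}

static ssize_t bad_inode_listxattr(struct dentry *dentry, char *buffer,
				   size_t buffer_size)
{
	return -EIO;	/* now a properly typed, sign-extended ssize_t */
}

static struct inode_operations bad_inode_ops =
{
	.create		= bad_inode_create,
	.listxattr	= bad_inode_listxattr,
	/* ...one correctly typed stub per remaining op... */
};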
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>