author | Vineet Gupta <vgupta@synopsys.com> | 2013-07-25 15:45:50 -0700 |
---|---|---|
committer | Vineet Gupta <vgupta@synopsys.com> | 2013-08-30 21:42:19 +0530 |
commit | 947bf103fcd2defa3bc4b7ebc6b05d0427bcde2d (patch) | |
tree | 549bdf5c9cdd5a9d4aa320bf4fbdf88b499f1f4b /arch/arc/include/asm/mmu.h | |
parent | c60115537c96d78a884d2a4bd78839a57266d48b (diff) | |
ARC: [ASID] Track ASID allocation cycles/generations
This helps remove the asid-to-mm reverse map.
While mm->context.id contains the ASID assigned to a process, our ASID
allocator also maintained the asid_mm_map[] reverse map. In a new allocation
cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this map
to check whether the new @asid_cache value already belonged to some mm2 (from
the previous cycle). If so, it could locate that mm via the reverse map and
mark its ASID as unallocated, forcing a refresh at the next switch_mm().
However, for SMP the reverse map has to be maintained per CPU, making it
two dimensional, hence it is being removed.
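For reference, the dropped reverse-map scheme worked roughly as in the
standalone sketch below (this is not the removed kernel code itself; NUM_ASID,
NO_ASID, the struct mm layout and the omitted wrap/flush handling are
assumptions made purely for illustration):

/* sketch only: simplified from the description above, not the removed code */
#define NUM_ASID	256		/* 8-bit hardware PID space */
#define NO_ASID		(~0UL)		/* forces a refresh at next switch_mm() */

struct mm { unsigned long asid; };

static struct mm *asid_mm_map[NUM_ASID];	/* ASID -> owning mm (reverse map) */
static unsigned long asid_cache;		/* last ASID handed out (round robin) */

static void get_new_asid_old(struct mm *mm)
{
	unsigned long asid = ++asid_cache % NUM_ASID;	/* wrap/flush handling omitted */
	struct mm *prev = asid_mm_map[asid];		/* owner from the previous cycle */

	if (prev)
		prev->asid = NO_ASID;	/* invalidate, so prev refreshes at switch_mm() */

	asid_mm_map[asid] = mm;		/* on SMP this map must be per CPU, i.e. 2-D */
	mm->asid = asid;
}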
With the reverse map gone, it is NOT possible to reach the current
assignee of an ASID. So instead we track the ASID allocation
generation/cycle, and on every switch_mm() check whether the generation
of the CPU's ASID matches the generation recorded in the mm's ASID; if
not, the mm's ASID is refreshed.
(Based loosely on the arch/sh implementation)
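A minimal standalone sketch of the generation-tracking idea described above
(not the exact kernel implementation; ASID_MASK, CYCLE_MASK, FIRST_CYCLE and
the write_hw_pid() stub are names assumed for illustration):

/* sketch only: names below are illustrative, not the kernel's */
#define ASID_MASK	0xffUL			/* low 8 bits: hardware MMU PID */
#define CYCLE_MASK	(~ASID_MASK)		/* upper bits: allocation generation */
#define FIRST_CYCLE	(ASID_MASK + 1)

struct mm { unsigned long asid; };		/* 8-bit PID + generation cycle */

static unsigned long asid_cache = FIRST_CYCLE;	/* per-CPU in a real SMP kernel */

static void write_hw_pid(unsigned long pid)	/* stands in for the MMU PID reg write */
{
	(void)pid;
}

static void get_new_asid(struct mm *mm)
{
	/* mm's ASID was allocated in the current generation: still valid */
	if (!((mm->asid ^ asid_cache) & CYCLE_MASK))
		return;

	/* move to the next PID; on 8-bit wrap-around the carry bumps the
	 * generation bits, and a real kernel would flush the TLB here and
	 * skip the reserved PID 0 */
	if (!(++asid_cache & ASID_MASK))
		asid_cache++;

	mm->asid = asid_cache;
}

static void switch_mm_sketch(struct mm *next)
{
	get_new_asid(next);			/* refresh if from a stale generation */
	write_hw_pid(next->asid & ASID_MASK);	/* program only the 8-bit PID */
}

Since the generation lives in the upper bits of the same word as the 8-bit
PID, a single XOR plus mask of the cycle bits decides whether the mm's ASID
is stale, with no per-ASID bookkeeping required.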
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Diffstat (limited to 'arch/arc/include/asm/mmu.h')
-rw-r--r-- | arch/arc/include/asm/mmu.h | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h
index 1639f25e47b1..c82db8bd7270 100644
--- a/arch/arc/include/asm/mmu.h
+++ b/arch/arc/include/asm/mmu.h
@@ -48,7 +48,7 @@
 #ifndef __ASSEMBLY__
 
 typedef struct {
-	unsigned long asid;	/* Pvt Addr-Space ID for mm */
+	unsigned long asid;	/* 8 bit MMU PID + Generation cycle */
 } mm_context_t;
 
 #ifdef CONFIG_ARC_DBG_TLB_PARANOIA