		Cache and TLB Flushing
		     Under Linux

	    David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and what side effect is expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension, in that you just extend the
definition such that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/tlb flushing must be inefficient; this is in
fact an area where many optimizations are possible.  For example,
if it can be proven that a user address space has never executed
on a cpu (see mm_cpumask()), one need not perform a flush
for this address space on that cpu.
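
As an illustration of that optimization, here is a minimal sketch (not
taken from any particular port) of an SMP mm flush that consults
mm_cpumask(); local_flush_tlb_mm() stands in for whatever uniprocessor
flush primitive the port provides:

	#include <linux/smp.h>
	#include <linux/mm_types.h>

	/* Runs on each targeted cpu. */
	static void ipi_flush_tlb_mm(void *info)
	{
		local_flush_tlb_mm((struct mm_struct *)info);
	}

	void smp_flush_tlb_mm(struct mm_struct *mm)
	{
		/* Only cpus set in mm_cpumask(mm) can hold stale
		 * translations for 'mm', so the cross call is sent
		 * to that mask rather than to every online cpu.
		 */
		on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
	}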

First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables, meaning that if the software page tables change, it is
possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

	The most severe flush of all.  After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the TLB.  After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	exit and exec.
3) void flush_tlb_range(struct vm_area_struct *vma,
			unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'vma->vm_mm' in the range
	'start' to 'end-1' will be visible to the cpu.  That is, after
	running, there will be no entries in the TLB for 'mm' for
	virtual addresses in the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.  A sketch of that per-page fallback follows.
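
	/* Sketch only: a port with no block-invalidate primitive
	 * could fall back to exactly that page-at-a-time loop.  A
	 * real port would use a ranged or context-tagged invalidate
	 * instruction when it has one.
	 */
	void flush_tlb_range(struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
	{
		unsigned long addr;

		for (addr = start; addr < end; addr += PAGE_SIZE)
			flush_tlb_page(vma, addr);
	}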
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process, the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'addr' will be visible to the cpu.  That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'addr'.

	This is used primarily during fault processing.
5) void update_mmu_cache(struct vm_area_struct *vma,
			 unsigned long address, pte_t *ptep)

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	now exists at virtual address "address" for address space
	"vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.
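
	A sketch of that pre-load idea for a software-managed TLB;
	tlb_install_entry() is a hypothetical wrapper around the
	port's TLB-write instruction, not an interface this document
	defines:

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t *ptep)
	{
		pte_t pte = *ptep;

		/* The fault handler just made this translation
		 * valid, so install it in the TLB now and avoid an
		 * immediate TLB-miss trap when the faulting
		 * instruction restarts.
		 */
		if (pte_present(pte))
			tlb_install_entry(address, pte);  /* hypothetical */
	}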

6) void tlb_migrate_finish(struct mm_struct *mm)

	This interface is called at the end of an explicit
	process migration.  This interface provides a hook
	to allow a platform to update TLB or context-specific
	information for the address space.

	The ia64 sn2 platform is one example of a platform
	that uses this interface.

Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

1) flush_cache_mm(mm);
   change_all_page_tables_of(mm);
   flush_tlb_mm(mm);

2) flush_cache_range(vma, start, end);
   change_range_of_page_tables(mm, start, end);
   flush_tlb_range(vma, start, end);

3) flush_cache_page(vma, addr, pfn);
   set_pte(pte_pointer, new_pte_val);
   flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.
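
On such a cpu the usual pattern, sketched here, is for the port's
asm/cacheflush.h to define the routines described below away entirely:

	#define flush_cache_mm(mm)			do { } while (0)
	#define flush_cache_dup_mm(mm)			do { } while (0)
	#define flush_cache_range(vma, start, end)	do { } while (0)
	#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)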

Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during exit and exec.

2) void flush_cache_dup_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during fork.

	This option is separate from flush_cache_mm to allow some
	optimizations for VIPT caches.
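
	A port with no fork-time optimization to make can simply
	reuse flush_cache_mm(), as this sketch of the common default
	shows:

	#define flush_cache_dup_mm(mm)	flush_cache_mm(mm)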

3) void flush_cache_range(struct vm_area_struct *vma,
			  unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'vma->vm_mm' for virtual addresses in
	the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.

4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process, the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	The 'pfn' indicates the physical page frame (shift this value
	left by PAGE_SHIFT to get the physical address) that 'addr'
	translates to.  It is this mapping which should be removed from
	the cache.

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'addr' which translates
	to 'pfn'.

	This is used primarily during fault processing.

5) void flush_cache_kmaps(void)

	This routine need only be implemented if the platform utilizes
	highmem.  It will be called right before all of the kmaps
	are flushed.

	After running, there will be no entries in the cache for
	the kernel virtual address range PKMAP_ADDR(0) to
	PKMAP_ADDR(LAST_PKMAP).

	This routine should be implemented in asm/highmem.h

6) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

	Here in these two interfaces we are flushing a specific range
	of (kernel) virtual addresses from the cache.  After running,
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

	The first of these two routines is invoked after map_vm_area()
	has installed the page table entries.  The second is invoked
	before unmap_kernel_range() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

NOTE: This does not fix shared mmaps, check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
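
As a worked example (the cache geometry here is assumed purely for
illustration, not taken from any real port): a 16KB virtually
indexed, direct-mapped D-cache with 4KB pages has four page "colors",
so such a port would define:

	/* asm/shmparam.h -- assumed 16KB virtually indexed D-cache,
	 * 4KB pages: shared attaches must be 16KB aligned.
	 */
	#define SHMLBA	(4 * PAGE_SIZE)

	/* Two virtual addresses can alias in such a cache only when
	 * these two bits differ.
	 */
	#define CACHE_COLOR(vaddr)	(((vaddr) >> PAGE_SHIFT) & 3)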

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
  void clear_user_page(void *to, unsigned long addr, struct page *page)

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64
	for example, uses this technique.

	The 'addr' parameter tells the virtual address where the
	user will ultimately have this page mapped, and the 'page'
	parameter gives a pointer to the struct page of the target.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.
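
	In that no-aliasing case the common pattern, sketched here,
	reduces them to the plain whole-page primitives:

	#define clear_user_page(to, vaddr, page)	clear_page(to)
	#define copy_user_page(to, from, vaddr, page)	copy_page(to, from)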

  void flush_dcache_page(struct page *page)

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	NOTE: This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important: if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is, first at flush_dcache_page() time, if
	page->mapping->i_mmap is an empty tree and ->i_mmap_nonlinear
	an empty list, just mark the architecture private page flag bit.
	Later, in update_mmu_cache(), a check is made of this flag bit,
	and if set the flush is done and the flag bit is cleared.

	IMPORTANT NOTE: It is often important, if you defer the flush,
			that the actual flush occurs on the same CPU
			as did the cpu stores into the page to make it
			dirty.  Again, see sparc64 for examples of how
			to deal with this.
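
	Compressed into a sketch (the real sparc64 code is larger and
	handles more cases; __flush_dcache_page_local() is a
	hypothetical stand-in for the port's per-cpu flush
	primitive), the deferral looks roughly like:

	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		if (mapping && !mapping_mapped(mapping)) {
			/* No user mappings yet: just remember that
			 * the kernel's alias of this page is dirty.
			 */
			set_bit(PG_arch_1, &page->flags);
			return;
		}
		__flush_dcache_page_local(page);	/* hypothetical */
	}

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t *ptep)
	{
		struct page *page = pte_page(*ptep);

		/* A user mapping is being established; perform the
		 * flush that flush_dcache_page() deferred earlier.
		 */
		if (test_and_clear_bit(PG_arch_1, &page->flags))
			__flush_dcache_page_local(page);
	}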
  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
			 unsigned long user_vaddr,
			 void *dst, void *src, int len)
  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
			   unsigned long user_vaddr,
			   void *dst, void *src, int len)

	When the kernel needs to copy arbitrary data in and out
	of arbitrary user pages (f.e. for ptrace()) it will use
	these two routines.

	Any necessary cache flushing or other coherency operations
	that need to occur should happen here.  If the processor's
	instruction cache does not snoop cpu stores, it is very
	likely that you will need to flush the instruction cache
	for copy_to_user_page().
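
	A sketch of copy_to_user_page() for a cpu whose I-cache does
	not snoop stores; flush_icache_range() is the interface
	described further below, and the I-cache flush is only needed
	when the region can be executed from:

	void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
			       unsigned long user_vaddr,
			       void *dst, void *src, int len)
	{
		memcpy(dst, src, len);
		/* A port with an aliasing D-cache would also write
		 * the new data back out of the kernel alias here.
		 */
		if (vma->vm_flags & VM_EXEC)
			flush_icache_range((unsigned long)dst,
					   (unsigned long)dst + len);
	}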

  void flush_anon_page(struct vm_area_struct *vma, struct page *page,
		       unsigned long vmaddr)

	When the kernel needs to access the contents of an anonymous
	page, it calls this function (currently only
	get_user_pages()).  Note: flush_dcache_page() deliberately
	doesn't work for an anonymous page.  The default
	implementation is a nop (and should remain so for all coherent
	architectures).  For incoherent architectures, it should flush
	the cache of the page at vmaddr.

  void flush_kernel_dcache_page(struct page *page)

	When the kernel needs to modify a user page it has obtained
	with kmap, it calls this function after all modifications are
	complete (but before kunmapping it) to bring the underlying
	page up to date.  It is assumed here that the user has no
	incoherent cached copies (i.e. the original page was obtained
	from a mechanism like get_user_pages()).  The default
	implementation is a nop and should remain so on all coherent
	architectures.  On incoherent architectures, this should flush
	the kernel cache for page (using page_address(page)).

  void flush_icache_range(unsigned long start, unsigned long end)

	When the kernel stores into addresses that it will execute
	out of (eg when loading modules), this function is called.

	If the icache does not snoop stores then this routine will need
	to flush it.
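
	Typical usage, as a sketch: any code patcher stores the new
	instructions and then calls this interface before anything
	may execute them (install_insns() is just an illustrative
	name):

	void install_insns(void *dst, const void *src, size_t len)
	{
		memcpy(dst, src, len);			/* store new code */
		flush_icache_range((unsigned long)dst,	/* then expose it */
				   (unsigned long)dst + len);
	}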

  void flush_icache_page(struct vm_area_struct *vma, struct page *page)

	All the functionality of flush_icache_page can be implemented in
	flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
	remove this interface completely.

The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel.  Such aliases are set up by use of the
vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases.  This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency.  It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.

  void flush_kernel_vmap_range(void *vaddr, int size)

	flushes the kernel cache for a given virtual address range in
	the vmap area.  This is to make sure that any data the kernel
	modified in the vmap range is made visible to the physical
	page.  The design is to make this area safe to perform I/O on.
	Note that this API does *not* also flush the offset map alias
	of the area.

  void invalidate_kernel_vmap_range(void *vaddr, int size)

	invalidates the cache for a given virtual address range in
	the vmap area which prevents the processor from making the
	cache stale by speculatively reading data while the I/O was
	occurring to the physical pages.  This is only necessary for
	data reads into the vmap area.
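
Putting the two calls together, a sketch of the rule stated above;
do_io() is a hypothetical placeholder for the actual transfer against
the physical pages:

	void io_through_vmap_alias(void *vaddr, int size, int is_read)
	{
		/* Push any dirty lines in the alias out to the
		 * physical pages before the device looks at them.
		 */
		flush_kernel_vmap_range(vaddr, size);

		do_io(vaddr, size, is_read);		/* hypothetical */

		/* For reads, drop lines the cpu may have
		 * speculatively pulled in through the alias while
		 * the I/O was in flight.
		 */
		if (is_read)
			invalidate_kernel_vmap_range(vaddr, size);
	}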