==================================
Cache and TLB Flushing Under Linux
==================================

:Author: David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and states what side effect is
expected after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, i.e. what is to happen on that single processor.  The
SMP cases are a simple extension: the side effect for a particular
interface simply occurs on all processors in the system.  Don't let
this scare you into thinking SMP cache/tlb flushing must be
inefficient; this is in fact an area where many optimizations are
possible.  For example, if it can be proven that a user address space
has never executed on a cpu (see mm_cpumask()), one need not perform
a flush for this address space on that cpu.

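As a hedged illustration of that optimization, here is a minimal
sketch of how an SMP port might confine a flush to the CPUs recorded
in mm_cpumask().  The helper local_flush_tlb_mm() stands in for
whatever per-CPU flush primitive the architecture provides; it is not
generic kernel API (flush_tlb_mm itself is described below)::

        #include <linux/mm_types.h>
        #include <linux/smp.h>

        static void ipi_flush_tlb_mm(void *info)
        {
                struct mm_struct *mm = info;

                local_flush_tlb_mm(mm);         /* arch-local primitive */
        }

        void flush_tlb_mm(struct mm_struct *mm)
        {
                /* CPUs absent from mm_cpumask(mm) never ran 'mm',
                 * so they cannot hold stale translations for it and
                 * are skipped entirely.
                 */
                on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
        }
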
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  Meaning that if the software page tables change, it is
possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) ``void flush_tlb_all(void)``

        The most severe flush of all.  After this interface runs,
        any previous page table modification whatsoever will be
        visible to the cpu.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.

2) ``void flush_tlb_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the TLB.  After running, this interface must make sure that
        any previous page table modifications for the address space
        'mm' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork, and exec.

3) ``void flush_tlb_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

        Here we are flushing a specific range of (user) virtual
        address translations from the TLB.  After running, this
        interface must make sure that any previous page table
        modifications for the address space 'vma->vm_mm' in the range
        'start' to 'end-1' will be visible to the cpu.  That is, after
        running, there will be no entries in the TLB for 'mm' for
        virtual addresses in the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized translations from the TLB, instead of having the kernel
        call flush_tlb_page (see below) for each entry which may be
        modified.

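        For a port with no better facility, a naive fallback (a
        sketch only, not a required implementation) could simply loop
        over the range one page at a time::

                void flush_tlb_range(struct vm_area_struct *vma,
                                     unsigned long start, unsigned long end)
                {
                        unsigned long addr;

                        /* Correct, but misses the batching
                         * opportunity described above.
                         */
                        for (addr = start; addr < end; addr += PAGE_SIZE)
                                flush_tlb_page(vma, addr);
                }
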
4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``

        This time we need to remove the PAGE_SIZE sized translation
        from the TLB.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process, the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction TLB' in
        split-tlb type setups).

        After running, this interface must make sure that any previous
        page table modification for address space 'vma->vm_mm' for
        user virtual address 'addr' will be visible to the cpu.  That
        is, after running, there will be no entries in the TLB for
        'vma->vm_mm' for virtual address 'addr'.

        This is used primarily during fault processing.

5) ``void update_mmu_cache(struct vm_area_struct *vma,
   unsigned long address, pte_t *ptep)``

        At the end of every page fault, this routine is invoked to
        tell the architecture specific code that a translation
        now exists at virtual address "address" for address space
        "vma->vm_mm", in the software page tables.

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
        translations for software managed TLB configurations.
        The sparc64 port currently does this.

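        As a hedged sketch of such a pre-load, where tlb_preload() is
        a made-up stand-in for the port's own TLB-insertion
        primitive::

                void update_mmu_cache(struct vm_area_struct *vma,
                                      unsigned long address, pte_t *ptep)
                {
                        pte_t pte = *ptep;

                        /* Only pre-load valid, present translations. */
                        if (pte_present(pte))
                                tlb_preload(vma->vm_mm, address, pte);
                }
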
6) ``void tlb_migrate_finish(struct mm_struct *mm)``

        This interface is called at the end of an explicit
        process migration. This interface provides a hook
        to allow a platform to update TLB or context-specific
        information for the address space.

        The ia64 sn2 platform is one example of a platform
        that uses this interface.

Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms::

        1) flush_cache_mm(mm);
           change_all_page_tables_of(mm);
           flush_tlb_mm(mm);

        2) flush_cache_range(vma, start, end);
           change_range_of_page_tables(mm, start, end);
           flush_tlb_range(vma, start, end);

        3) flush_cache_page(vma, addr, pfn);
           set_pte(pte_pointer, new_pte_val);
           flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

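For such fully coherent configurations, every hook can collapse to
nothing.  A sketch of the sort of no-op definitions a PIPT port
might carry in its asm/cacheflush.h::

        /* No cache maintenance is needed when mappings change on a
         * physically indexed, physically tagged cache.
         */
        #define flush_cache_mm(mm)                      do { } while (0)
        #define flush_cache_dup_mm(mm)                  do { } while (0)
        #define flush_cache_range(vma, start, end)      do { } while (0)
        #define flush_cache_page(vma, vmaddr, pfn)      do { } while (0)
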
Here are the routines, one by one:

1) ``void flush_cache_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during exit and exec.

2) ``void flush_cache_dup_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during fork.

        This option is separate from flush_cache_mm to allow some
        optimizations for VIPT caches.

3) ``void flush_cache_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

        Here we are flushing a specific range of (user) virtual
        addresses from the cache.  After running, there will be no
        entries in the cache for 'vma->vm_mm' for virtual addresses in
        the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized regions from the cache, instead of having the kernel
        call flush_cache_page (see below) for each entry which may be
        modified.

4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``

        This time we need to remove a PAGE_SIZE sized range
        from the cache.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process, the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction cache' in
        "Harvard" type cache layouts).

        The 'pfn' indicates the physical page frame (shift this value
        left by PAGE_SHIFT to get the physical address) that 'addr'
        translates to.  It is this mapping which should be removed from
        the cache.

        After running, there will be no entries in the cache for
        'vma->vm_mm' for virtual address 'addr' which translates
        to 'pfn'.

        This is used primarily during fault processing.

5) ``void flush_cache_kmaps(void)``

        This routine need only be implemented if the platform utilizes
        highmem.  It will be called right before all of the kmaps
        are invalidated.

        After running, there will be no entries in the cache for
        the kernel virtual address range PKMAP_ADDR(0) to
        PKMAP_ADDR(LAST_PKMAP).

        This routine should be implemented in asm/highmem.h.

6) ``void flush_cache_vmap(unsigned long start, unsigned long end)``
   ``void flush_cache_vunmap(unsigned long start, unsigned long end)``

        Here in these two interfaces we are flushing a specific range
        of (kernel) virtual addresses from the cache.  After running,
        there will be no entries in the cache for the kernel address
        space for virtual addresses in the range 'start' to 'end-1'.

        The first of these two routines is invoked after map_vm_area()
        has installed the page table entries.  The second is invoked
        before unmap_kernel_range() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

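As an illustration with made-up constants: the number of distinct
cache "colors" is the size of one way of the virtually indexed
D-cache divided by PAGE_SIZE, and aliasing is possible whenever
there is more than one color::

        /* Hypothetical 32KB, 2-way, virtually indexed D-cache with
         * 4KB pages.
         */
        #define DCACHE_SIZE     (32 * 1024)
        #define DCACHE_WAYS     2
        #define DCACHE_WAY_SIZE (DCACHE_SIZE / DCACHE_WAYS)

        /* Two virtual mappings of the same physical page can land in
         * different cache lines whenever there is more than one
         * color.  Here: 16KB / 4KB = 4 colors, so aliasing exists.
         */
        #define DCACHE_COLORS   (DCACHE_WAY_SIZE / PAGE_SIZE)
        #define DCACHE_ALIASING (DCACHE_COLORS > 1)
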
If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

.. note::

  This does not fix shared mmaps, check out the sparc64 port for
  one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

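A minimal asm/shmparam.h for the hypothetical 4-color cache above
might therefore read (a sketch, not any particular port's header)::

        #ifndef __ASM_SHMPARAM_H
        #define __ASM_SHMPARAM_H

        /* Attach addresses for SYSv shared memory must be multiples
         * of SHMLBA.  Using the D-cache way size guarantees that all
         * user mappings of a segment share the same cache color.
         */
        #define SHMLBA  (4 * PAGE_SIZE)

        #endif /* __ASM_SHMPARAM_H */
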
Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  ``void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)``
  ``void clear_user_page(void *to, unsigned long addr, struct page *page)``

        These two routines store data in user anonymous or COW
        pages.  They allow a port to efficiently avoid D-cache alias
        issues between userspace and the kernel.

        For example, a port may temporarily map 'from' and 'to' to
        kernel virtual addresses during the copy.  The virtual address
        for these two pages is chosen in such a way that the kernel
        load/store instructions happen to virtual addresses which are
        of the same "color" as the user mapping of the page.  Sparc64
        for example, uses this technique.

        The 'addr' parameter tells the virtual address where the
        user will ultimately have this page mapped, and the 'page'
        parameter gives a pointer to the struct page of the target.

        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.

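        On such an alias-free configuration the trivial versions
        (shown here as a sketch) suffice::

                #include <linux/string.h>

                void copy_user_page(void *to, void *from,
                                    unsigned long addr, struct page *page)
                {
                        memcpy(to, from, PAGE_SIZE);    /* plain copy */
                }

                void clear_user_page(void *to, unsigned long addr,
                                     struct page *page)
                {
                        memset(to, 0, PAGE_SIZE);       /* plain clear */
                }
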
  ``void flush_dcache_page(struct page *page)``

        Any time the kernel writes to a page cache page, _OR_
        the kernel is about to read from a page cache page and
        user space shared/writable mappings of this page potentially
        exist, this routine is called.

        .. note::

              This routine need only be called for page cache pages
              which can potentially ever be mapped into the address
              space of a user process.  So for example, VFS layer code
              handling vfs symlinks in the page cache need not call
              this interface at all.

        The phrase "kernel writes to a page cache page" means,
        specifically, that the kernel executes store instructions
        that dirty data in that page at the page->virtual mapping
        of that page.  It is important to flush here to handle
        D-cache aliasing, to make sure these kernel stores are
        visible to user space mappings of that page.

        The corollary case is just as important: if there are users
        which have shared+writable mappings of this file, we must make
        sure that kernel reads of these pages will see the most recent
        stores done by the user.

        If D-cache aliasing is not an issue, this routine may
        simply be defined as a nop on that architecture.

        There is a bit set aside in page->flags (PG_arch_1) as
        "architecture private".  The kernel guarantees that,
        for pagecache pages, it will clear this bit when such
        a page first enters the pagecache.

        This allows these interfaces to be implemented much more
        efficiently.  It allows one to "defer" (perhaps indefinitely)
        the actual flush if there are currently no user processes
        mapping this page.  See sparc64's flush_dcache_page and
        update_mmu_cache implementations for an example of how to go
        about doing this.

        The idea is, first at flush_dcache_page() time, if
        page->mapping->i_mmap is an empty tree, just mark the architecture
        private page flag bit.  Later, in update_mmu_cache(), a check is
        made of this flag bit, and if set the flush is done and the flag
        bit is cleared.  A sketch of this scheme appears after the
        note below.

        .. important::

                        It is often important, if you defer the flush,
                        that the actual flush occurs on the same CPU
                        as did the cpu stores into the page to make it
                        dirty.  Again, see sparc64 for examples of how
                        to deal with this.

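        A sketch of the deferral scheme itself, where mapping_mapped()
        tests whether any user mappings exist in i_mmap, and
        __flush_dcache_page() stands in for the port's real flush::

                #include <linux/mm.h>
                #include <linux/pagemap.h>

                void flush_dcache_page(struct page *page)
                {
                        struct address_space *mapping = page_mapping(page);

                        /* No user mappings yet: mark the page and
                         * defer the actual flush (possibly forever).
                         */
                        if (mapping && !mapping_mapped(mapping)) {
                                set_bit(PG_arch_1, &page->flags);
                                return;
                        }

                        __flush_dcache_page(page);
                }

                void update_mmu_cache(struct vm_area_struct *vma,
                                      unsigned long address, pte_t *ptep)
                {
                        struct page *page;

                        if (!pte_present(*ptep))
                                return;

                        page = pte_page(*ptep);

                        /* A deferred flush is pending: perform it now
                         * that a user mapping is being established.
                         */
                        if (test_and_clear_bit(PG_arch_1, &page->flags))
                                __flush_dcache_page(page);
                }
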
  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
  unsigned long user_vaddr, void *dst, void *src, int len)``
  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
  unsigned long user_vaddr, void *dst, void *src, int len)``

        When the kernel needs to copy arbitrary data in and out
        of arbitrary user pages (f.e. for ptrace()) it will use
        these two routines.

        Any necessary cache flushing or other coherency operations
        that need to occur should happen here.  If the processor's
        instruction cache does not snoop cpu stores, it is very
        likely that you will need to flush the instruction cache
        for copy_to_user_page().

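        A sketch for such a non-snooping port, with
        flush_icache_user_range() standing in for the port's own
        I-cache flush primitive::

                #include <linux/string.h>

                void copy_to_user_page(struct vm_area_struct *vma,
                                       struct page *page,
                                       unsigned long user_vaddr,
                                       void *dst, void *src, int len)
                {
                        memcpy(dst, src, len);

                        /* The new bytes may be executed through the
                         * user mapping, so push them past the I-cache.
                         */
                        if (vma->vm_flags & VM_EXEC)
                                flush_icache_user_range(vma, page,
                                                        user_vaddr, len);
                }

                void copy_from_user_page(struct vm_area_struct *vma,
                                         struct page *page,
                                         unsigned long user_vaddr,
                                         void *dst, void *src, int len)
                {
                        /* A plain read needs no I-cache work. */
                        memcpy(dst, src, len);
                }
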
  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
  unsigned long vmaddr)``

        When the kernel needs to access the contents of an anonymous
        page, it calls this function (currently only
        get_user_pages()).  Note: flush_dcache_page() deliberately
        doesn't work for an anonymous page.  The default
        implementation is a nop (and should remain so for all coherent
        architectures).  For incoherent architectures, it should flush
        the cache of the page at vmaddr.

  ``void flush_kernel_dcache_page(struct page *page)``

        When the kernel needs to modify a user page it has obtained
        with kmap, it calls this function after all modifications are
        complete (but before kunmapping it) to bring the underlying
        page up to date.  It is assumed here that the user has no
        incoherent cached copies (i.e. the original page was obtained
        from a mechanism like get_user_pages()).  The default
        implementation is a nop and should remain so on all coherent
        architectures.  On incoherent architectures, this should flush
        the kernel cache for page (using page_address(page)).

  ``void flush_icache_range(unsigned long start, unsigned long end)``

        When the kernel stores into addresses that it will execute
        out of (eg when loading modules), this function is called.

        If the icache does not snoop stores then this routine will need
        to flush it.

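        A line-at-a-time sketch of such a flush, where
        flush_icache_line() is a made-up stand-in for the cpu's
        I-cache line flush instruction::

                #include <linux/cache.h>

                void flush_icache_range(unsigned long start, unsigned long end)
                {
                        unsigned long addr;

                        /* Round down to a line boundary, then write
                         * back the D-cache and invalidate the I-cache
                         * one line at a time.
                         */
                        for (addr = start & ~((unsigned long)L1_CACHE_BYTES - 1);
                             addr < end; addr += L1_CACHE_BYTES)
                                flush_icache_line(addr);
                }
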
  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

        All the functionality of flush_icache_page can be implemented in
        flush_dcache_page and update_mmu_cache. In the future, the hope
        is to remove this interface completely.

The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel.  Such aliases are set up by use of the
vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases.  This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency.  It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.

  ``void flush_kernel_vmap_range(void *vaddr, int size)``

       flushes the kernel cache for a given virtual address range in
       the vmap area.  This is to make sure that any data the kernel
       modified in the vmap range is made visible to the physical
       page.  The design is to make this area safe to perform I/O on.
       Note that this API does *not* also flush the offset map alias
       of the area.

  ``void invalidate_kernel_vmap_range(void *vaddr, int size)``

       invalidates the cache for a given virtual address range in
       the vmap area, which prevents the processor from making the
       cache stale by speculatively reading data while the I/O was
       occurring to the physical pages.  This is only necessary for
       data reads into the vmap area.
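
A sketch of the calling pattern described above; do_device_io() is a
placeholder for whatever I/O mechanism is actually in use::

        #include <linux/vmalloc.h>
        #include <linux/highmem.h>

        int do_vmap_io(struct page **pages, unsigned int nr_pages)
        {
                void *vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
                size_t size = (size_t)nr_pages * PAGE_SIZE;

                if (!vaddr)
                        return -ENOMEM;

                /* Push cpu-cached modifications out to the physical
                 * pages before the device looks at them.
                 */
                flush_kernel_vmap_range(vaddr, size);

                do_device_io(vaddr, size);      /* placeholder */

                /* After a device-to-memory transfer, discard anything
                 * the cpu speculatively cached through the vmap alias
                 * while the I/O was in flight.
                 */
                invalidate_kernel_vmap_range(vaddr, size);

                vunmap(vaddr);
                return 0;
        }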
