{{Use dmy dates|date=August 2012}}
{{about|algorithms specific to paging|outline of general cache algorithms (e.g. processor, disk, database, web)|Cache algorithms}}
In a [[computer]] [[operating system]] that uses [[paging]] for [[virtual memory]] [[memory management|management]], '''page replacement algorithms''' decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a [[page fault]] occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.


When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the ''quality'' of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.

The page replacement problem is a typical [[online problem]] from the competitive analysis perspective, in the sense that the optimal deterministic algorithm is known.

== History ==
Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s.
That mostly ended with the development of sophisticated LRU approximations and working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms:


* Size of primary storage has increased by multiple orders of magnitude. With several gigabytes of primary memory, algorithms that require a periodic check of each and every memory frame are becoming less and less practical.
* Memory hierarchies have grown taller. The cost of a CPU cache miss is far more expensive. This exacerbates the previous problem.
* [[Memory locality|Locality of reference]] of user software has weakened. This is mostly attributed to the spread of [[object-oriented programming]] techniques that favor large numbers of small functions, use of sophisticated data structures like [[Tree data structure|tree]]s and [[hash table]]s that tend to result in chaotic memory reference patterns, and the advent of [[garbage collection (computer science)|garbage collection]] that drastically changed memory access behavior of applications.


Requirements for page replacement algorithms have changed due to differences in operating system [[Kernel (computer science)|kernel]] architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by [[journaling file system|journaling]]. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels ([[Linux]], [[FreeBSD]], and [[Solaris (operating system)|Solaris]]) tends to work at the level of a general purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem.


== Local vs. global replacement ==
Replacement algorithms can be ''local'' or ''global''.


When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a [[Memory management (operating systems)#Partitioned allocation|memory partition]]).
A global replacement algorithm is free to select any page in memory.


Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. Most popular forms of partitioning are ''fixed partitioning'' and ''balanced set'' algorithms based on the [[working set]] model. The advantage of local page replacement is its scalability: each process can handle its page faults independently without contending for some shared global data structure.


== Precleaning ==
Most replacement algorithms simply return the target page as their result. This means that if the target page is ''dirty'' (that is, contains data that have to be written to the stable storage before the page can be reclaimed), I/O has to be initiated to send that page to the stable storage (to ''clean'' the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with [[full duplex]] channels to the stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue.


To deal with this situation, various ''precleaning'' policies are implemented. Precleaning is the mechanism that starts I/O on dirty pages that are likely to be replaced soon. The idea is that by the time the precleaned page is actually selected for replacement, the I/O will have completed and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced ''next''. Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.


==Anticipatory paging==
{{seealso|Paging}}
Some systems use [[demand paging]], waiting until a page is actually requested before loading it into RAM.


Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be needed soon, and pre-loading such pages into RAM before they are requested. (This is often combined with pre-cleaning, which guesses which pages currently in RAM are not likely to be needed soon, and pre-writes them out to storage.)


When a page fault occurs, "anticipatory paging" systems will not only bring in the referenced page, but also the next few consecutive pages (analogous to a [[prefetch input queue]] in a CPU).
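
A minimal sketch of this idea (the handler name, the prefetch window, and the load_from_disk callable below are illustrative, not taken from any particular system):

<syntaxhighlight lang="python">
PREFETCH_WINDOW = 4  # illustrative value: how many consecutive pages to bring in

def handle_fault(faulting_page, resident, load_from_disk):
    """Hypothetical fault handler: load the faulting page plus the next few
    consecutive pages that are not yet resident (anticipatory paging)."""
    for page in range(faulting_page, faulting_page + PREFETCH_WINDOW):
        if page not in resident:
            load_from_disk(page)  # consecutive pages often share one cheap disk access
            resident.add(page)
</syntaxhighlight>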
 
The [[Paging#Swap_prefetch|swap prefetch]] mechanism goes even further in loading pages (even if they are not consecutive) that are likely to be needed soon.
 
==The (h,k)-Paging problem==
The (h,k)-paging problem is a generalization of the paging problem: let <math>h,k</math> be positive integers such that <math>h \leq k</math>. We measure the performance of an algorithm with a cache of size <math>k</math> relative to the [[theoretically optimal page replacement algorithm]] with a cache of size <math>h</math>. If <math>h<k</math>, we provide the optimal page replacement algorithm with strictly fewer resources.
 
The (h,k)-paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm, specifically by parameterizing the cache sizes of the online algorithm and of the optimal algorithm separately.
 
==Marking algorithms==
Marking algorithms form a general class of paging algorithms. Each page is associated with a bit called its mark. Initially, all pages are unmarked. During a stage of page requests, a page is marked when it is first requested in that stage. A marking algorithm never pages out a marked page.
 
If ALG is a marking algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size <math>h \leq k</math>, then ALG is <math> \dfrac{k}{k-h+1}</math>-competitive. So every marking algorithm attains the <math> \dfrac{k}{k-h+1}</math> competitive ratio.
 
LRU and CLOCK  are marking algorithms while FIFO is not a marking algorithm.
 
==Conservative algorithms==
An algorithm is conservative if, on any consecutive request sequence containing k or fewer distinct page references, the algorithm incurs k or fewer page faults.
 
If ALG is a conservative algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size <math>h \leq k</math>, then ALG is <math> \dfrac{k}{k-h+1}</math>-competitive. So every conservative algorithm attains the <math> \dfrac{k}{k-h+1}</math> competitive ratio.
 
LRU, FIFO and CLOCK are conservative algorithms.
 
 
==Page replacement algorithms==
There are a variety of page replacement algorithms
<ref>
[http://www.cs.uiowa.edu/~jones/opsys/fall95/notes/0908.html "Lecture Notes"] by Douglas W. Jones 1995
</ref>:
 
===The theoretically optimal page replacement algorithm===
The theoretically optimal page replacement algorithm (also known as OPT, [[Clairvoyance|clairvoyant]] replacement algorithm, or [[László Bélády|Bélády's]] optimal page replacement policy)<ref>[http://www.read.cs.ucla.edu/111/2006fall/notes/lec11 2006fall:notes:lec11 [CS111&#93;<!-- Bot generated title -->]</ref><ref>[http://cat.inist.fr/?aModele=afficheN&cpsidt=15530156 Characterization of Web reference behavior revisited: Evidence for Dichotomized Cache management<!-- Bot generated title -->]</ref><ref>[http://www.cs.uiowa.edu/~jones/opsys/fall95/notes/0908.html 22C:116, Notes, 8 September 1995<!-- Bot generated title -->]</ref> is an algorithm that works as follows: when a page needs to be swapped in, the [[operating system]] swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.
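
A minimal simulation sketch of this policy, assuming the complete future reference string is given up front (exactly the knowledge a real operating system lacks):

<syntaxhighlight lang="python">
def opt_faults(references, frames):
    """Count page faults under Bélády's optimal (clairvoyant) replacement.

    references: the complete future sequence of page numbers (assumed known);
    frames: number of page frames available.
    """
    cache, faults = set(), 0
    for i, page in enumerate(references):
        if page in cache:
            continue
        faults += 1
        if len(cache) >= frames:
            future = references[i + 1:]
            # Evict the resident page whose next use lies farthest in the
            # future; pages never used again are the preferred victims.
            victim = max(cache, key=lambda p: future.index(p) if p in future else len(future) + 1)
            cache.remove(victim)
        cache.add(page)
    return faults

# Example: opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], frames=3) == 7
</syntaxhighlight>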
 
This algorithm cannot be implemented in a general-purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is known beforehand and is amenable either to static analysis of its memory reference patterns or to run-time analysis. Despite this limitation, algorithms exist{{Citation needed|date=June 2008}} that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program, and uses those data to decide which pages to swap in and out on subsequent runs. Such an algorithm can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
 
Analysis of the paging problem has also been done in the field of [[online algorithm]]s. Efficiency of randomized online algorithms for the paging problem is measured using [[amortized analysis]].
 
===Not recently used===
The not recently used (NRU) page replacement algorithm favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well.
 
At a certain fixed time interval, the clock interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current clock interval are marked with a referenced bit. When a page needs to be replaced, the [[operating system]] divides the pages into four classes:
 
* Class 0: not referenced, not modified
* Class 1: not referenced, modified
* Class 2: referenced, not modified
* Class 3: referenced, modified
 
Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the clock interrupt. The NRU algorithm picks a random page from the lowest non-empty class for removal. Note that this algorithm implies that a modified but not referenced (within the last clock interval) page is less important than a not modified page that is intensely referenced.
 
NRU is a marking algorithm, so it is <math> \dfrac{k}{k-h+1}</math>-competitive.
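
A minimal sketch of the class-based selection described above, assuming hypothetical page objects that expose their referenced and modified bits:

<syntaxhighlight lang="python">
import random

def nru_select_victim(pages):
    """Pick an NRU victim from hypothetical page objects exposing the two bits.

    Each page is placed in class 2*referenced + modified (0 to 3) and a random
    page from the lowest non-empty class is returned.
    """
    classes = {0: [], 1: [], 2: [], 3: []}
    for page in pages:
        classes[2 * int(page.referenced) + int(page.modified)].append(page)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])
    return None  # nothing resident, nothing to evict
</syntaxhighlight>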
 
===First-in, first-out===
The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little book-keeping on the part of the [[operating system]]. The idea is obvious from the name: the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back and the earliest arrival at the front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences [[Bélády's anomaly]].
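
A minimal sketch of FIFO replacement as described above; the reference string in the comments reproduces Bélády's anomaly:

<syntaxhighlight lang="python">
from collections import deque

def fifo_faults(references, frames):
    """Count page faults under FIFO replacement: always evict the oldest page."""
    queue, resident, faults = deque(), set(), 0
    for page in references:
        if page in resident:
            continue                  # a hit does not reorder anything under FIFO
        faults += 1
        if len(resident) >= frames:
            victim = queue.popleft()  # oldest arrival sits at the front
            resident.remove(victim)
        queue.append(page)
        resident.add(page)
    return faults

# Bélády's anomaly: adding a frame can increase the number of faults.
# fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3) == 9
# fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4) == 10
</syntaxhighlight>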
 
The FIFO page replacement algorithm is used by the [[VAX/VMS]] operating system, with some modifications.<ref name="Silber">Abraham Silberschatz, Peter Baer Galvin, Greg Gagne. ''Operating Systems Concepts (Seventh Edition).'': Wiley 2005. p. 339.</ref>
Partial second chance is provided by skipping a limited number of entries with valid translation table references,<ref>[http://mx.isti.cnr.it/cgi-bin/conan?key=Sys_Parameters~TBSKIPWSL&title=VMS%20Help&referer= VMS Help<!-- Bot generated title -->]</ref> and additionally, pages are displaced from process working set to a systemwide pool from which they can be recovered if not already re-used.
 
FIFO is a conservative algorithm, so it is <math> \dfrac{k}{k-h+1}</math>-competitive.
 
===Second-chance===
A modified form of the FIFO page replacement algorithm, known as the Second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, then on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their referenced bit set, the second-chance algorithm degenerates into pure FIFO.
 
As its name suggests, Second-chance gives every page a "second chance": an old page that has been referenced is probably in use, and should not be swapped out in preference to a new page that has not been referenced.
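
A minimal sketch of this selection loop, assuming the queue holds (page, referenced bit) pairs with the oldest entry at the front:

<syntaxhighlight lang="python">
from collections import deque

def second_chance_select(queue):
    """Select a victim from a deque of (page, referenced_bit) pairs, oldest first.

    A page whose referenced bit is set gets a second chance: the bit is cleared
    and the page is re-queued at the back as if it had just arrived.
    """
    while True:
        page, referenced = queue.popleft()
        if referenced:
            queue.append((page, 0))   # second chance
        else:
            return page               # old and not recently referenced: evict it
</syntaxhighlight>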
 
===Clock===
Clock is a more efficient version of FIFO than Second-chance because pages don't have to be constantly pushed to the back of the list, but it performs the same general function as Second-Chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, otherwise the R bit is cleared. Then, the clock hand is incremented and the process is repeated until a page is replaced.<ref name="Tanenbaum">Andrew S. Tanenbaum. ''Modern Operating Systems (Second Edition)''. pp. 218 (4.4.5). 2001.</ref>
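
A minimal sketch of a single replacement step as described above; the parameter names are illustrative rather than a real kernel interface:

<syntaxhighlight lang="python">
def clock_replace(frames, ref_bits, hand, new_page):
    """One replacement step of the clock algorithm; returns the new hand position.

    frames is the circular list of resident pages, ref_bits the parallel list of
    referenced (R) bits, and hand the index the clock hand points to.
    """
    while True:
        if ref_bits[hand] == 0:
            frames[hand] = new_page           # replace the page under the hand
            ref_bits[hand] = 1                # a common convention for a fresh page
            return (hand + 1) % len(frames)
        ref_bits[hand] = 0                    # clear R: the page gets another lap
        hand = (hand + 1) % len(frames)
</syntaxhighlight>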
 
==== Variants on Clock ====
* GCLOCK: Generalized clock page replacement algorithm.<ref>[http://portal.acm.org/citation.cfm?id=320276 Sequentiality and prefetching in database systems]</ref>
* Clock-Pro keeps a circular list of information about recently-referenced pages, including all M pages in memory as well as the most recent M pages that have been paged out. This extra information on paged-out pages, like the similar information maintained by [[adaptive replacement cache|ARC]], helps it work better than LRU on large loops and one-time scans.<ref>[http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-05-3.pdf "CLOCK-Pro: An Effective Improvement of the CLOCK Replacement" by Song Jiang, Feng Chen, and Xiaodong Zhang, 2005]</ref>
* WSclock.<ref>
"WSCLOCK—a simple and effective algorithm for virtual memory management"
by Richard W. Carr and John L. Hennessy, 1981
[http://portal.acm.org/citation.cfm?id=806596]
[http://infolab.stanford.edu/~manku/quals/zpapers/81-wsclock.pdf.gz]
</ref> The "aging" algorithm and the "WSClock" algorithm are probably the most important page replacement algorithms in practice.<ref>
[http://www.cs.nyu.edu/courses/spring09/V22.0202-002/wsclock-davis.html "WSClock"]
by Allan Gottlieb
</ref><ref>
[http://www.informit.com/articles/article.aspx?p=25260&seqNum=11 "Page Replacement Algorithms"]
by Andrew S. Tanenbaum 2002
</ref>
* Clock with Adaptive Replacement (CAR) is a page replacement algorithm that has performance comparable to [[Adaptive Replacement Cache|ARC]], and substantially outperforms both LRU and CLOCK<ref>{{cite conference
| author =  Sorav Bansal and Dharmendra S. Modha
| title = CAR: Clock with Adaptive Replacement
| booktitle = In Proceedings of the USENIX Conference on File and Storage Technologies (FAST)
| pages = 187–200
| year = 2004
| url = http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.105.6057
| accessdate =19 February 2012
}}</ref>. The algorithm CAR is self-tuning and requires no user-specified magic parameters.
 
CLOCK is a conservative algorithm, so it is <math> \dfrac{k}{k-h+1}</math>-competitive.
===Least recently used===
The least recently used page (LRU) replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as [[Adaptive Replacement Cache]]), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.
 
The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list have to be moved on every memory reference, which is a very time-consuming process.
 
Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it acquires a value equal to the counter at the time of the access. Whenever a page needs to be replaced, the [[operating system]] selects the page with the lowest counter and swaps it out. With present hardware, this is not feasible because the OS needs to examine the counter for every page in memory.
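
A minimal sketch of the linked-list approach, using an ordered map to stand in for the list (most recently used entries at one end, least recently used at the other):

<syntaxhighlight lang="python">
from collections import OrderedDict

def lru_faults(references, frames):
    """Count page faults under LRU, using an ordered map as the 'linked list'."""
    cache, faults = OrderedDict(), 0
    for page in references:
        if page in cache:
            cache.move_to_end(page)        # every reference moves the page to the MRU end
        else:
            faults += 1
            if len(cache) >= frames:
                cache.popitem(last=False)  # drop the entry at the LRU end
            cache[page] = True
    return faults

# A loop over N + 1 pages with N frames faults on every access (see below):
# lru_faults([1, 2, 3, 4] * 3, frames=3) == 12
</syntaxhighlight>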
 
Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.
 
One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.
 
On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over an array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch to a suitable replacement algorithm, like Most Recently Used (MRU).
 
====Variants on LRU====
# [http://www.ics.forth.gr/dcs/Activities/papers/2000.WCW.caching_search.pdf LRU-K] improves greatly on LRU with regard to locality in time. It's also known as LRU-2, for the case that K=2. LRU-1 (i.e. K=1) is the same as normal LRU.
# The [[Adaptive Replacement Cache|ARC]]<ref>Megiddo & Modha, ''[http://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf ARC: A Self-tuning, low overhead replacement cache]''</ref> algorithm extends LRU by maintaining a history of recently evicted pages and uses this to change preference to recent or frequent access. It is particularly resistant to sequential scans.
 
A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha.<ref>Nimrod Megiddo & Dharmendra S. Modha, ''{{PDFlink|[http://www.almaden.ibm.com/cs/people/dmodha/ARC.pdf Outperforming LRU with an Adaptive Replacement Cache Algorithm]|123&nbsp;KB}}'', IEEE Computer Magazine, pp. 58–65, April 2004.</ref>
 
LRU is a marking algorithm, so it is <math> \dfrac{k}{k-h+1}</math>-competitive.
 
===Random===
The random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. [[OS/390]] uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the [[Intel i860]] processor used a random replacement policy (Rhodehamel 1989).
 
===Not frequently used===
 
The not frequently used (NFU) page replacement algorithm requires a counter: every page has one counter of its own, initially set to 0. At each clock interval, all pages that have been referenced within that interval have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary.
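
A minimal sketch of this counting scheme, assuming a per-page counter table and the set of pages referenced during the current interval:

<syntaxhighlight lang="python">
def nfu_tick(counters, referenced):
    """One clock interval: increment the counter of every page referenced in it.

    counters maps page -> use count; referenced is the set of pages whose
    referenced bit was found set during this interval.
    """
    for page in referenced:
        counters[page] = counters.get(page, 0) + 1

def nfu_victim(counters):
    """The page with the lowest use count is the one to swap out."""
    return min(counters, key=counters.get)
</syntaxhighlight>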
 
The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages that were heavily used during the first pass but are not needed in the second pass will be favoured over pages that are comparatively lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU performs similarly, such as an OS boot-up. Thankfully, a similar and better algorithm exists, and its description follows.
 
The not frequently used page-replacement algorithm generates fewer page faults than the least recently used page replacement algorithm when the page table contains null pointer values.
 
===Aging===
The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2) before the referenced bit is added to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority than pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen.
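
A minimal sketch of the aging update described above; the counter width and the page name in the commented example are illustrative:

<syntaxhighlight lang="python">
BITS = 8  # counter width chosen for illustration; real systems might use 16 or 32

def aging_tick(counters, referenced):
    """One clock interval of aging: shift every counter right by one bit and put
    this interval's referenced bit into the most significant position."""
    for page in counters:
        counters[page] >>= 1
        if page in referenced:
            counters[page] |= 1 << (BITS - 1)

# Reproducing the example above for one page with referenced bits 1,0,0,1,1,0:
# counters = {'p': 0}
# for bit in (1, 0, 0, 1, 1, 0):
#     aging_tick(counters, {'p'} if bit else set())
# counters['p'] takes the values 10000000, 01000000, 00100000, 10010000, 11001000, 01100100.
# As with NFU, the page with the lowest counter is chosen for replacement.
</syntaxhighlight>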
 
Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.
 
===Techniques for hardware with no reference bit===
Many of the techniques discussed above assume the presence of a reference bit associated with each page.  Some hardware has no such bit, so its efficient use requires techniques that operate well without one.
 
One notable example is [[VAX]] hardware running [[OpenVMS]]. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching.  Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time.  Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate.  A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List.  A page that requires writing to backing store will be placed on the Modified Page List.  These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.
 
Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory.  A page referenced this way will be removed from the Free or Modified list and placed back into a process working set.  The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency.  These pages can then be placed on the Free Page List.  The sequence of pages that works its way to the head of the Free Page List resembles the results of a LRU or NRU mechanism and the overall effect has similarities to the Second-Chance algorithm described earlier.
 
Another example is used by the [[Linux kernel]] on [[ARM architecture|ARM]]. The lack of hardware functionality is made up for by providing two page tables - the processor-native page tables, with neither referenced bits nor [[dirty bit]]s, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. In order to get the page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table.
 
== Working set ==
{{main|Working set}}
The working set of a process is the set of pages expected to be used by that process during some time interval.
 
The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of [[scheduling (computing)#Medium-term_scheduling|medium-term scheduler]]){{Clarify|date=August 2011}}
 
== References ==
{{reflist|30em}}
 
===See also===
* K. Y. Wong, "[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1580916 Web Cache Replacement Policies: A Pragmatic Approach]" IEEE Network, vol. 20, iss. 2, Jan-Feb. 2006, pp.&nbsp;28–34.
* Aho, Denning and Ullman, ''[http://portal.acm.org/citation.cfm?id=321632&coll=portal&dl=ACM Principles of Optimal Page Replacement]'', Journal of the ACM, Vol. 18, Issue 1, January 1971, pp.&nbsp;80–93
* Rhodehamel, Michael W. "[http://ieeexplore.ieee.org/xpl/abs_free.jsp?arNumber=63392 The Bus Interface and Paging Units of the i860(tm) Microprocessor]". In Proc. IEEE International Conference on Computer Design, p.&nbsp;380-384, 1989.
* Tanenbaum, Andrew S. ''Operating Systems: Design and Implementation (Second Edition)''. New Jersey: Prentice-Hall 1997.
* Tanenbaum, Andrew S. ''Modern Operating Systems (Second Edition)''. New Jersey: Prentice-Hall 2001. Online excerpt on page replacement algorithms: [http://www.informit.com/articles/article.aspx?p=25260 Page Replacement Algorithms].
* Johnson and Shasha, ''{{PDFlink|[http://www.vldb.org/conf/1994/P439.PDF 2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm]|1.01&nbsp;MB}}'' [http://www.vldb.org/dblp/db/conf/vldb/vldb94-439.html abstract]
* Gideon Glass and Pei Cao [http://portal.acm.org/citation.cfm?id=258681&jmp=abstract&coll=portal&dl=ACM&CFID=12125227&CFTOKEN=21656990#abstract  Adaptive-Page-Replacement-Based-on-Memory-Reference-Behavior].  Also available in extended form as [http://www.cs.wisc.edu/techreports/viewreport.php?report=1338 Technical Report 1338] at www.cs.wisc.edu
* Jongmin Kim and others, ''{{PDFlink|[http://ssrnet.snu.ac.kr/~choijm/paper/IC-2000-OSDI-UBM.pdf A Low-Overhead High-Performance Unified Buffer Management Scheme that Exploits Sequential and Looping References]|4.14&nbsp;MB}}'', [http://www.usenix.org/events/osdi2000/ Usenix Symposium on Operating System Design and Implementation (OSDI'2000)], San Diego, CA,  17–21 October 2000
* Sorav Bansal and Dharmendra S. Modha, ''{{PDFlink|[http://www.almaden.ibm.com/cs/people/dmodha/clockfast.pdf CAR: Clock with Adaptive Replacement]|212&nbsp;KB}}''
* Smaragdakis and others, ''{{PDFlink|[http://www.cs.amherst.edu/~sfkaplan/courses/spring-2004/cs40/papers/SKW:EELRUSEAPR.pdf EELRU: Simple and Effective Adaptive Page Replacement]|1.55&nbsp;MB}}''
* Song Jiang and Xiaodong Zhang, ''{{PDFlink|[http://www.cse.ohio-state.edu/hpcs/WWW/HTML/publications/papers/TR-02-6.pdf LIRS: a Low Inter Reference recency Set replacement]|283&nbsp;KB}}'', SIGMETRICS 2002
* D. Lee and others, ''[http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/euromicro/1997/8215/00/8215toc.xml&DOI=10.1109/EMSCNT.1997.658446  Implementation and Performance Evaluation of the LRFU Replacement Policy]'', p.&nbsp;0106,  23rd [http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/euromicro/&toc=comp/proceedings/euromicro/1997/8215/00/8215toc.xml Euromicro Conference: New Frontiers of Information Technology-Short Contributions], 1997
* Elizabeth J. O'Neil and others, ''{{PDFlink|[http://www.cs.cmu.edu/~christos/courses/721-resources/p297-o_neil.pdf The LRU-K page replacement algorithm for database disk buffering]|1.17&nbsp;MB}}'',  [http://portal.acm.org/citation.cfm?id=170036.170081 ACM SIGMOD Conf.], pp.&nbsp;297–306, 1993.
* Y. Zhou and J.F. Philbin, ''[http://www.usenix.org/events/usenix01/full_papers/zhou/ The Multi-Queue Replacement Algorithm for Second-Level Buffer Caches]'', Proc. Usenix Ann. Tech. Conf. (Usenix 2001), pp.&nbsp;91–104.
 
{{DEFAULTSORT:Page Replacement Algorithm}}
[[Category:Virtual memory]]
[[Category:Memory management algorithms]]
[[Category:Online algorithms]]
 
[[ar:تبديل الصفحات]]
[[es:Algoritmos de reemplazo de páginas]]
[[eu:Orriak ordezkatzeko algoritmoak]]
[[ko:페이지 교체 알고리즘]]
[[ja:ページ置換アルゴリズム]]
[[mk:Алгоритми за преместување на мемориски страници]]
[[pt:Algoritmo de troca de página]]
[[simple:Page replacement algorithm]]
[[tr:Sayfa yer değiştirme algoritması]]
[[uk:Задача заміщення сторінок]]
