documentation: Fix some inconsistencies in RTFP.txt

Some of the early history discusses papers without citing them, and some
citations lack corresponding bibliography entries.  This commit fixes
these up.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Paul E. McKenney 2014-02-15 08:52:32 -08:00
parent 9b2b3bf531
commit e4696a1d3b


@@ -31,6 +31,14 @@ has lapsed, so this approach may be used in non-GPL software, if desired.
(In contrast, implementation of RCU is permitted only in software licensed
under either GPL or LGPL. Sorry!!!)
In 1987, Rashid et al. described lazy TLB-flush [RichardRashid87a].
At first glance, this has nothing to do with RCU, but nevertheless
this paper helped inspire the update-side batching used in the later
RCU implementation in DYNIX/ptx. In 1988, Barbara Liskov published
a description of Argus that noted that use of out-of-date values can
be tolerated in some situations. Thus, this paper provides some early
theoretical justification for use of stale data.
In 1990, Pugh [Pugh90] noted that explicitly tracking which threads
were reading a given data structure permitted deferred free to operate
in the presence of non-terminating threads. However, this explicit
@@ -41,11 +49,11 @@ providing a fine-grained locking design, however, it would be interesting
to see how much of the performance advantage reported in 1990 remains
today.
At about this same time, Andrews [Andrews91textbook] described ``chaotic
relaxation'', where the normal barriers between successive iterations
of convergent numerical algorithms are relaxed, so that iteration $n$
might use data from iteration $n-1$ or even $n-2$. This introduces
error, which typically slows convergence and thus increases the number of
iterations required. However, this increase is sometimes more than made
up for by a reduction in the number of expensive barrier operations,
which are otherwise required to synchronize the threads at the end
@@ -55,7 +63,8 @@ is thus inapplicable to most data structures in operating-system kernels.
In 1992, Henry (now Alexia) Massalin completed a dissertation advising
parallel programmers to defer processing when feasible to simplify
synchronization [HMassalinPhD]. RCU makes extremely heavy use of
this advice.
In 1993, Jacobson [Jacobson93] verbally described what is perhaps the
simplest deferred-free technique: simply waiting a fixed amount of time
@@ -90,27 +99,29 @@ mechanism, which is quite similar to RCU [Gamsa99]. These operating
systems made pervasive use of RCU in place of "existence locks", which
greatly simplifies locking hierarchies and helps avoid deadlocks.
The year 2000 saw an email exchange that would likely have
led to yet another independent invention of something like RCU
[RustyRussell2000a,RustyRussell2000b]. Instead, 2001 saw the first
RCU presentation involving Linux [McKenney01a] at OLS. The resulting
abundance of RCU patches was presented the following year [McKenney02a],
and use of RCU in dcache was first described that same year [Linder02a].
Also in 2002, Michael [Michael02b,Michael02a] presented "hazard-pointer"
techniques that defer the destruction of data structures to simplify
non-blocking synchronization (wait-free synchronization, lock-free
synchronization, and obstruction-free synchronization are all examples of
non-blocking synchronization). The corresponding journal article appeared
in 2004 [MagedMichael04a]. This technique eliminates locking, reduces
contention, reduces memory latency for readers, and parallelizes pipeline
stalls and memory latency for writers. However, these techniques still
impose significant read-side overhead in the form of memory barriers.
Researchers at Sun worked along similar lines in the same timeframe
[HerlihyLM02]. These techniques can be thought of as inside-out reference
counts, where the count is represented by the number of hazard pointers
referencing a given data structure rather than the more conventional
counter field within the data structure itself. The key advantage
of inside-out reference counts is that they can be stored in immortal
variables, thus allowing races between access and deletion to be avoided.
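For concreteness, a minimal hazard-pointer sketch in C11 follows. It is
illustrative only: the names hp_slot[], hp_protect(), hp_release(), and
hp_busy() are hypothetical rather than drawn from the papers cited above,
and production implementations add per-thread retirement lists and
amortized scans. The hp_slot[] array plays the role of the immortal
variables, and the sequentially consistent atomic operations in
hp_protect() account for the read-side memory-barrier overhead noted
above.

	#include <stdatomic.h>
	#include <stddef.h>

	#define MAX_READERS 64

	/* Immortal per-reader slots: never freed, so updaters may always scan them. */
	static _Atomic(void *) hp_slot[MAX_READERS];

	/* Reader: publish the pointer, then recheck that it is still reachable. */
	static void *hp_protect(int self, _Atomic(void *) *src)
	{
		void *p;

		do {
			p = atomic_load(src);
			atomic_store(&hp_slot[self], p);
			/* The seq_cst store/load pair supplies the read-side barrier. */
		} while (p != atomic_load(src));	/* Retry if concurrently removed. */
		return p;				/* Safe to use until hp_release(). */
	}

	static void hp_release(int self)
	{
		atomic_store(&hp_slot[self], NULL);
	}

	/* Updater: a removed element may be freed only when no slot points to it. */
	static int hp_busy(void *p)
	{
		int i;

		for (i = 0; i < MAX_READERS; i++)
			if (atomic_load(&hp_slot[i]) == p)
				return 1;
		return 0;
	}

An updater that has removed an element defers freeing it until hp_busy()
returns zero for that element, which is the inside-out reference count
in operation.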
By the same token, RCU can be thought of as a "bulk reference count",
where some form of reference counter covers all references by a given CPU
@@ -123,8 +134,10 @@ can be thought of in other terms as well.
In 2003, the K42 group described how RCU could be used to create
hot-pluggable implementations of operating-system functions [Appavoo03a].
Later that year saw a paper describing an RCU implementation
of System V IPC [Arcangeli03] (following up on a suggestion by
Hugh Dickins [Dickins02a] and an implementation by Mingming Cao
[MingmingCao2002IPCRCU]), and an introduction to RCU in Linux Journal
[McKenney03a].
2004 has seen a Linux-Journal article on use of RCU in dcache
@@ -383,6 +396,21 @@ for Programming Languages and Operating Systems}"
}
}
@phdthesis{HMassalinPhD
,author="H. Massalin"
,title="Synthesis: An Efficient Implementation of Fundamental Operating
System Services"
,school="Columbia University"
,address="New York, NY"
,year="1992"
,annotation={
Mondo optimizing compiler.
Wait-free stuff.
Good advice: defer work to avoid synchronization. See page 90
(PDF page 106), Section 5.4, fourth bullet point.
}
}
@unpublished{Jacobson93
,author="Van Jacobson"
,title="Avoid Read-Side Locking Via Delayed Free"
@@ -671,6 +699,20 @@ Orran Krieger and Rusty Russell and Dipankar Sarma and Maneesh Soni"
[Viewed October 18, 2004]"
}
@conference{Michael02b
,author="Maged M. Michael"
,title="High Performance Dynamic Lock-Free Hash Tables and List-Based Sets"
,Year="2002"
,Month="August"
,booktitle="{Proceedings of the 14\textsuperscript{th} Annual ACM
Symposium on Parallel
Algorithms and Architecture}"
,pages="73-82"
,annotation={
Like the title says...
}
}
@Conference{Linder02a
,Author="Hanna Linder and Dipankar Sarma and Maneesh Soni"
,Title="Scalability of the Directory Entry Cache"
@@ -727,6 +769,24 @@ Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell"
}
}
@conference{Michael02a
,author="Maged M. Michael"
,title="Safe Memory Reclamation for Dynamic Lock-Free Objects Using Atomic
Reads and Writes"
,Year="2002"
,Month="August"
,booktitle="{Proceedings of the 21\textsuperscript{st} Annual ACM
Symposium on Principles of Distributed Computing}"
,pages="21-30"
,annotation={
Each thread keeps an array of pointers to items that it is
currently referencing. Sort of an inside-out garbage collection
mechanism, but one that requires the accessing code to explicitly
state its needs. Also requires read-side memory barriers on
most architectures.
}
}
@unpublished{Dickins02a
,author="Hugh Dickins"
,title="Use RCU for System-V IPC"
@@ -735,6 +795,17 @@ Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell"
,note="private communication"
}
@InProceedings{HerlihyLM02
,author={Maurice Herlihy and Victor Luchangco and Mark Moir}
,title="The Repeat Offender Problem: A Mechanism for Supporting Dynamic-Sized,
Lock-Free Data Structures"
,booktitle={Proceedings of the 16\textsuperscript{th} International
Symposium on Distributed Computing}
,year=2002
,month="October"
,pages="339-353"
}
@unpublished{Sarma02b
,Author="Dipankar Sarma"
,Title="Some dcache\_rcu benchmark numbers"
@@ -749,6 +820,19 @@ Andrea Arcangeli and Andi Kleen and Orran Krieger and Rusty Russell"
}
}
@unpublished{MingmingCao2002IPCRCU
,Author="Mingming Cao"
,Title="[PATCH]updated ipc lock patch"
,month="October"
,year="2002"
,note="Available:
\url{https://lkml.org/lkml/2002/10/24/262}
[Viewed February 15, 2014]"
,annotation={
Mingming Cao's patch to introduce RCU to SysV IPC.
}
}
@unpublished{LinusTorvalds2003a
,Author="Linus Torvalds"
,Title="Re: {[PATCH]} small fixes in brlock.h"
@@ -982,6 +1066,23 @@ Realtime Applications"
}
}
@article{MagedMichael04a
,author="Maged M. Michael"
,title="Hazard Pointers: Safe Memory Reclamation for Lock-Free Objects"
,Year="2004"
,Month="June"
,journal="IEEE Transactions on Parallel and Distributed Systems"
,volume="15"
,number="6"
,pages="491-504"
,url="Available:
\url{http://www.research.ibm.com/people/m/michael/ieeetpds-2004.pdf}
[Viewed March 1, 2005]"
,annotation={
New canonical hazard-pointer citation.
}
}
@phdthesis{PaulEdwardMcKenneyPhD
,author="Paul E. McKenney"
,title="Exploiting Deferred Destruction: