Program counter: Difference between revisions

From Wikipedia, the free encyclopedia
{{Short description|Processor register that indicates where a computer is in its program sequence}}
{{Use dmy dates|date=February 2020|cs1-dates=y}}
[[File:IBM 701console.jpg|thumb|Front panel of an [[IBM 701]] computer introduced in 1952. Lights in the middle display the contents of various registers. The '''instruction counter''' is at the lower left.]]


The '''program counter''' ('''PC'''),<ref name="CompArchOrg">{{cite book |title=Computer Architecture and Organization |last=Hayes |first=John P. |isbn=0-07-027363-4 |year=1978 |page=245|publisher=McGraw-Hill }}</ref> commonly called the '''instruction pointer''' ('''IP''') in [[Intel]] [[x86]] and [[Itanium]] [[microprocessor]]s, and sometimes called the '''instruction address register''' ('''IAR'''),<ref name="Mead_1980" /><ref name="CompArchOrg"/> the '''instruction counter''',<ref name="IBM_1953" /> or just part of the instruction sequencer,<ref name="Katzan_1971" /> is a [[processor register]] that indicates where a [[computer]] is in its [[computer program|program]] sequence.<ref group="nb" name="NB1" />


Usually, the PC is incremented after fetching an [[instruction (computer science)|instruction]], and holds the [[memory address]] of ("[[Pointer (computer programming)|points]] to") the next instruction that would be executed.<ref name="Silberschatz_2018" />{{refn|group="nb"|name="NB2"|In a processor where the incrementation precedes the fetch, the PC points to the current instruction being executed. In some processors, the PC points some distance beyond the current instruction; for instance, in the [[ARM7]], the value of PC visible to the programmer points beyond the current instruction and beyond the [[delay slot]].<ref name="ARM_AG12" />}}


Processors usually fetch instructions sequentially from memory, but ''control transfer'' instructions change the sequence by placing a new value in the PC. These include [[branch (computer science)|branches]] (sometimes called jumps), [[subroutine]] calls, and [[return statement|returns]]. A transfer that is conditional on the truth of some assertion lets the computer follow a different sequence under different conditions.


A branch provides that the next instruction is fetched from elsewhere in memory. A subroutine call not only branches but saves the preceding contents of the PC somewhere. A return retrieves the saved contents of the PC and places it back in the PC, resuming sequential execution with the instruction following the subroutine call.
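The save-and-restore behavior described above can be sketched with a toy interpreter (a hypothetical machine, not any real instruction set), in which <code>CALL</code> pushes the address of the following instruction onto a stack and <code>RET</code> pops it back into the PC:

```python
# Toy machine (hypothetical opcodes) showing how a subroutine call saves
# the PC and a return restores it, using a stack as many architectures do.

def run(program):
    pc = 0            # program counter
    stack = []        # return-address stack
    trace = []        # record of executed instruction addresses
    while pc < len(program):
        op, arg = program[pc]
        trace.append(pc)
        if op == "CALL":      # save the address of the *next* instruction, then branch
            stack.append(pc + 1)
            pc = arg
        elif op == "RET":     # restore the saved PC, resuming after the call
            pc = stack.pop()
        elif op == "JMP":     # unconditional branch: just overwrite the PC
            pc = arg
        elif op == "HALT":
            break
        else:                 # ordinary instruction: the PC simply increments
            pc += 1
    return trace

# A call at address 0 to a subroutine at address 3; RET resumes at address 1.
print(run([("CALL", 3), ("NOP", None), ("HALT", None),
           ("NOP", None), ("RET", None)]))
# → [0, 3, 4, 1, 2]
```

Control jumps to the subroutine at address 3, and the <code>RET</code> at address 4 resumes sequential execution at address 1, the instruction following the call.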


== Hardware implementation ==
In a simple [[central processing unit]] (CPU), the PC is a [[counter (digital)|digital counter]] (which is the origin of the term "program counter") that may be one of several hardware [[Processor register|registers]]. The [[instruction cycle]]<ref name="Hennessy_1990" /> begins with a ''fetch'', in which the CPU places the value of the PC on the [[address bus]] to send it to the memory. The memory responds by sending the contents of that memory location on the [[Bus (computing)|data bus]]. (This is the [[stored-program computer]] model, in which a single memory space contains both executable instructions and ordinary data.<ref name="Randall_1982" />) Following the fetch, the CPU proceeds to ''execution'', taking some action based on the memory contents that it obtained. At some point in this cycle, the PC will be modified so that the next instruction executed is a different one (typically, incremented so that the next instruction is the one starting at the memory address immediately following the last memory location of the current instruction).
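As an illustration, the fetch–execute cycle above can be sketched for a hypothetical accumulator machine (the opcode names and memory layout are invented for the example); note that operands are fetched from the same memory that holds the instructions:

```python
# One fetch-execute step of a toy stored-program machine: the PC supplies
# the address for the fetch, and is incremented unless a jump overwrites it.

def step(memory, state):
    op, operand = memory[state["pc"]]   # fetch: memory[PC] via the "address bus"
    state["pc"] += 1                    # point at the next sequential instruction
    if op == "LOAD":
        state["acc"] = memory[operand]  # operand is itself an address into memory
    elif op == "ADD":
        state["acc"] += memory[operand]
    elif op == "JMP":
        state["pc"] = operand           # control transfer: replace the PC

# Instructions occupy addresses 0-2; ordinary data occupies addresses 3-4.
memory = [("LOAD", 3), ("ADD", 4), ("HALT", None), 10, 32]
state = {"pc": 0, "acc": 0}
step(memory, state)   # acc = 10
step(memory, state)   # acc = 42, pc = 2
```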


Like other processor registers, the PC may be a bank of binary latches, each one representing one bit of the value of the PC.<ref name="Bell_1971" /> The number of bits (the width of the PC) relates to the processor architecture. For instance, a “32-bit” CPU may use 32 bits to be able to address 2<sup>32</sup> units of memory. On some processors, the width of the program counter instead depends on the addressable memory; for example, some [[AVR microcontrollers]] have a PC which wraps around after 12 bits.<ref name="Arnold_2020_AS" />


If the PC is a binary counter, it may increment when a pulse is applied to its COUNT UP input, or the CPU may compute some other value and load it into the PC by a pulse to its LOAD input.<ref name="Walker_1967" />
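A fixed-width PC of this kind can be modeled as a counter whose value is masked to the register width, so that incrementing past the top wraps around (a sketch; the 12-bit width matches the AVR example above):

```python
# Model of a fixed-width PC as a counter with COUNT UP and LOAD inputs.
# Masking to the register width makes the value wrap, as on a 12-bit PC.

class ProgramCounter:
    def __init__(self, width_bits):
        self.mask = (1 << width_bits) - 1
        self.value = 0

    def count_up(self):            # COUNT UP pulse: increment, wrapping at the width
        self.value = (self.value + 1) & self.mask

    def load(self, address):       # LOAD pulse: accept a computed address
        self.value = address & self.mask

pc = ProgramCounter(12)
pc.load(0xFFF)                     # last addressable word for a 12-bit PC
pc.count_up()
print(hex(pc.value))               # → 0x0 (wrapped around)
```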


To identify the current instruction, the PC may be combined with other registers that identify a [[segmentation (memory)|segment]] or [[page (computer memory)|page]]. This approach permits a PC with fewer bits by assuming that most memory units of interest are within the current vicinity.
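For example, modeled loosely on x86 real mode, where a 16-bit instruction pointer is combined with a 16-bit segment register shifted left by four bits to form a 20-bit physical address:

```python
# Combining a narrow PC with a segment register to form a wider
# physical address (x86 real mode computes CS * 16 + IP this way).

def physical_address(segment, offset_pc, shift=4):
    return (segment << shift) + offset_pc

# A 16-bit PC (IP) of 0x0100 within segment 0xF000:
print(hex(physical_address(0xF000, 0x0100)))   # → 0xf0100
```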
== Consequences in machine architecture ==
Use of a PC that normally increments assumes that what a computer does is execute a usually linear sequence of instructions. Such a PC is central to the [[von Neumann architecture]]. Thus programmers write a sequential [[control flow]] even for algorithms that do not have to be sequential. The resulting “[[von Neumann bottleneck]]” led to research into [[parallel computing]],<ref name="Chambers_1984" /> including non-von Neumann or [[dataflow]] models that did not use a PC; for example, rather than specifying sequential steps, the high-level programmer might specify desired [[functional programming|function]] and the low-level programmer might specify this using [[combinatory logic]].


This research also led to ways to make conventional, PC-based CPUs run faster, including:


* [[Pipeline (computing)|Pipelining]], in which different hardware in the CPU executes different phases of multiple instructions simultaneously.
* The [[very long instruction word]] (VLIW) architecture, where a single instruction can achieve multiple effects.
* Techniques to predict [[out-of-order execution]] and prepare subsequent instructions for execution outside the regular sequence.

== Consequences in high-level programming ==
Modern high-level programming languages still follow the sequential-execution model and, indeed, a common way of identifying programming errors is with a “procedure execution” in which the programmer's finger identifies the point of execution as a PC would. The high-level language is essentially the machine language of a virtual machine,<ref name="Hofstadter_1980" /> too complex to be built as hardware but instead emulated or [[interpreter (computing)|interpreted]] by software.
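A sketch of such an interpreter (for a hypothetical three-statement language) shows the software-maintained program counter advancing over high-level statements exactly as a hardware PC advances over machine instructions:

```python
# An interpreter maintains its own "PC" over the statements of a
# higher-level program, mirroring the hardware fetch-execute cycle.

def interpret(statements, env):
    pc = 0
    while pc < len(statements):
        stmt = statements[pc]
        pc += 1                      # default: the next statement in sequence
        if stmt[0] == "set":         # ("set", name, value)
            env[stmt[1]] = stmt[2]
        elif stmt[0] == "inc":       # ("inc", name)
            env[stmt[1]] += 1
        elif stmt[0] == "goto_if_lt":  # ("goto_if_lt", name, limit, target)
            if env[stmt[1]] < stmt[2]:
                pc = stmt[3]         # control transfer within the interpreter
    return env

# Counts x from 0 to 3 via an explicit loop back to statement 1.
print(interpret([("set", "x", 0), ("inc", "x"),
                 ("goto_if_lt", "x", 3, 1)], {}))
# → {'x': 3}
```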


However, new programming models transcend sequential-execution programming:


* When writing a [[thread (computing)|multi-threaded]] program, the programmer may write each thread as a sequence of instructions without specifying the timing of any instruction relative to instructions in other threads.
* In [[event-driven programming]], the programmer may write sequences of instructions to respond to [[event (computing)|events]] without specifying an overall sequence for the program.
* In [[dataflow programming]], the programmer may write each section of a computing [[pipeline programming|pipeline]] without specifying the timing relative to other sections.


== See also ==
* [[Branch prediction]]
* [[Instruction cache]]
* [[Instruction cycle]]
* [[Instruction unit]]
* [[Instruction pipeline]]
* [[Instruction register]]
* [[Instruction scheduling]]
* [[Program status word]]

== Notes ==
{{Reflist|group="nb"|refs=
<ref group="nb" name="NB1">For modern processors, the concept of "where it is in its sequence" is too simplistic, as [[instruction-level parallelism]] and [[out-of-order execution]] may occur.</ref>
}}

== References ==
{{Reflist|refs=
<ref name="Silberschatz_2018">{{cite book |last1=Silberschatz |first1=Abraham |last2=Gagne |first2=Greg |last3=Galvin |first3=Peter B. |author-link1=Abraham Silberschatz |date=April 2018 |title=Operating System Concepts |url=https://www.wiley.com/en-us/Operating+System+Concepts%2C+10th+Edition-p-9781119320913 |location=United States |publisher=[[Wiley (publisher)|Wiley]] |pages=27, G-29 |isbn=978-1-119-32091-3}}</ref>
<ref name="Mead_1980">{{cite book |author-last1=Mead |author-first1=Carver |author-link1=Carver Mead |author-last2=Conway |author-first2=Lynn |author-link2=Lynn Conway |date=1980 |title=Introduction to VLSI Systems |url=https://archive.org/details/introductiontovl00mead |url-access=registration |publisher=[[Addison-Wesley]] |location=Reading, USA |isbn=0-201-04358-0}}</ref>
<ref name="IBM_1953">{{cite book |url=http://bitsavers.org/pdf/ibm/701/24-6042-1_701_PrincOps.pdf |title=Principles of Operation, Type 701 and Associated Equipment |publisher=[[IBM]] |date=1953}}</ref>
<ref name="Katzan_1971">Harry Katzan (1971), ''Computer Organization and the System/370'', [[Van Nostrand Reinhold Company]], New York, USA, LCCCN 72-153191</ref>
<ref name="ARM_AG12">{{cite web |date=2001| title=ARM Developer Suite, Assembler Guide. Version 1.2 |url=http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0068b/Bcfihdhj.html#id2766362 |publisher=[[ARM Limited]] |access-date=2019-10-18}}</ref>
<ref name="Hennessy_1990">[[John L. Hennessy]] and [[David Patterson (scientist)|David A. Patterson]] (1990), ''Computer Architecture: a quantitative approach'', [[Morgan Kaufmann Publishers]], Palo Alto, USA, {{ISBN|1-55860-069-8}}</ref>
<ref name="Randall_1982">B. Randall (1982), ''The Origins of Digital Computers'', [[Springer-Verlag]], Berlin, D</ref>
<ref name="Bell_1971">[[C. Gordon Bell]] and [[Allen Newell]] (1971), ''Computer Structures: Readings and Examples'', [[McGraw-Hill Book Company]], New York, USA</ref>
<ref name="Walker_1967">{{cite book |author-first=B. S. |author-last=Walker |date=1967 |title=Introduction to Computer Engineering |publisher=[[University of London Press]] |location=London, UK |isbn=0-340-06831-0}}</ref>
<ref name="Chambers_1984">F. B. Chambers, D. A. Duce and G. P. Jones (1984), ''Distributed Computing'', [[Academic Press]], Orlando, USA, {{ISBN|0-12-167350-2}}</ref>
<ref name="Hofstadter_1980">[[Douglas Hofstadter]] (1980), ''Gödel, Escher, Bach: an eternal golden braid'', [[Penguin Books]], Harmondsworth, UK, {{ISBN|0-14-005579-7}}</ref>
<ref name="Arnold_2020_AS">{{cite book |title=Macro Assembler AS – User's Manual |version=V1.42 |author-first=Alfred |author-last=Arnold |translator-first1=Alfred |translator-last1=Arnold |translator-first2=Stefan |translator-last2=Hilse |translator-first3=Stephan |translator-last3=Kanthak |translator-first4=Oliver |translator-last4=Sellke |translator-first5=Vittorio |translator-last5=De Tomasi |date=2020 |orig-year=1996, 1989 |chapter=E. Predefined Symbols |chapter-url=http://john.ccac.rwth-aachen.de:8000/as/as_EN.html#sect_E_ |page=Table E.3: Predefined Symbols – Part 3 |url=http://john.ccac.rwth-aachen.de:8000/as/as_EN.html |access-date=2020-02-28 |url-status=live |archive-url=https://web.archive.org/web/20200228144943/http://john.ccac.rwth-aachen.de:8000/as/as_EN.html |archive-date=2020-02-28 |quote=3.2.12. WRAPMODE […] AS will assume that the processor's program counter does not have the full length of 16 bits given by the architecture, but instead a length that is exactly sufficient to address the internal ROM. For example, in case of the [[AT90S8515]], this means 12 bits, corresponding to 4 Kwords or 8 Kbytes. This assumption allows relative branches from the ROM's beginning to the end and vice versa which would result in an out-of-branch error when using strict arithmetics. Here, they work because the carry bits resulting from the target address computation are discarded. […] In case of the abovementioned AT90S8515, this option is even necessary because it is the only way to perform a direct jump through the complete address space […]}}</ref>
}}


{{X86 assembly topics}}


{{CPU technologies}}
{{DEFAULTSORT:Program Counter}}

[[Category:Control flow]]
[[Category:Central processing unit]]
[[Category:Digital registers]]


Revision as of 12:23, 10 May 2024
