Opcode

In computing, an opcode (abbreviated from operation code) [1] [2] is an enumerated value that specifies the operation to be performed. Opcodes are employed in hardware devices such as arithmetic logic units (ALUs) and central processing units (CPUs) as well as in some software instruction sets. In ALUs the opcode is directly applied to circuitry via an input signal bus, whereas in CPUs, the opcode is the portion of a machine language instruction that specifies the operation to be performed.
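
As a rough illustration of an opcode embedded in an instruction, the following sketch assumes a hypothetical fixed-width 16-bit instruction format in which the top four bits hold the opcode and the remaining twelve bits hold an operand field; the format is invented for this example and does not correspond to any particular processor.

```python
# Hypothetical 16-bit instruction format (illustrative only):
# bits 15-12 hold the opcode, bits 11-0 hold an operand field.
def decode(instruction_word: int) -> tuple[int, int]:
    opcode = (instruction_word >> 12) & 0xF       # which operation to perform
    operand_field = instruction_word & 0x0FFF     # data the operation acts on
    return opcode, operand_field

print(decode(0x3A2F))  # (3, 2607): opcode 0x3, operand field 0xA2F
```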

CPUs

Opcodes are found in the machine language instructions of CPUs as well as in some abstract computing machines. In CPUs, an opcode may be referred to as instruction machine code, [3] instruction code, [4] instruction syllable, [5] [6] [7] [8] instruction parcel or opstring. [9] [2] For any particular processor (which may be a general CPU or a more specialized processing unit), the opcodes are defined by the processor's instruction set architecture (ISA), [10] and can be described by means of an opcode table. The types of operations may include arithmetic, data copying, logical operations, and program control, as well as special instructions (e.g., CPUID). [10]
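
An opcode table can be thought of as a lookup from opcode values to the operations they name. The sketch below uses a handful of MOS Technology 6502 opcodes purely as an illustrative subset; a real opcode table covers the processor's entire instruction set.

```python
# A few MOS 6502 opcodes (one byte each), as an illustrative slice of an
# opcode table: opcode value -> (mnemonic and addressing mode, category).
OPCODE_TABLE = {
    0xA9: ("LDA #imm", "data copying"),     # load accumulator, immediate
    0x8D: ("STA abs",  "data copying"),     # store accumulator, absolute
    0x69: ("ADC #imm", "arithmetic"),       # add with carry, immediate
    0x29: ("AND #imm", "logical"),          # bitwise AND, immediate
    0x4C: ("JMP abs",  "program control"),  # jump, absolute
    0xEA: ("NOP",      "special"),          # no operation
}

def describe(opcode: int) -> str:
    mnemonic, category = OPCODE_TABLE.get(opcode, ("???", "unknown"))
    return f"{opcode:#04x}: {mnemonic} ({category})"

print(describe(0xA9))  # 0xa9: LDA #imm (data copying)
```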

In addition to the opcode, many instructions also specify the data (known as operands) the operation will act upon, although some instructions may have implicit operands or none at all. [10] Some instruction sets have nearly uniform fields for opcode and operand specifiers, whereas others (e.g., x86 architecture) have a less uniform, variable-length structure. [10] [11] Instruction sets can be extended through the use of opcode prefixes, reserved byte sequences that, when placed before existing opcodes, form a set of new instructions.[citation needed]
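
The effect of prefixes on a variable-length encoding can be sketched roughly as follows. The logic is heavily simplified: only the x86 operand-size override prefix (0x66) and the 0x0F escape byte that introduces the two-byte opcode map are recognised, so this is an illustration of the idea rather than a usable x86 decoder.

```python
# Heavily simplified sketch of locating the opcode in an x86-style
# variable-length byte stream. Real x86 decoding handles many more
# prefixes, escape bytes, and operand-encoding bytes than shown here.
PREFIXES = {0x66}          # operand-size override (one of several legacy prefixes)
TWO_BYTE_ESCAPE = 0x0F     # reserved byte that extends the opcode space

def read_opcode(code: bytes) -> tuple[list[int], bytes]:
    i = 0
    prefixes = []
    while code[i] in PREFIXES:        # collect any leading prefix bytes
        prefixes.append(code[i])
        i += 1
    if code[i] == TWO_BYTE_ESCAPE:    # escape byte selects the second opcode map
        return prefixes, bytes(code[i:i + 2])
    return prefixes, bytes(code[i:i + 1])

# A stream beginning 66 0F ... has one prefix and a two-byte opcode.
print(read_opcode(bytes([0x66, 0x0F, 0xB7, 0xC0])))
```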

Software instruction sets

Opcodes can be found in so-called byte codes and other representations intended for a software interpreter rather than a hardware device. These software-based instruction sets often employ slightly higher-level data types and operations than most hardware counterparts, but are nevertheless constructed along similar lines. Examples include the byte code found in Java class files which are then interpreted by the Java Virtual Machine (JVM), the byte code used in GNU Emacs for compiled Lisp code, .NET Common Intermediate Language (CIL), and many others. [12]
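
How such software opcodes drive an interpreter can be shown with a minimal stack-machine loop. The opcode values below are the ones commonly documented for the corresponding JVM instructions (iconst_1, iconst_2, iadd, ireturn), but the loop itself is only a toy sketch, not a real bytecode engine.

```python
# Toy stack-based interpreter for a four-instruction bytecode subset.
ICONST_1, ICONST_2, IADD, IRETURN = 0x04, 0x05, 0x60, 0xAC

def run(bytecode: bytes) -> int:
    stack = []
    for opcode in bytecode:
        if opcode == ICONST_1:
            stack.append(1)              # push the constant 1
        elif opcode == ICONST_2:
            stack.append(2)              # push the constant 2
        elif opcode == IADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # integer addition
        elif opcode == IRETURN:
            return stack.pop()           # return the top of the stack
        else:
            raise ValueError(f"unknown opcode {opcode:#04x}")
    raise ValueError("bytecode ended without a return")

print(run(bytes([ICONST_1, ICONST_2, IADD, IRETURN])))  # prints 3
```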

Related Research Articles

<span class="mw-page-title-main">Assembly language</span> Low-level programming language

In computer programming, assembly language, often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported.
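
The 1:1 correspondence can be seen in the sketch below, which pairs two 6502 assembly statements with their standard machine-code encodings (the two-byte address in the second instruction is stored low byte first):

```python
# One assembly statement per machine instruction (MOS 6502 encodings).
program = [
    ("LDA #$01",  bytes([0xA9, 0x01])),        # load the accumulator with 1
    ("STA $0200", bytes([0x8D, 0x00, 0x02])),  # store it at address $0200
]
for source, machine_code in program:
    print(f"{source:<12} -> {machine_code.hex(' ')}")
```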

<span class="mw-page-title-main">MOS Technology 6502</span> 8-bit microprocessor from 1975

The MOS Technology 6502 is an 8-bit microprocessor that was designed by a small team led by Chuck Peddle for MOS Technology. The design team had formerly worked at Motorola on the Motorola 6800 project; the 6502 is essentially a simplified, less expensive and faster version of that design.

<span class="mw-page-title-main">Machine code</span> Lowest level instructions executed by a computer

In computer programming, machine code is computer code consisting of machine language instructions, which are used to control a computer's central processing unit (CPU). For conventional binary computers, machine code is the binary representation of a computer program which is actually read and interpreted by the computer. A program in machine code consists of a sequence of machine instructions.

<span class="mw-page-title-main">Reduced instruction set computer</span> Processor executing one instruction in minimal clock cycles

In electronics and computer science, a reduced instruction set computer (RISC) is a computer architecture designed to simplify the individual instructions given to the computer to accomplish tasks. Compared to the instructions given to a complex instruction set computer (CISC), a RISC computer might require more instructions in order to accomplish a task because the individual instructions are written in simpler code. The goal is to offset the need to process more instructions by increasing the speed of each instruction, in particular by implementing an instruction pipeline, which may be simpler to achieve given simpler instructions.

<span class="mw-page-title-main">Endianness</span> Order of bytes in a computer word

In computing, endianness is the order in which the bytes within a word of digital data are transmitted over a data communication medium or addressed in computer memory; in other words, whether the most significant or the least significant byte comes first. Endianness is primarily expressed as big-endian (BE) or little-endian (LE), terms introduced into computer science by Danny Cohen for data ordering in an Internet Experiment Note published in 1980. The adjective endian has its origin in the writings of the 18th-century Anglo-Irish writer Jonathan Swift. In the 1726 novel Gulliver's Travels, he portrays a conflict between sects of Lilliputians divided over whether to break the shell of a boiled egg from the big end or from the little end. By analogy, a CPU may read a digital word big end first or little end first.
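
A minimal sketch of the two orderings, serialising the same 32-bit word both ways with Python's struct module:

```python
import struct

word = 0x12345678
big_endian    = struct.pack(">I", word)  # most significant byte first
little_endian = struct.pack("<I", word)  # least significant byte first
print(big_endian.hex(" "))     # 12 34 56 78
print(little_endian.hex(" "))  # 78 56 34 12
```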

In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA.

<span class="mw-page-title-main">MCS-51</span> Single chip microcontroller series by Intel

The Intel MCS-51 is a single chip microcontroller (MCU) series developed by Intel in 1980 for use in embedded systems. The architect of the Intel MCS-51 instruction set was John H. Wharton. Intel's original versions were popular in the 1980s and early 1990s, and enhanced binary compatible derivatives remain popular today. It is a complex instruction set computer with separate memory spaces for program instructions and data.

<span class="mw-page-title-main">Intel 8008</span> 8-bit microprocessor

The Intel 8008 is an early 8-bit microprocessor capable of addressing 16 KB of memory, introduced in April 1972. The 8008 architecture was designed by Computer Terminal Corporation (CTC) and was implemented and manufactured by Intel. While the 8008 was originally designed for use in CTC's Datapoint 2200 programmable terminal, an agreement between CTC and Intel permitted Intel to market the chip to other customers after Seiko expressed an interest in using it for a calculator.

x86 assembly language is the name for the family of assembly languages which provide some level of backward compatibility with CPUs back to the Intel 8008 microprocessor, which was launched in April 1972. It is used to produce object code for the x86 class of processors.

In computer engineering, Halt and Catch Fire, known by the assembly language mnemonic HCF, is an idiom referring to a computer machine code instruction that causes the computer's central processing unit (CPU) to cease meaningful operation, typically requiring a restart of the computer. It originally referred to a fictitious instruction in IBM System/360 computers, making a joke about its numerous non-obvious instruction mnemonics.

A fat binary is a computer executable program or library which has been expanded with code native to multiple instruction sets which can consequently be run on multiple processor types. This results in a file larger than a normal one-architecture binary file, thus the name.

The x86 instruction set refers to the set of instructions that x86-compatible microprocessors support. The instructions are usually part of an executable program, often stored as a computer file and executed on the processor.

<span class="mw-page-title-main">Intel 8087</span> Floating-point microprocessor made by Intel

The Intel 8087, announced in 1980, was the first floating-point coprocessor for the 8086 line of microprocessors. The purpose of the chip was to speed up floating-point arithmetic operations such as addition, subtraction, multiplication, division, and square root. It also computed transcendental functions such as exponentials, logarithms, and trigonometric functions. The performance enhancements ranged from approximately 20% to over 500%, depending on the specific application. The 8087 could perform about 50,000 FLOPS using around 2.4 watts.

In computer engineering, an orthogonal instruction set is an instruction set architecture where all instruction types can use all addressing modes. It is "orthogonal" in the sense that the instruction type and the addressing mode vary independently. An orthogonal instruction set does not impose a limitation that requires a certain instruction to use a specific register so there is little overlapping of instruction functionality.

Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions.
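
A few of these idioms in a short sketch: setting, clearing, and testing individual bits, and extracting a multi-bit field from a value.

```python
# Common bit-manipulation idioms on an 8-bit value.
value = 0b1010_0110

set_bit3   = value | (1 << 3)        # force bit 3 to 1  -> 0b10101110
clear_bit5 = value & ~(1 << 5)       # force bit 5 to 0  -> 0b10000110
bit1_set   = bool(value & (1 << 1))  # test whether bit 1 is set -> True
low_nibble = value & 0xF             # extract the low 4-bit field -> 0b0110

print(bin(set_bit3), bin(clear_bit5), bit1_set, bin(low_nibble))
```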

TLCS is a prefix applied to microcontrollers made by Toshiba. The product line includes multiple families of CISC and RISC architectures. Individual components generally have a part number beginning with "TMP"; for example, the TMP8048AP is a member of the TLCS-48 family.

A translator or programming language processor is a computer program that converts programming instructions written in a form convenient for humans into machine language codes that computers understand and process. It is a generic term that can refer to a compiler, assembler, or interpreter: anything that converts code from one computer language into another. This includes translation between high-level, human-readable languages such as C++ and Java, intermediate-level languages such as Java bytecode, low-level languages such as assembly language and machine code, and between similar levels of language on different computing platforms, as well as from any of these to any other of these. Software and hardware sit at different levels of abstraction: software is typically written in high-level programming languages that are easier for humans to understand and manipulate, while hardware works from low-level descriptions of physical components and their interconnections. By converting between these levels, translators bridge the gap between software and hardware, letting developers exploit the strengths of each platform and optimize performance, power efficiency, and other metrics for the requirements of a given application.

An instruction set architecture (ISA) is an abstract model of a computer, also referred to as computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost; because the ISA serves as the interface between software and hardware, software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, along with the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today.

<span class="mw-page-title-main">Arithmetic logic unit</span> Combinational digital circuit

In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs).
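
The way an opcode on the select inputs chooses which operation an ALU performs can be modelled in software as a simple dispatch. The 2-bit opcode assignments below are arbitrary and purely illustrative, not taken from any real device.

```python
# Software model of a 4-operation ALU: a 2-bit opcode selects which
# combinational function produces the result. Opcode values are arbitrary.
def alu(opcode: int, a: int, b: int, width: int = 8) -> int:
    mask = (1 << width) - 1
    if opcode == 0b00:
        return (a + b) & mask   # add (result truncated to the data width)
    if opcode == 0b01:
        return (a - b) & mask   # subtract
    if opcode == 0b10:
        return a & b            # bitwise AND
    if opcode == 0b11:
        return a | b            # bitwise OR
    raise ValueError("opcode out of range")

print(alu(0b00, 200, 100))  # 44: 8-bit addition wraps around
```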

<span class="mw-page-title-main">WD16</span> Microprocessor produced by Western Digital

The WD16 is a 16-bit microprocessor introduced by Western Digital in October 1976. It is based on the MCP-1600 chipset, a general-purpose design that was also used to implement the DEC LSI-11 low-end minicomputer and the Pascal MicroEngine processor. The three systems differed primarily in their microcode, giving each system a unique instruction set architecture (ISA).

References

  1. Barron, David William (1978) [1971, 1969]. "2.1. Symbolic instructions". Written at University of Southampton, Southampton, UK. In Floretin, J. John (ed.). Assemblers and Loaders. Computer Monographs (3 ed.). New York, USA: Elsevier North-Holland Inc. p. 7. ISBN 0-444-19462-2. LCCN 78-19961. (xii+100 pages)
  2. Chiba, Shigeru (2007) [1999]. "Javassist, a Java-bytecode translator toolkit". Archived from the original on 2020-03-02. Retrieved 2016-05-27.
  3. "Appendix B - Instruction Machine Codes" (PDF). MCS-4 Assembly Language Programming Manual - The INTELLEC 4 Microcomputer System Programming Manual (Preliminary ed.). Santa Clara, California, USA: Intel Corporation. December 1973. pp. B-1–B-8. MCS-030-1273-1. Archived (PDF) from the original on 2020-03-01. Retrieved 2020-03-02.
  4. Raphael, Howard A., ed. (November 1974). "The Functions Of A Computer: Instruction Register And Decoder" (PDF). MCS-40 User's Manual For Logic Designers. Santa Clara, California, USA: Intel Corporation. p. viii. Archived (PDF) from the original on 2020-03-03. Retrieved 2020-03-03. […] Each operation that the processor can perform is identified by a unique binary number known as an instruction code. […]
  5. Jones, Douglas W. (June 1988). "A Minimal CISC". ACM SIGARCH Computer Architecture News. 16 (3). New York, USA: Association for Computing Machinery (ACM): 56–63. doi:10.1145/48675.48684. S2CID 17280173.
  6. Domagała, Łukasz (2012). "7.1.4. Benchmark suite". Application of CLP to instruction modulo scheduling for VLIW processors. Gliwice, Poland: Jacek Skalmierski Computer Studio. pp. 80–83 [83]. ISBN 978-83-62652-42-6. Archived from the original on 2020-03-02. Retrieved 2016-05-28.
  7. Smotherman, Mark (2016) [2013]. "Multiple Instruction Issue". School of Computing, Clemson University. Archived from the original on 2016-05-28. Retrieved 2016-05-28.
  8. Jones, Douglas W. (2016) [2012]. "A Minimal CISC". Computer Architecture On-Line Collection. Iowa City, USA: The University of Iowa, Department of Computer Science. Archived from the original on 2020-03-02. Retrieved 2016-05-28.
  9. Schulman, Andrew (2005-07-01). "Finding Binary Clones with Opstrings & Function Digests". Dr. Dobb's Journal. Part I. Vol. 30, no. 7. CMP Media LLC. pp. 69–73. ISSN 1044-789X. #374. Archived from the original on 2020-03-02. Retrieved 2020-03-02; Schulman, Andrew (2005-08-01). "Finding Binary Clones with Opstrings & Function Digests". Dr. Dobb's Journal. Part II. Vol. 30, no. 8. CMP Media LLC. pp. 56–61. ISSN 1044-789X. #375. Archived from the original on 2020-03-02. Retrieved 2016-05-28; Schulman, Andrew (2005-09-01). "Finding Binary Clones with Opstrings & Function Digests". Dr. Dobb's Journal. Part III. Vol. 30, no. 9. United Business Media. pp. 64–70. ISSN 1044-789X. #376. Archived from the original on 2020-03-02. Retrieved 2016-05-28.
  10. Hennessy, John L.; Patterson, David A.; Asanović, Krste; Bakos, Jason D.; Colwell, Robert P.; Bhattacharjee, Abhishek; Conte, Thomas M.; Duato, José; Franklin, Diana; Goldberg, David; Jouppi, Norman P.; Li, Sheng; Muralimanohar, Naveen; Peterson, Gregory D.; Pinkston, Timothy M.; Ranganathan, Parthasarathy; Wood, David A.; Young, Cliff; Zaky, Amr (2017-11-23). Computer architecture: A quantitative approach (6 ed.). Cambridge, Massachusetts, USA: Morgan Kaufmann Publishers. ISBN 978-0-12811905-1. OCLC 983459758.
  11. Mansfield, Richard (1983). "Introduction: Why Machine Language?". Machine Language For Beginners. Compute! Books (1 ed.). Greensboro, North Carolina, USA: COMPUTE! Publications, Inc., American Broadcasting Companies, Inc.; Small System Services, Inc. ISBN 0-942386-11-6. Archived from the original on 2008-02-13. Retrieved 2016-05-28.
  12. "bytecode Definition". PC Magazine. PC Magazine Encyclopedia. Archived from the original on 2012-10-06. Retrieved 2015-10-10.