In computing, an opcode (abbreviated from operation code) [1] [2] is an enumerated value that specifies the operation to be performed. Opcodes are employed in hardware devices such as arithmetic logic units (ALUs) and central processing units (CPUs) as well as in some software instruction sets. In ALUs the opcode is directly applied to circuitry via an input signal bus, whereas in CPUs, the opcode is the portion of a machine language instruction that specifies the operation to be performed.
Opcodes are found in the machine language instructions of CPUs as well as in some abstract computing machines. In CPUs, an opcode may be referred to as instruction machine code, [3] instruction code, [4] instruction syllable, [5] [6] [7] [8] instruction parcel or opstring. [9] [2] For any particular processor (which may be a general CPU or a more specialized processing unit), the opcodes are defined by the processor's instruction set architecture (ISA), [10] and can be described by means of an opcode table. The types of operations may include arithmetic, data copying, logical operations, and program control, as well as special instructions (e.g., CPUID). [10]
In addition to the opcode, many instructions also specify the data (known as operands) the operation will act upon, although some instructions may have implicit operands or none at all. [10] Some instruction sets have nearly uniform fields for opcode and operand specifiers, whereas others (e.g., x86 architecture) have a less uniform, variable-length structure. [10] [11] Instruction sets can be extended through the use of opcode prefixes: reserved byte sequences which, when placed before existing opcodes, select an additional set of instructions. [citation needed]
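To make the split between opcode and operand fields concrete, the following C sketch decodes a hypothetical fixed-width 16-bit instruction word whose top four bits hold the opcode and whose remaining bits name two registers and a small immediate. The field widths and layout are illustrative assumptions, not those of any real ISA.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 16-bit format: [15:12] opcode, [11:8] destination register,
   [7:4] source register, [3:0] 4-bit immediate. Purely illustrative. */
typedef struct {
    uint8_t opcode;
    uint8_t rd;
    uint8_t rs;
    uint8_t imm4;
} decoded_insn;

static decoded_insn decode(uint16_t word)
{
    decoded_insn d;
    d.opcode = (word >> 12) & 0xF;  /* operation selector */
    d.rd     = (word >> 8)  & 0xF;  /* destination register */
    d.rs     = (word >> 4)  & 0xF;  /* source register */
    d.imm4   =  word        & 0xF;  /* small immediate operand */
    return d;
}

int main(void)
{
    decoded_insn d = decode(0x1A3C);
    printf("opcode=%u rd=%u rs=%u imm=%u\n",
           (unsigned)d.opcode, (unsigned)d.rd, (unsigned)d.rs, (unsigned)d.imm4);
    return 0;
}
```

With a uniform format like this, decoding is a handful of shifts and masks; variable-length encodings such as x86's require the decoder to examine prefixes and earlier bytes before it knows where each field lies.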
Opcodes can be found in so-called byte codes and other representations intended for a software interpreter rather than a hardware device. These software-based instruction sets often employ slightly higher-level data types and operations than most hardware counterparts, but are nevertheless constructed along similar lines. Examples include the byte code found in Java class files which are then interpreted by the Java Virtual Machine (JVM), the byte code used in GNU Emacs for compiled Lisp code, .NET Common Intermediate Language (CIL), and many others. [12]
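A software interpreter dispatching on such opcodes can be sketched in a few lines of C. The opcodes below are invented for a toy stack machine and do not correspond to any real bytecode format such as JVM bytecode or CIL; the point is only the fetch-and-dispatch loop.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Invented one-byte opcodes for a toy stack machine. */
enum { OP_HALT = 0x00, OP_PUSH = 0x01, OP_ADD = 0x02, OP_PRINT = 0x03 };

static void run(const uint8_t *code)
{
    int stack[16];
    int sp = 0;                               /* next free stack slot */
    for (size_t pc = 0; ; ) {
        uint8_t op = code[pc++];              /* fetch the opcode */
        switch (op) {                         /* dispatch on it */
        case OP_PUSH:  stack[sp++] = code[pc++]; break;  /* operand follows opcode */
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        default:       fprintf(stderr, "bad opcode 0x%02X\n", op); return;
        }
    }
}

int main(void)
{
    /* push 2, push 3, add, print, halt -> prints 5 */
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```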
In computer programming, assembly language, often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels for entities such as memory locations and registers, and macros are generally also supported.
The MOS Technology 6502 is an 8-bit microprocessor that was designed by a small team led by Chuck Peddle for MOS Technology. The design team had formerly worked at Motorola on the Motorola 6800 project; the 6502 is essentially a simplified, less expensive and faster version of that design.
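The 1:1 correspondence between assembly statements and machine instructions described above can be illustrated with the 6502. The following minimal sketch lists the machine-code bytes of a short routine next to the assembly statements they encode, using the standard 6502 encodings (LDA immediate = 0xA9, STA absolute = 0x8D with a little-endian address, RTS = 0x60).

```c
#include <stdint.h>

/* Machine code for a tiny 6502 routine, with the corresponding assembly
   mnemonics shown in the comments: one statement per instruction. */
static const uint8_t routine[] = {
    0xA9, 0x01,        /* LDA #$01   ; load accumulator with 1       */
    0x8D, 0x00, 0x02,  /* STA $0200  ; store accumulator at $0200    */
    0x60               /* RTS        ; return from subroutine        */
};
```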
In computer programming, machine code is computer code consisting of machine language instructions, which are used to control a computer's central processing unit (CPU). For conventional binary computers, machine code is the binary representation of a computer program which is actually read and interpreted by the computer. A program in machine code consists of a sequence of machine instructions.
In electronics and computer science, a reduced instruction set computer (RISC) is a computer architecture designed to simplify the individual instructions given to the computer to accomplish tasks. Compared to the instructions given to a complex instruction set computer (CISC), a RISC computer might require more instructions in order to accomplish a task because the individual instructions are written in simpler code. The goal is to offset the need to process more instructions by increasing the speed of each instruction, in particular by implementing an instruction pipeline, which may be simpler to achieve given simpler instructions.
In computing, endianness is the order in which the bytes within a word of digital data are transmitted over a data communication medium or addressed in computer memory; it describes how the significance of each byte relates to its position in the sequence. Endianness is primarily expressed as big-endian (BE) or little-endian (LE), terms introduced into computer science by Danny Cohen for data ordering in an Internet Experiment Note published in 1980. The adjective endian has its origin in the writings of the 18th-century Anglo-Irish writer Jonathan Swift. In the 1726 novel Gulliver's Travels, he portrays a conflict between sects of Lilliputians divided between those who break the shell of a boiled egg from the big end and those who break it from the little end. By analogy, a CPU may read a digital word big end first or little end first.
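A short C program can reveal the byte order of the machine it runs on by inspecting how a 32-bit word is laid out in memory; the output depends on the host, which is the point of the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t word = 0x0A0B0C0D;
    uint8_t bytes[4];
    memcpy(bytes, &word, sizeof word);   /* copy the in-memory representation */

    /* Little-endian hosts print: 0D 0C 0B 0A (least significant byte first).
       Big-endian hosts print:    0A 0B 0C 0D (most significant byte first). */
    for (int i = 0; i < 4; i++)
        printf("%02X ", (unsigned)bytes[i]);
    printf("\n");
    return 0;
}
```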
In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA.
The Intel MCS-51 is a single chip microcontroller (MCU) series developed by Intel in 1980 for use in embedded systems. The architect of the Intel MCS-51 instruction set was John H. Wharton. Intel's original versions were popular in the 1980s and early 1990s, and enhanced binary compatible derivatives remain popular today. It is a complex instruction set computer with separate memory spaces for program instructions and data.
The Intel 8008 is an early 8-bit microprocessor capable of addressing 16 KB of memory, introduced in April 1972. The 8008 architecture was designed by Computer Terminal Corporation (CTC) and was implemented and manufactured by Intel. While the 8008 was originally designed for use in CTC's Datapoint 2200 programmable terminal, an agreement between CTC and Intel permitted Intel to market the chip to other customers after Seiko expressed an interest in using it for a calculator.
x86 assembly language is the name for the family of assembly languages which provide some level of backward compatibility with CPUs back to the Intel 8008 microprocessor, which was launched in April 1972. It is used to produce object code for the x86 class of processors.
In computer engineering, Halt and Catch Fire, known by the assembly language mnemonic HCF, is an idiom referring to a computer machine code instruction that causes the computer's central processing unit (CPU) to cease meaningful operation, typically requiring a restart of the computer. It originally referred to a fictitious instruction in IBM System/360 computers, making a joke about its numerous non-obvious instruction mnemonics.
A fat binary is a computer executable program or library which has been expanded with code native to multiple instruction sets which can consequently be run on multiple processor types. This results in a file larger than a normal one-architecture binary file, thus the name.
The x86 instruction set refers to the set of instructions that x86-compatible microprocessors support. The instructions are usually part of an executable program, often stored as a computer file and executed on the processor.
The Intel 8087, announced in 1980, was the first floating-point coprocessor for the 8086 line of microprocessors. The purpose of the chip was to speed up floating-point arithmetic operations, such as addition, subtraction, multiplication, division, and square root. It also computed transcendental functions such as exponentials, logarithms, and trigonometric functions. The performance enhancements ranged from approximately 20% to over 500%, depending on the specific application. The 8087 could perform about 50,000 FLOPS using around 2.4 watts.
In computer engineering, an orthogonal instruction set is an instruction set architecture where all instruction types can use all addressing modes. It is "orthogonal" in the sense that the instruction type and the addressing mode vary independently. An orthogonal instruction set does not impose a limitation that requires a certain instruction to use a specific register so there is little overlapping of instruction functionality.
Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions.
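The basic bit-manipulation operations are easiest to see in code. The following C helpers set, clear, toggle, and test individual bits of a byte-sized register; the register and bit positions are hypothetical, but the shift-and-mask idioms are the standard ones.

```c
#include <stdint.h>
#include <stdio.h>

/* Single-bit operations on a control/status word; bit positions are
   numbered from 0 at the least significant end. */
static uint8_t set_bit(uint8_t v, unsigned n)    { return v |  (uint8_t)(1u << n); }
static uint8_t clear_bit(uint8_t v, unsigned n)  { return v & ~(uint8_t)(1u << n); }
static uint8_t toggle_bit(uint8_t v, unsigned n) { return v ^  (uint8_t)(1u << n); }
static int     test_bit(uint8_t v, unsigned n)   { return (v >> n) & 1u; }

int main(void)
{
    uint8_t reg = 0x00;
    reg = set_bit(reg, 3);      /* reg == 0x08 */
    reg = toggle_bit(reg, 0);   /* reg == 0x09 */
    printf("bit 3 is %d, register = 0x%02X\n", test_bit(reg, 3), (unsigned)reg);
    reg = clear_bit(reg, 3);    /* reg == 0x01 */
    return 0;
}
```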
TLCS is a prefix applied to microcontrollers made by Toshiba. The product line includes multiple families of CISC and RISC architectures. Individual components generally have a part number beginning with "TMP"; for example, the TMP8048AP is a member of the TLCS-48 family.
A translator or programming language processor is a computer program that converts programming instructions written in a human-convenient form into machine language code that computers can understand and process. It is a generic term that can refer to a compiler, assembler, or interpreter: anything that converts code from one computer language into another. These include translations between high-level and human-readable computer languages such as C++ and Java, intermediate-level languages such as Java bytecode, low-level languages such as assembly language and machine code, and between similar levels of language on different computing platforms, as well as from any of these to any other of these. Software and hardware represent different levels of abstraction in computing. Software is typically written in high-level programming languages, which are easier for humans to understand and manipulate, while hardware implementations involve low-level descriptions of physical components and their interconnections. Translators facilitate conversion between these levels of abstraction. Overall, translation plays a crucial role in bridging the gap between software and hardware implementations, enabling developers to leverage the strengths of each platform and optimize performance, power efficiency, and other metrics according to the specific requirements of the application.
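The mnemonic-to-opcode step at the heart of an assembler can be sketched as a table lookup. The mnemonics and opcode values below are invented for illustration and do not belong to any real instruction set; real assemblers additionally handle operands, labels, expressions, and relocation.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A toy "assembler" pass: translate invented mnemonics into invented
   one-byte opcodes via a lookup table. */
struct entry { const char *mnemonic; uint8_t opcode; };

static const struct entry table[] = {
    { "NOP",  0x00 },
    { "LOAD", 0x01 },
    { "ADD",  0x02 },
    { "HALT", 0xFF },
};

static int assemble_one(const char *mnemonic, uint8_t *out)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(mnemonic, table[i].mnemonic) == 0) {
            *out = table[i].opcode;   /* translated successfully */
            return 1;
        }
    }
    return 0;                         /* unknown mnemonic */
}

int main(void)
{
    const char *source[] = { "LOAD", "ADD", "HALT" };
    for (size_t i = 0; i < 3; i++) {
        uint8_t byte;
        if (assemble_one(source[i], &byte))
            printf("%-5s -> 0x%02X\n", source[i], (unsigned)byte);
    }
    return 0;
}
```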
An instruction set architecture (ISA) is an abstract model of a computer, also referred to as computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost. Because the ISA serves as the interface between software and hardware, software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, as well as the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today.
In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs).
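Although an ALU is a combinational circuit rather than a program, its behavior can be modeled in software: an opcode presented on a (simulated) control input selects which arithmetic or bitwise operation is applied to the two integer inputs. The opcode names and values below are invented for this sketch.

```c
#include <stdint.h>
#include <stdio.h>

/* Software model of a very small ALU: the opcode selects the operation
   applied to two 32-bit integer operands. */
enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_XOR };

static uint32_t alu(enum alu_op op, uint32_t a, uint32_t b)
{
    switch (op) {
    case ALU_ADD: return a + b;
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b;
    case ALU_OR:  return a | b;
    case ALU_XOR: return a ^ b;
    }
    return 0;   /* unreachable for valid opcodes */
}

int main(void)
{
    printf("%u\n",   (unsigned)alu(ALU_ADD, 2, 3));        /* prints 5    */
    printf("0x%X\n", (unsigned)alu(ALU_AND, 0xF0, 0x3C));  /* prints 0x30 */
    return 0;
}
```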
The WD16 is a 16-bit microprocessor introduced by Western Digital in October 1976. It is based on the MCP-1600 chipset, a general-purpose design that was also used to implement the DEC LSI-11 low-end minicomputer and the Pascal MicroEngine processor. The three systems differed primarily in their microcode, giving each system a unique instruction set architecture (ISA).
[…] Each operation that the processor can perform is identified by a unique binary number known as an instruction code. […]