address generation interlock
a mechanism to stall the pipeline for one cycle when an address used in one machine cycle is being calculated or loaded in the previous cycle. Address generation interlocks cause the CPU to be delayed for a cycle. [SILC99]

addressing
a mechanism to refer to a device or storage location by an identifying number, character, or group of characters, which may contain a piece of data or a program setup. [SILC99]

addressing mode
defines how a processor determines the destination address for an operation. The different addressing modes of a processor determine the variety of ways that an operand or its address can be referenced by an instruction. [SILC99]

addressing range
defines the number of memory locations addressable by the CPU. For a processor with one address space, the range is determined by the number of signal lines on the address bus of the CPU. [SILC99]

alignment
placement of operand values in memory at addresses related to their size or length. By naturally aligned data (or aligned, for short) we understand that a data item's lowest-addressed byte must reside in memory at an address that is a multiple of the size of the data item (in bytes). Thus, a properly aligned value is positioned at an address equal to an integral multiple of its size. For example, the address of a naturally aligned long word is a multiple of four. [SILC99]
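
The natural-alignment rule reduces to a one-line check; this Python sketch (the helper name is ours, not from [SILC99]) tests whether an address is naturally aligned for a datum of a given size:

```python
def is_naturally_aligned(address: int, size_in_bytes: int) -> bool:
    # A datum is naturally aligned when its lowest-addressed byte
    # sits at an address that is a multiple of the datum's size.
    return address % size_in_bytes == 0

# A 4-byte long word at 0x1004 is aligned; at 0x1006 it is not.
```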

antidependence
a potential conflict between two instructions, where the second instruction alters an operand that is read by the first instruction. For correct results, the first instruction must read the operand before the second alters it. Also called a write-after-read hazard. [SILC99]

architectural state
the value of registers, flags, and memory as viewed by the programmer. [SILC99]

architecture
an image of a computing system as seen by a programmer capable of programming in machine language. Includes all registers accessible by any instruction, including the privileged instructions, the complete instruction set, all instruction and data formats, addressing modes, and other details that are necessary in order to write a machine language program. [SILC99]

arithmetic instruction
a machine instruction that performs computation, such as addition or multiplication. [SILC99]

arithmetic logic unit (ALU)
the logic circuitry that performs arithmetic calculations on binary numbers and makes logical decisions based on Boolean operations. [SILC99]

associative memory
a memory in which each storage location is selected by its contents and then an associated data location can be accessed. Requires a comparator with each storage location and hence is more complex than random-access memory. Used in fully associative cache memory and in some translation lookaside buffers. Also called content addressable memory. [SILC99]

associativity, in a cache
the number of lines in a set. An n-way set-associative cache has n lines in each set. The term block is also used for line. [SILC99]

asynchronous
a system (e.g., computer, circuit, device) in which events are not executed in a regular time relationship; they are timing-independent. Each event or operation is performed upon receipt of a signal generated by the completion of a previous event or operation, or upon availability of the system resources required by the event or operation. [SILC99]


barrel shifter
a shifter containing log2(maximum number of bit positions shifted) stages, where each stage shifts its input by a different power-of-2 number of positions. It lends itself well to being pipelined. [SILC99]
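
The stage structure can be modelled in Python: each loop iteration below stands for one hardware stage that either passes its input through unchanged or shifts it by a power of two (a behavioural sketch, not a gate-level description):

```python
def barrel_shift_left(value: int, amount: int, width: int = 32) -> int:
    mask = (1 << width) - 1
    # One stage per bit of the shift amount: stages shift by 1, 2, 4, ...
    num_stages = (width - 1).bit_length()   # log2(width) when width is a power of two
    for stage in range(num_stages):
        if (amount >> stage) & 1:
            value = (value << (1 << stage)) & mask
    return value
```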

benchmark
standard tests that are used to compare the performance of computers, processors, circuits, or algorithms. [SILC99]

big-endian
a storage scheme in which the most significant unit of data or an address is stored at the lowest memory address. [SILC99]
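
Python's struct module makes the byte order visible; ">I" requests a big-endian 32-bit unsigned integer, "<I" a little-endian one:

```python
import struct

# Big-endian: the most significant byte (0x12) occupies the lowest address.
data = struct.pack(">I", 0x12345678)
assert data == b"\x12\x34\x56\x78"

# The same value stored little-endian, for contrast:
assert struct.pack("<I", 0x12345678) == b"\x78\x56\x34\x12"
```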

binary operator
any mathematical operator that requires two data elements with which to perform the operation. [SILC99]

block, in a cache
a group of sequential locations held as one unit in a cache and selected as a whole. Also called a line. [SILC99]

branch history table (BHT)
a buffer that is used to hold the history of previous branch paths taken during the execution of individual branch instructions. The BHT is used to improve prediction of the correct branch path whenever a branch instruction is encountered. [SILC99]
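
A common (though not the only) BHT entry format is a two-bit saturating counter; this sketch shows how such an entry is updated and consulted:

```python
def update_counter(counter: int, taken: bool) -> int:
    # Two-bit saturating counter: states 0-1 predict not taken,
    # states 2-3 predict taken; saturates at 0 and 3.
    if taken:
        return min(counter + 1, 3)
    return max(counter - 1, 0)

def predict_taken(counter: int) -> bool:
    return counter >= 2
```

Saturation means a single anomalous branch outcome (e.g., a loop exit) does not immediately flip a strongly established prediction.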

branch penalty
the delay in a pipeline after a branch instruction when instructions in the pipeline must be cleared from the pipeline and other instructions fetched. Occurs because instructions are fetched into the pipeline one after the other and before the outcome of branch instructions is known. [SILC99]

branch prediction
a mechanism used to predict the outcome of branch instructions prior to their execution. Pipelined machines must fetch the next instruction before they have completely executed the previous instruction. If the previous instruction was a branch, the next instruction fetch could have been from the wrong place. Branch prediction is a technique that attempts to infer the proper next instruction address, knowing only the current one, typically using an associative memory called a branch target buffer. [SILC99]

branch recovery
when a branch is mispredicted, the speculative state of the machine must be flushed and fetching restarted from the correct target address. [SILC99]

branch target address
the address of the instruction to be executed after a branch instruction if the conditions of the branch are satisfied. [SILC99]

branch target buffer (BTB)
a hardware component that holds the branch target address of previously executed branch instructions. Used to predict the outcome of branch instructions when these instructions are next encountered. [SILC99]


cache coherence
state of a multiprocessor computer system having multiple caches ensuring (by a cache coherence protocol) that a read-data access of a processor will always deliver the last memory word written by any other processor to that memory address. [SILC99]

cache, direct-mapped
a cache using random-access memory in which each cache line and the most significant bits of its main memory address (the tag) are held together in the cache at a location given by the least significant bits of the memory address (the index). After the cache line is selected by its index, the tag is compared with the most significant bits of the required memory address to find whether the line is the required line and to access the line. [SILC99]
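
The index/tag split described above can be sketched as follows; the cache geometry (32-byte lines, 1024 lines) is an arbitrary example, not a value from [SILC99]:

```python
def split_address(address: int, line_bytes: int = 32, num_lines: int = 1024):
    # offset selects a byte within the line, index selects the cache line,
    # and the remaining high-order bits form the tag stored with the line.
    offset = address % line_bytes
    index = (address // line_bytes) % num_lines
    tag = address // (line_bytes * num_lines)
    return tag, index, offset
```

Two addresses that differ by exactly line_bytes * num_lines map to the same index but carry different tags, which is why a direct-mapped cache can hold only one of them at a time.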

cache, fully associative
a cache using associative memory in which the addresses of the lines are stored with the lines. All the addresses stored in the cache are compared with the incoming address simultaneously to find whether the line is in the cache and to access the line. [SILC99]

cache hit
occurs when the processor requests data from memory and the data requested is already in the cache memory. [SILC99]

cache line
a block of data associated with a cache tag. [SILC99]

cache memory
a small, fast, redundant memory used to store the most frequently accessed parts of the main memory. [SILC99]

cache miss
occurs when the processor requests data from memory and the data requested is not in the cache memory. When this occurs it is necessary to access the next level in the memory hierarchy (potentially the main memory) to retrieve the data. [SILC99]

cache, set-associative
a cache which is divided into a number of sets, each set consisting of groups of lines and each line has its own stored tag (the most significant bits of the address). A set is accessed first using the index (the least significant bits of the address). Then all the tags in the set are compared with that of the required line to find whether the line is in the cache and to access the line. [SILC99]

cache, unified
a cache which can hold both instructions and data. [SILC99]

central processing unit (CPU)
a part of a computer which performs the actual data processing operations and controls the whole computing system. [SILC99]

CISC processor
a processor with a large quantity of instructions, some of which may be quite complicated, as well as a large quantity of different addressing modes, instruction and data formats, and other attributes. A CISC processor usually has a relatively complicated control unit. Most CISC processors are microprogrammed. [SILC99]

clock cycle
one complete event of a synchronous system's timer, including both the high and low periods. [SILC99]

context switching
an operation that switches the CPU from one process to another, by saving all of the CPU registers for the first and replacing them with the CPU registers for the second. [SILC99]

coprocessor
a processor that is connected to a main processor and operates concurrently with the main processor, although under the control of the main processor. Coprocessors are usually special-purpose processing units, such as floating-point, array, DSP, or graphics data processors. [SILC99]


data dependence
the situation between two sequential instructions in a program when the first instruction produces a result that is used as an input operand by the second instruction. To obtain the desired result, the second instruction must not read the location that will hold the result until the first has written its result to the location. Also called a read-after-write dependence or a flow dependence. [SILC99]

dataflow architecture
an architecture that operates by having source operands trigger the issue and execution of each operation, without relying on the traditional, sequential von Neumann style of fetching and issuing instructions. [SILC99]

dataflow computer
a computer in which instructions are executed when the operands that the instructions require become available rather than being selected in sequence by a program counter as in a traditional von Neumann computer. Usually, more than one processor is present to execute the instructions simultaneously when possible. [SILC99]

dataflow graph
a directed graph consisting of named nodes, which represent instructions, and arcs, which represent data dependences between instructions. During the execution of the program, data propagate along the arcs in data packets, called tokens. [SILC99]

D-cache (data cache)
a cache that only holds the data of a program (not instructions). [SILC99]

delayed branch instruction
a form of conditional branch instruction in which one or more instructions immediately following the branch instruction are executed irrespective of the outcome of the branch. The branch then takes effect. Used to reduce branch penalty. [SILC99]

delay slot
in a pipelined processor, a time slot following a branch instruction. An instruction issued within this slot is executed regardless of whether the branch condition is met, so it may appear that the program is executing instructions out of order. Delay slots can be filled (by compilers) by rearranging the program steps, but when this is not possible, they are filled with no-op instructions. [SILC99]

demand-driven execution
execution where an instruction is executed if there is a demand for its result. [SILC99]

dependence
a logical constraint between two operations based on information flowing between their source and/or destination operands; the constraint imposes an ordering on the execution of (at least) portions of the operations. [SILC99]

digital signal processor (DSP)
a microprocessor specifically designed for processing digital signals. [SILC99]

dynamic scheduling
issuing instructions to functional units out of program order. The processor can dynamically issue an instruction as soon as all its operands are available and the required functional unit is not busy. Thus, an instruction is not delayed by a stalled previous instruction unless it needs the results of that previous instruction. [SILC99]


exception
an event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception, protection exception, and underflow exception. [SILC99]

explicit token store
the concept of allocating a separate frame for each active loop iteration or subprogram invocation in the token memory of a dataflow processor. [SILC99]


fetch cycle
the period of time during which an instruction is retrieved from memory. [SILC99]

field programmable gate array (FPGA)
a programmable logic device which consists of a matrix of programmable cells embedded in a programmable routing mesh. The combined programming of the cell functions and the routing network define the function of the device. [SILC99]

firing rule
a computational rule of the dataflow model that specifies when an instruction can actually be executed. [SILC99]

floating-point unit (FPU)
a circuit that performs floating-point computations, which are generally addition, subtraction, multiplication, or division. [SILC99]

flush (pipeline)
the act of clearing out all actions being processed in a pipeline structure. This may be achieved by aborting all of those actions, or by refusing to issue new actions to the pipeline until those present in the pipeline have left the pipeline because their processing has been completed. [SILC99]

forwarding
to provide the result of the previous instruction immediately to the current instruction, before the result is written to the register file. Also called bypass. [SILC99]

functional unit (FU)
a module in which actual instruction execution takes place. There may be a number of functional units of different types within a single CPU, including integer units, floating-point units, load/store units, and branch units. [SILC99]


general-purpose register
a digital storage element inside the CPU which is used to hold values temporarily for later transfer to the ALU or memory. General-purpose registers are typically not equipped with any dedicated logic to operate on the data stored in the register. [SILC99]


Harvard architecture
a computer design feature where there are two separate memory units: one for instructions and the other for data. [SILC99]


I-cache (instruction cache)
a cache that only holds the instructions of a program (not data). I-caches generally do not need a write policy. [SILC99]

in-order issue
the situation in which instructions are sent to be executed in the same order as they appear in the program. [SILC99]

instruction decoder unit
the module that receives an instruction from the instruction fetch unit, identifies the type of instruction from the opcode, assembles the complete instruction with its operands, and sends the instruction to the appropriate functional unit, or to an instruction pool to await execution. [SILC99]

instruction format
the specification of the number and size of all possible instruction fields in an instruction set architecture. [SILC99]

instruction issue
the act of initiating the performance of an instruction (not its fetch). Issue policies are important design decisions in systems that use parallelism and execution out of program order to achieve more speed. [SILC99]

instruction-level parallelism (ILP)
the concept of executing two or more instructions in parallel (generally instructions taken from a sequential, not parallel, stream of instructions). [SILC99]

instruction pipeline
a structure that separates the execution of instructions into multiple phases, and executes separate instructions in each phase simultaneously. [SILC99]

instruction reordering
a technique in which the CPU executes instructions in an order different from that specified by the program, with the purpose of increasing the overall execution speed of the CPU. [SILC99]

instruction scheduling
the relocation of independent instructions in order to maximize instruction-level parallelism (and/or minimize instruction stalls). [SILC99]

instruction set
the collection of all the machine-language instructions available to the programmer. [SILC99]

instruction window
for an out-of-order issue mechanism, a buffer holding a group of instructions being considered for issue to functional units. Instructions are issued from the instruction window when dependences have been resolved. [SILC99]

integer unit
a type of functional unit designed specifically for the execution of integer-type instructions. [SILC99]

interleaving, block
instructions of a thread are executed successively until an event occurs that may cause latency. This event induces a context switch. Also called coarse-grained multithreading. [SILC99]

interleaving, cycle-by-cycle
an instruction of another thread is fetched and fed into the execution pipeline at each processor cycle. Also called fine-grained multithreading. [SILC99]

internal forwarding
a mechanism in a pipeline which allows results from one pipeline stage to be sent directly back to one or more waiting pipeline stages. The technique can reduce stalls in the pipeline. [SILC99]

I-structure
may be viewed as a data repository obeying the single-assignment rule. [SILC99]


Java Virtual Machine
the (abstract) engine that actually executes a Java program compiled to Java bytecode. [SILC99]



L1 cache
in systems with two separate sets of cache memory between the CPU and standard memory, the set nearest the CPU. L1 cache is often provided within the same integrated circuit that contains the CPU. In operation, the CPU accesses L1 cache memory; if L1 cache memory does not contain the required reference, it accesses L2 cache memory, which in turn accesses standard memory, if necessary. [SILC99]

L2 cache
in systems with two separate sets of cache memory between the CPU and standard memory, the set between L1 cache and standard memory. [SILC99]

line, bus
one wire of a bus, which may be used for transmitting a datum, a bit of an address, or a control signal. [SILC99]

line, cache
a group of words from successive locations in memory stored in cache memory together with an associated tag, which contains the starting memory reference address for the group. [SILC99]

little-endian
a storage scheme in which the least significant unit of data or an address is stored at the lowest memory address. [SILC99]

load instruction
an instruction that requests a datum from a memory address to be placed in a specified register. [SILC99]

load/store architecture
a system design in which the only processor operations that access memory are simple register loads and stores. [SILC99]

load/store unit
a functional unit used to process instructions that load data from memory or store data to memory. [SILC99]

local bus
the set of wires that connects a processor to its local memory module. [SILC99]

logical operation
the machine level instruction that performs Boolean operations. [SILC99]

lookahead
the number of instructions that can be accessed for issue by the scheduler of the instruction window (usually corresponding to the length of the instruction window). [SILC99]


machine code
the machine format of a compiled executable, in which individual instructions are represented in binary notation. [SILC99]

main memory
the level of memory hierarchy farthest from the processor. [SILC99]

memory data register
the processor register that holds data being written to or read from memory. [SILC99]

memory hierarchy
the separation of memory systems into groups based on cost and access times. The "top" of the hierarchy is usually the most expensive, fastest and smallest, from where information "percolates" as it is used. [SILC99]

memory latency
the time between the initiation of a memory request and its completion. [SILC99]

memory management unit (MMU)
a part of a processor, or a separate component, that implements virtual memory functions. A MMU translates virtual addresses from the processor into real addresses for the memory. [SILC99]

memory-reference instruction
an instruction that communicates with memory, writing to it (store) or reading from it (load). [SILC99]

memory word
the total number of bits that may be stored in each addressable memory location. [SILC99]

M.E.S.I. protocol
a cache coherence protocol for a single-bus multiprocessor. Each cache line exists in one of four states, modified (M), exclusive (E), shared (S), or invalid (I). [SILC99]

microarchitecture, processor
refers to the internal organization of the processor. Several specific processors with different microarchitectures may share the same architecture. [SILC99]

microcode
a collection of low-level operations that are executed as a result of a single instruction being issued. [SILC99]

MIMD architecture
a parallel processing system architecture where there is more than one processor and where each processor performs different instructions on different data values simultaneously. [SILC99]

multiprocessor
a computer system that has more than one internal processor capable of operating collectively on a computation. Normally associated with those systems where the processors can access a common main memory. [SILC99]

multithreaded architecture
supports execution whereby several enabled instructions from different threads all become candidates for execution. [SILC99]


no fetch on write, in a cache
in a write-through cache policy, a line is not fetched from the main memory into the cache on a cache miss, if the reference is a write reference. Also called non-allocate on write as space is not allocated in the cache on write misses. [SILC99]

no-op instruction
a computer instruction that performs no operation. It can be used to put a delay between the execution of other instructions. [SILC99]


opcode
a part of an assembly language instruction that represents an operation to be performed by the processor. [SILC99]

operand
specification of a storage location that provides data to or receives data from an operation. [SILC99]

operand address
the location of an element of data that will be processed by the computer. [SILC99]

operand address register
the internal CPU register that points to the memory location that contains the data element that will be processed by the computer. [SILC99]

out-of-order issue
the situation in which instructions are sent to be executed not necessarily in the order that they appear in the program. An instruction is issued as soon as any data dependence with other instructions are resolved. [SILC99]

output dependence
the situation when two sequential instructions in a program write to the same location. To obtain the desired result, the second instruction must write to the location after the first instruction. Also known as write-after-write hazard. [SILC99]


parallel architecture
a computer system architecture made up of multiple CPUs. [SILC99]

parallel computing
performed on computers that have more than one CPU operating simultaneously. [SILC99]

PC-relative addressing
an addressing mechanism for machine instructions in which the address of the target location is given by the contents of the program counter and an offset held as a constant in the instruction, added together. Allows the target location to be specified as a number of locations from the current (or next) instruction. Generally only used for control transfer instructions (e.g. jumps and branches). [SILC99]
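
The address computation is just an addition; whether the base is the branch's own address or that of the next instruction varies between architectures (the helper below, a hypothetical name, assumes the branch's own address):

```python
def pc_relative_target(pc: int, offset: int) -> int:
    # Target address = program counter + signed offset from the instruction.
    return pc + offset

# A branch at 0x1000 with an offset of -16 targets 0x0FF0.
```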

physical address
the actual address of a value in the physical memory. [SILC99]

physical register set
an additional register set to hold the results of speculative instruction execution until the instruction retires. Physical registers (also called rename registers) are used to prevent conflicts between instructions that would normally use the same registers. See also: speculative execution. [SILC99]

pipeline hazard, control
arises from branch, jump and other control-flow change instructions. For example, if a branch is to be taken, the flow of instructions into the pipeline has to be interrupted, and the branch target must be fetched before the pipeline can resume execution. [SILC99]

pipeline hazard, data
arises because of the unavailability of an operand. [SILC99]

pipeline hazard, structural
arises from some combinations of instructions that cannot be accommodated because of resource conflicts. [SILC99]

pipeline interlock
a hardware mechanism to prevent instructions from proceeding through a pipeline when a data dependence or other conflict exists. [SILC99]

pipeline latency
the number of cycles between the time an instruction is issued and the time a dependent instruction (which uses its results as an operand) can be issued. [SILC99]

pipeline machine cycle
the time required to move an instruction one step down the pipeline. [SILC99]

pipeline processor
a processor that executes more than one instruction at a time, in pipelined fashion. The execution of each instruction is divided into a sequence of simpler suboperations. Each suboperation is performed by a separate hardware section called a stage, and each stage passes its result to a succeeding stage. Normally, each instruction only remains at each stage for a single cycle, and each stage begins executing a new instruction as previous instructions are being completed in later stages. Thus, a new instruction can often begin during every cycle. Pipelines greatly improve the rate at which instructions can be executed, as long as there are no dependences. The efficient use of a pipeline requires that several instructions be executed in parallel; however, the result of any instruction is not available for several cycles after that instruction has entered the pipeline. Thus, new instructions must not depend on the results of instructions which are still in the pipeline. [SILC99]

pipeline repeat rate
the number of cycles that occur between the issuance of one instruction and the issuance of the next instruction to the same functional unit. [SILC99]

pipeline throughput
the number of instructions that can leave a pipeline per cycle. [SILC99]

pipelining
splitting the CPU into a number of stages, which allows multiple instructions to be executed concurrently. [SILC99]

pop instruction
an instruction that retrieves contents from the top of the stack and places the contents in a specified register. [SILC99]

postincrement addressing
an addressing mode in which the address is incremented after accessing the memory value. Used to access elements of arrays in memory. [SILC99]

precise interrupts
an implementation of the interrupt mechanism such that the processor can restart after the interrupt at exactly where it was interrupted. All instructions that have started prior to the interrupt should appear to have completed before the interrupt takes place and all instructions after the interrupt should not appear to start until after the interrupt routine has finished. [SILC99]

predecrement addressing
an addressing mode using an index or address register in which the contents of the address register are reduced by the size of the operand before the access is attempted. [SILC99]

prediction (of branches)
the act of guessing the likely outcome of a conditional branch decision. Prediction is an important technique for speeding up execution in overlapped processor designs. Increasing the depth of the prediction (the number of branch predictions that can be unresolved at any time) increases both the complexity and speed. [SILC99]

prefetch
the act of fetching instructions prior to being needed by the CPU. [SILC99]

prefetch queue
a queue of instructions which have been prefetched. [SILC99]

preincrement addressing
an assembly language addressing mode in which the address is incremented prior to accessing the memory value. Used to access elements of arrays in memory. [SILC99]

Princeton architecture
a computer architecture in which the same memory holds both data and instructions. [SILC99]

processor-in-memory (PIM)
integrates one or more processors with large on-chip memory, which provides the processor(s) with sufficient bandwidth at a reasonable cost. [SILC99]

program counter (PC)
a CPU register that contains the address of the next instruction in sequence to be executed. [SILC99]

push instruction
an instruction that stores the contents of a specified register(s) on the stack. [SILC99]


quadword
a data unit formed from four words. [SILC99]

queue
a data structure maintaining a first-in, first-out discipline of insertion and removal. [SILC99]


random-access memory (RAM)
a memory that allows access to any element in the same period of time. [SILC99]

read-only memory (ROM)
a form of random access memory in which storage locations can only be accessed for reading, not for writing. Normally also has non-volatile characteristics. [SILC99]

reconfigurable computing system
combines programmable general-purpose computing with reconfigurable hardware. [SILC99]

register
a circuit formed from identical flip-flops or latches and capable of storing several bits of data. [SILC99]

register direct addressing
an instruction addressing method in which the memory address of the data to be accessed or stored is found in a general-purpose register. [SILC99]

register file
a collection of CPU registers addressable by number. [SILC99]

register indirect addressing
an instruction addressing method in which the register field contains a pointer to a memory location that contains the memory address of the data to be accessed or stored. [SILC99]

register renaming
dynamically allocating a location in a special register file for an instance of a destination register appearing in an instruction prior to its execution. Used to remove antidependences and output dependences. [SILC99]
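
A minimal renaming sketch (the data layout and names are ours): every destination register receives a fresh physical register, so later writes to the same architectural register no longer conflict with earlier reads or writes:

```python
def rename(instructions):
    # instructions: list of (dest, src1, src2) architectural register names.
    mapping = {}        # architectural register -> latest physical register
    next_phys = 0
    renamed = []
    for dest, src1, src2 in instructions:
        s1 = mapping.get(src1, src1)     # sources read the latest mapping
        s2 = mapping.get(src2, src2)
        phys = f"p{next_phys}"           # fresh register for each destination
        next_phys += 1
        mapping[dest] = phys
        renamed.append((phys, s1, s2))
    return renamed
```

Because the two writes to the same architectural register land in different physical registers, the output dependence between them disappears, and any instruction reading that register is directed to the most recent version.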

register window
a set, or window, of registers selected out of a larger group. [SILC99]

relative addressing
an addressing mechanism in which the address of the target location is given by the contents of a specific register and an offset held as a constant in the instruction, added together. [SILC99]

reorder buffer
a set of storage locations holding instructions (and sometimes also the result values) in program order. [SILC99]

reservation station
a storage location placed in front of the functional units and provided to hold instruction and associated operands until the functional units become available. [SILC99]

resource conflict
the situation when a component such as a register or functional unit is required by more than one instruction simultaneously. [SILC99]

retire unit
the unit used to assure that instructions are completed in program order, even though they may have been executed out of order. [SILC99]

RISC processor
a processor implementing the computer design philosophy of a relatively simple control unit design with a reduced number of instructions (selected to be simple), data and instruction formats, and addressing modes. The processor is pipelined. One of the particular features of a RISC processor is the restriction that all memory accesses should be by load and store instructions only (the so-called load/store architecture). All arithmetic-logic operations in a RISC are register-to-register, meaning that both the sources and destinations of all operations are CPU registers. All this tends to reduce CPU-to-memory data traffic significantly, thus improving performance. In addition, RISCs usually have the following properties: most instructions execute within a single cycle, all instructions have the same size, the control unit is hardwired (to increase the speed of operations), and there is a CPU register file of considerable size. [SILC99]


scalar processor
a CPU that issues at most one instruction at a time. [SILC99]

scoreboard
a centralized control unit which enables out-of-order execution of instructions. It holds various information to detect dependences. [SILC99]

shared-memory architecture
an organization of a computer system having more than one processor in which each processor can access a common main memory. [SILC99]

SIMD architecture
a parallel processing architecture where more than one processor performs the same instruction on different data simultaneously. [SILC99]

simultaneous multithreading (SMT)
when instructions are simultaneously issued from multiple threads to the functional units of a superscalar processor. [SILC99]

single-address instruction
an instruction defining an operation and exactly one address of an operand or another instruction. [SILC99]

single-assignment rule
means that a variable may appear on the left-hand side of an assignment only once within the area of the program in which it is active. [SILC99]

(single)-chip multiprocessor or multiprocessor chip (CMP)
integrates two or more complete processors on a single chip. [SILC99]

source operand
in ALU operations, one of the input values. [SILC99]

spatial locality, cache
when items whose addresses are near one another tend to be referenced close together in time. [SILC99]
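A hedged C illustration: a row-major traversal of a small matrix touches consecutive addresses in consecutive iterations, so each cache line fetched is fully used before the next is needed.

```c
#define N 4

/* Row-major traversal exhibits spatial locality: element addresses
   i*N + j are visited in strictly increasing order. */
static int sum_row_major(void) {
    int m[N][N], sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = i + j;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += m[i][j];   /* adjacent addresses, close in time */
    return sum;
}
```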

SPEC benchmarks
suites of test programs created by the System Performance Evaluation Cooperative (SPEC). The cooperative was formed by four companies, Apollo, Hewlett-Packard, MIPS, and Sun Microsystems, to evaluate smaller computers. The programs are real scientific and engineering applications. [SILC99]

speculative execution
a technique in which instructions are executed speculatively and are discarded if speculation was wrong. [SILC99]

stack
a hardware or software data structure in which items are stored in a last-in, first-out manner. [SILC99]
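A software stack can be sketched in a few lines of C, with an array and a top-of-stack index (the fixed capacity is an assumption of this sketch):

```c
#include <assert.h>

#define STACK_MAX 16
static int stk[STACK_MAX];
static int top = 0;                 /* index of the next free slot */

/* push stores on top; pop removes the most recently pushed item. */
static void push(int v) { assert(top < STACK_MAX); stk[top++] = v; }
static int  pop(void)   { assert(top > 0); return stk[--top]; }
```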

stack architecture
an architecture that accesses data as though it were in a pile and only the top-most elements are directly accessible. [SILC99]

stall, in a pipeline
a pause in processing instructions in a pipeline, usually caused by an instruction dependence or resource conflict. [SILC99]

static prediction
a method of branch prediction which uses machine-fixed prediction (e.g., predict always taken/not taken) or which relies on the compiler selecting one of the two alternative instructions for execution after the branch instruction (either the next instruction or that at the target location specified in the branch instruction). A bit is provided in the branch instruction which is set to a 0 for one alternative and 1 for the other. The processor then follows this advice when it executes the branch instruction. [SILC99]

store instruction
a machine instruction which copies the contents of a register into a memory location. [SILC99]

superpipelining
a pipeline design technique in which, for every external clock cycle, two or more pipeline stages are processed within the processor. Because this is standard in contemporary processors, the term often means simply a long pipeline. [SILC99]

superscalar processor
a processor able to issue multiple instructions dynamically each clock cycle from a conventional linear instruction stream. [SILC99]

symmetric multiprocessor (SMP)
a multiprocessor system where all processors are connected by a global memory system (in contrast to a distributed shared-memory multiprocessor where all memory modules are physically distributed to the processors). [SILC99]

synchronous operation
an operation or operations that are controlled or synchronized by a clocking signal. [SILC99]

system bus
in digital systems, the main bus over which information flows. [SILC99]


tag, in caches
a part of a memory address held in a direct mapped or set-associative cache next to the corresponding line, generally the most significant bits of the address. [SILC99]
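How an address splits into tag, index, and offset can be sketched in C; the 32-byte line size and 128-line cache below are assumptions chosen for illustration:

```c
#include <stdint.h>

#define OFFSET_BITS 5   /* 32-byte lines (assumed) */
#define INDEX_BITS  7   /* 128 lines (assumed)     */

/* The most significant bits form the tag; the middle bits select the
   line; the low bits locate the byte within the line. */
static uint32_t cache_tag(uint32_t addr)    { return addr >> (OFFSET_BITS + INDEX_BITS); }
static uint32_t cache_index(uint32_t addr)  { return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); }
static uint32_t cache_offset(uint32_t addr) { return addr & ((1u << OFFSET_BITS) - 1); }
```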

temporal locality, cache
when recently accessed items are likely to be accessed in the near future. [SILC99]
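A small C illustration: the four `weight` entries are re-read on every iteration, so after the first few accesses they stay cache-resident; recently used items are used again almost immediately.

```c
/* The same few table entries are reused on every iteration
   (temporal locality), while `data` is streamed through once. */
static int weighted_sum(const int *data, int n) {
    static const int weight[4] = {1, 2, 4, 8};
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += weight[data[i] & 3];
    return sum;
}
```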

threaded dataflow
a technique where the dataflow principle is modified so that instructions of certain instruction streams are processed in succeeding machine cycles. [SILC99]

Tomasulo's scheme
a hardware dependence-resolution scheme that allows out-of-order execution of instructions in the presence of hazards. [SILC99]

trace cache
a paradigm for caching instructions in which the cache holds dynamic sequences of instructions (traces) in the order they were executed, rather than in their static program order. [SILC99]

translation lookaside buffer (TLB)
for a paging system, a high-speed hardware lookup table for the conversion of virtual addresses generated by the processor into real addresses. The table is of limited size and only holds recently used page addresses. [SILC99]
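The lookup can be sketched in C as a tiny direct-mapped TLB in front of a page-table walk; the 4-KB page size, 8-entry TLB, and the trivial `page_table_walk` mapping are all assumptions of this sketch:

```c
#include <stdint.h>

#define PAGE_BITS 12            /* 4-KB pages (assumed) */
#define TLB_ENTRIES 8

struct tlb_entry { uint32_t vpn; uint32_t pfn; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Hypothetical page table: frame number = virtual page number + 100. */
static uint32_t page_table_walk(uint32_t vpn) { return vpn + 100; }

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_BITS;
    uint32_t off = vaddr & ((1u << PAGE_BITS) - 1);
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
    if (!(e->valid && e->vpn == vpn)) {     /* TLB miss: walk the table */
        e->vpn = vpn;
        e->pfn = page_table_walk(vpn);
        e->valid = 1;                       /* cache the translation */
    }
    return (e->pfn << PAGE_BITS) | off;     /* fast path on a hit */
}
```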

two-address instruction
a class of instruction in which two operand addresses are specified and the third one is implicit. One of the two addresses is also used to store the result of the ALU operation. [SILC99]
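The effect can be mimicked in C (an illustrative sketch, not real machine code): in the two-address form the first operand address doubles as the destination, as in "ADD R1, R2" meaning R1 <- R1 + R2.

```c
/* Two-address form: the first operand is overwritten by the result. */
static void add_two_address(int *dst_src, int src)    { *dst_src += src; }

/* Three-address form, for contrast: the destination is named separately. */
static void add_three_address(int *dst, int a, int b) { *dst = a + b; }
```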

two-port memory
a memory system that has two access paths, one path is usually used by the CPU and the other by I/O devices. This is also called dual-port memory. [SILC99]

two-way interleaved
in memory technology, a technique that provides faster access to memory values by interleaving memory values in two separate modules. [SILC99]
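The address mapping can be sketched in C: the low-order bit of the word address selects the module, so consecutive words fall into alternate modules and their accesses can overlap (the module size is an assumption of this sketch).

```c
#include <stdint.h>

#define WORDS_PER_MODULE 8
static int module0[WORDS_PER_MODULE];   /* holds even word addresses */
static int module1[WORDS_PER_MODULE];   /* holds odd word addresses  */

/* Low bit picks the module; the remaining bits index within it. */
static int *word_ptr(uint32_t word_addr) {
    int *module = (word_addr & 1) ? module1 : module0;
    return &module[word_addr >> 1];
}
```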


U-interpreter
a method in dataflow computing used for assigning tags to each execution of an instruction. [SILC99]

unary operation
an operation a computer performs that involves only one data element. [SILC99]

unconditional branch
an instruction that causes a transfer of control to another address without regard to the state of any condition flags. [SILC99]

user-visible register
an alternative name for general-purpose registers, emphasizing the fact that these registers are accessible to the instructions in user programs. The counterpart to user-visible registers are registers that are reserved for use by privileged instructions, particularly within the operating system. [SILC99]


virtual address
the address generated by the processor in a paging (virtual memory) system. [SILC99]

virtual memory
a system to handle the memory hierarchy, providing an automatic method of transferring the contents of blocks of memory (pages) into the main memory when needed. Relies on using two addresses for each stored word, a virtual address which is generated by the processor and the corresponding real address for accessing the memory. [SILC99]

VLIW processor
a computer architecture that performs no dynamic analysis on the instruction stream of long instruction words and executes operations precisely as packed by the compiler into a long machine word. [SILC99]


write-back cache
locations in cache memory are grouped together in blocks and when it is necessary to update main memory to reflect changes in the cache, the entire block of main memory is updated rather than just individual locations. [SILC99]
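The dirty-bit mechanism behind this can be sketched in C (line size, memory size, and structure layout are assumptions chosen for illustration):

```c
#include <string.h>

#define LINE_WORDS 4
static int memory[64];                 /* backing main memory */

struct cache_line {
    int data[LINE_WORDS];
    int dirty;                         /* modified since it was filled? */
    int mem_base;                      /* where the line maps in memory */
};

/* Writes go to the cache only; the dirty bit records that main
   memory is now stale. */
static void cache_write(struct cache_line *l, int word, int v) {
    l->data[word] = v;
    l->dirty = 1;
}

/* On eviction, the entire block is copied back in one transfer,
   rather than updating individual locations as they are written. */
static void write_back(struct cache_line *l) {
    if (l->dirty) {
        memcpy(&memory[l->mem_base], l->data, sizeof l->data);
        l->dirty = 0;
    }
}
```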




zero-address instruction
an instruction in which the operands are kept on a last-in, first-out stack in the CPU, and thus require no explicit addresses. [SILC99]
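Evaluating 2 + 3 on such a stack machine takes only PUSH 2, PUSH 3, ADD; the ADD names no operands. A hedged C sketch of that instruction sequence:

```c
/* The zero-address ADD implicitly pops its two operands from the
   stack and pushes their sum; no addresses appear in the instruction. */
static int opstack[16], opsp = 0;

static void op_push(int v) { opstack[opsp++] = v; }  /* PUSH #v */
static int  op_pop(void)   { return opstack[--opsp]; }
static void op_add(void) {                           /* ADD (zero-address) */
    int b = op_pop(), a = op_pop();
    op_push(a + b);
}
```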