Glossary

Definitions of the terms used in the project


The main goals of this glossary are:

  1. Help non-experts understand our work (papers, reports, presentations, ...)
  2. Have each participant provide exact definitions for the keywords he or she uses
  3. Share these definitions among participants (to help collaboration)
Please add your keywords and definitions.


A

Adaptive system 
(...)


B

Bayesian network 
(...)


C

Callback 
(Wikipedia) In computer programming, a callback is executable code that is passed as an argument to other code. It allows a lower-level software layer to call a subroutine (or function) defined in a higher-level layer.
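For illustration (not from the cited source), a minimal C sketch: the standard library routine qsort() is the lower-level layer, and it calls back into a comparison function defined by the caller.
  #include <stdio.h>
  #include <stdlib.h>

  /* Comparison callback: qsort() calls back into this user-supplied code. */
  static int cmp_int(const void *a, const void *b)
  {
      int x = *(const int *)a;
      int y = *(const int *)b;
      return (x > y) - (x < y);
  }

  int main(void)
  {
      int v[] = { 3, 1, 2 };
      /* The callback is passed as an argument to the library routine. */
      qsort(v, 3, sizeof v[0], cmp_int);
      for (int i = 0; i < 3; i++)
          printf("%d\n", v[i]);
      return 0;
  }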
Channel (LTTng) 
(...)
Clock cycle 
(Wikipedia) In electronics and especially synchronous digital circuits, a clock signal is a signal used to coordinate the actions of two or more circuits. A clock cycle is a single period of that signal, typically measured between two consecutive rising edges.
CodeAnalyst 
(...)
Code coverage 
(Wikipedia) Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been tested. It is a form of testing that looks at the code directly and as such comes under the heading of white box testing. The use of code coverage has since been extended to the field of digital hardware, whose contemporary design methodology relies on hardware description languages (HDLs). Code coverage techniques were amongst the first techniques invented for systematic software testing. The first published reference was by Miller and Maloney in Communications of the ACM in 1963.
Collection Spy 
(...)
Concurrent computing 
(Wikipedia) Concurrent computing is a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel. Concurrent programs can be executed sequentially on a single processor by interleaving the execution steps of each computational process, or executed in parallel by assigning each computational process to one of a set of processors that may be close together or distributed across a network. The main challenges in designing concurrent programs are ensuring the correct sequencing of the interactions or communications between different computational processes, and coordinating access to resources that are shared among processes. A number of different methods can be used to implement concurrent programs, such as implementing each computational process as an operating system process, or implementing the computational processes as a set of threads within a single operating system process. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare.
Context switch 
(Wikipedia) A context switch is the computing process of storing and restoring the state (context) of a CPU such that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system. Context switches are usually computationally intensive, and much of operating system design aims to optimize their use. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system.
Context switch 
(Foldoc) When a multitasking operating system stops running one process and starts running another. Many operating systems implement concurrency by maintaining separate environments or "contexts" for each process. The amount of separation between processes, and the amount of information in a context, depends on the operating system, but generally the OS should prevent processes interfering with each other, e.g. by modifying each other's memory. A context switch can be as simple as changing the value of the program counter and stack pointer, or it might involve resetting the MMU to make a different set of memory pages available. In order to present the user with an impression of parallelism, and to allow processes to respond quickly to external events, many systems will context switch tens or hundreds of times per second.
Convex Hull method 
(...)
CPU 
Central Processing Unit
CPU modes, privilege mode, kernel mode, protected mode, user modes 
(Wikipedia) CPU modes (also called processor modes or CPU privilege levels, and by other names) are operating modes for the central processing unit of some computer architectures that place restrictions on the operations that can be performed by the process currently running in the CPU. This design allows the operating system to run at different privilege levels. These different privilege levels are called rings when referring to their implementation at the OS abstraction level, and CPU modes when referring to their implementation at the CPU firmware abstraction level. (…) At a minimum, any CPU with this type of architecture will support at least two distinct operating modes, and at least one of the modes will provide completely unrestricted operation of the CPU. The unrestricted mode is usually called kernel mode, but many other designations exist (master mode, supervisor mode, privileged mode etc.). Other modes are usually called user modes, but are occasionally known by other names (slave mode etc.). In kernel mode, the CPU may perform any operation provided for by its architecture. Any instruction may be executed, any I/O operation may be initiated, any area of memory may be accessed, and so on. In the other CPU modes, certain restrictions on CPU operations are enforced by the hardware. Typically certain instructions are not permitted, I/O operations may not be initiated, some areas of memory cannot be accessed etc. Usually the user-mode capabilities of the CPU are a subset of the kernel mode capabilities, but in some cases (such as hardware emulation of non-native architectures), they may be significantly different from kernel capabilities, and not just a subset of them. At least one user mode is always defined, but some CPU architectures support multiple user modes, often with a hierarchy of privileges. These architectures are often said to have ring-based security, wherein the hierarchy of privileges resembles a set of concentric rings, with the kernel mode in the central, innermost ring. Multics hardware was the first significant implementation of ring security, but many other hardware platforms have been designed along similar lines, including the Intel 80286 protected mode, and the IA-64 as well, though it is referred to by a different name in these cases. Mode protection may extend to resources beyond the CPU processing hardware itself. Hardware registers track the current operating mode of the CPU, but additional virtual-memory registers, page-table entries, and other data may track mode identifiers for other resources. For example, a CPU may be operating in Ring 0 as indicated by a status word in the CPU itself, but every access to memory may additionally be validated against a separate ring number for the virtual-memory segment targeted by the access, and/or against a ring number for the physical page (if any) being targeted. This has been demonstrated with the PSP handheld system.
Critical section 
(Wikipedia) In concurrent programming a critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution. A critical section will usually terminate in fixed time, and a thread, task or process will have to wait a fixed time to enter it (aka bounded waiting). Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore. By carefully controlling which variables are modified inside and outside the critical section (usually, by accessing important state only from within), concurrent access to that state is prevented. A critical section is typically used when a multithreaded program must update multiple related variables without a separate thread making conflicting changes to that data. In a related situation, a critical section may be used to ensure a shared resource, for example a printer, can only be accessed by one process at a time. How critical sections are implemented varies among operating systems.
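A minimal sketch in C using POSIX threads (not from the cited source): the increment below is a critical section, with a mutex providing the synchronization at its entry and exit. Without the lock, two threads could interleave their read-modify-write steps and lose updates.
  #include <pthread.h>

  static long shared_counter;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  void *worker(void *arg)
  {
      for (int i = 0; i < 100000; i++) {
          pthread_mutex_lock(&lock);   /* entry of the critical section */
          shared_counter++;            /* at most one thread runs this  */
          pthread_mutex_unlock(&lock); /* exit of the critical section  */
      }
      return NULL;
  }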


D

DebugFS 
see this text from J. Corbet (2004)
Decision tree 
(...)
Dempster-Shafer evidence theory (D-S) 
(...)
Device driver 
(Wikipedia) In computing, a device driver or software driver is a computer program allowing higher-level computer programs to interact with a hardware device. A driver typically communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device. Once the device sends data back to the driver, the driver may invoke routines in the original calling program. Drivers are hardware-dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.
Distributed computing
(Wikipedia) Distributed computing is a field of computer science that studies distributed systems. A distributed system consists of multiple autonomous computers that communicate through a computer network. The computers interact with each other in order to achieve a common goal. A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one computer.
Distributed tracing 
(...)
Dprobes 
(...)
DSF 
(Eclipse) Debugger Services Framework [1]
DTrace 
(...)
Dynamic instrumentation 
(...)
Dynamic program analysis 
(Wikipedia) Dynamic program analysis is the analysis of computer software that is performed by executing programs built from that software on a real or virtual processor (analysis performed without executing programs is known as static code analysis). Dynamic program analysis tools may require loading of special libraries or even recompilation of program code. For dynamic program analysis to be effective, the target program must be executed with sufficient test inputs to produce interesting behavior. Use of software testing techniques such as code coverage helps to ensure that an adequate slice of the program's set of possible behaviors has been observed. Care must be taken to minimize the effect that instrumentation or probing has on the execution (including temporal properties) of the target program, to minimize the appearance of Heisenbugs.


E

Eclipse 
(...)
Eclipse TPTP 
(...)
Eclipse Tracing Framework 
(...)
Eclipse Linux Tools Project 
(...)
Expert system 
(...)


F

Formal methods 
(Wikipedia) In computer science and software engineering, formal methods are mathematically-based techniques for the specification, development and verification of software and hardware systems.[1] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analyses can contribute to the reliability and robustness of a design.[2] However, the high cost of using formal methods means that they are usually only used in the development of high-integrity systems, where safety or security is important.
FSM 
Finite State Machine
FTrace 
(...)
Fuzzy logic 
(...)


G

GDB 
GNU Debugger
Genetic programming 
(...)
GNU Binutils 
(...)
gprof 
(...)


H

Hardware performance counter or hardware counter 
(Wikipedia) In computers, hardware performance counters, or hardware counters, are a set of special-purpose registers built into modern microprocessors to store the counts of hardware-related activities within computer systems. Advanced users often rely on those counters to conduct low-level performance analysis or tuning.
Hidden Markov model (HMM) 
(...)
Hooking, hooks 
(Wikipedia) Hooking in programming is a technique employing so-called hooks to make a chain of procedures act as an event handler. Thus, after the handled event occurs, control flow follows the chain in a specific order. The new hook registers its own address as handler for the event and is expected to call the original handler at some point, usually at the end. Each hook is required to pass execution to the previous handler, eventually arriving at the default one, otherwise the chain is broken. Unregistering the hook means setting the original procedure as the event handler. Hooking can be used for many purposes, including debugging and extending original functionality. It can also be misused to inject (potentially malicious) code into the event handler - for example, rootkits try to make themselves invisible by faking the output of API calls that would otherwise reveal their existence. A special form of hooking intercepts the library function calls made by a process. Function hooking is implemented by changing the very first few code instructions of the target function to jump to injected code.
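A toy C sketch of the chaining idea (all names here are hypothetical): the new hook remembers the previous handler and passes execution to it at the end, keeping the chain unbroken.
  #include <stdio.h>

  typedef void (*handler_t)(int event);

  static void default_handler(int event) { printf("default: %d\n", event); }

  /* Head of the handler chain. */
  static handler_t current = default_handler;
  static handler_t previous;

  static void my_hook(int event)
  {
      printf("hook saw: %d\n", event);
      previous(event);          /* call the original handler at the end */
  }

  static void install_hook(void)
  {
      previous = current;       /* remember the old handler */
      current = my_hook;        /* register our own address as handler */
  }

  int main(void)
  {
      install_hook();
      current(42);              /* firing the event walks the chain */
      return 0;
  }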
HyperTransport bus 
(...)


I

Instrumentation 
(Mario) The insertion of additional code or breakpoints into software in order to collect runtime information about program execution which would otherwise not be obtainable.
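For example, a hand-instrumented C function using a hypothetical TRACE macro; a production tracer such as LTTng writes to lock-free buffers rather than stderr, precisely to keep the perturbation low.
  #include <stdio.h>
  #include <time.h>

  /* Hypothetical trace point: record an event name with a timestamp. */
  #define TRACE(name) do {                              \
          struct timespec ts;                           \
          clock_gettime(CLOCK_MONOTONIC, &ts);          \
          fprintf(stderr, "%s %ld.%09ld\n", (name),     \
                  (long)ts.tv_sec, ts.tv_nsec);         \
      } while (0)

  static void compute(void)
  {
      TRACE("compute_entry");   /* inserted instrumentation */
      /* ... the original work ... */
      TRACE("compute_exit");
  }

  int main(void) { compute(); return 0; }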
Interrupt 
(Wikipedia) In computing, an interrupt is an asynchronous signal from hardware indicating the need for attention or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch, and begin execution of an interrupt handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven. An act of interrupting is referred to as an interrupt request ("IRQ").
Interrupt handler 
(Wikipedia) An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt handlers have a multitude of functions, which vary based on the reason the interrupt was generated and the speed at which the interrupt handler completes its task. An interrupt handler is a low-level counterpart of event handlers. These handlers are initiated by either hardware interrupts or interrupt instructions in software, and are used for servicing hardware devices and transitions between protected modes of operation such as system calls.
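As a sketch only, the skeleton of a Linux driver registering an ISR with request_irq(); the IRQ line 17 and the names my_isr/my_dev are hypothetical.
  #include <linux/interrupt.h>
  #include <linux/module.h>

  #define MY_IRQ 17            /* hypothetical interrupt line of our device */

  static int dev_cookie;       /* unique token passed back to the handler */

  static irqreturn_t my_isr(int irq, void *dev_id)
  {
      /* Do the minimum work here; defer the rest ("bottom half")
       * to a tasklet or workqueue. */
      return IRQ_HANDLED;
  }

  static int __init my_init(void)
  {
      /* A non-NULL dev_id is required for shared lines. */
      return request_irq(MY_IRQ, my_isr, IRQF_SHARED, "my_dev", &dev_cookie);
  }

  static void __exit my_exit(void)
  {
      free_irq(MY_IRQ, &dev_cookie);
  }

  module_init(my_init);
  module_exit(my_exit);
  MODULE_LICENSE("GPL");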


J

JProbe 
(...)
JProfiler 
(...)
JRockit Profiler 
(...)
jvisualvm 
(...)
JVM 
Java Virtual Machine


K

Kernel 
(Wikipedia) In computer science, the kernel is the central component of most computer operating systems (OS). Its responsibilities include managing the system's resources (the communication between hardware and software components).[1] As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. These tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels will try to achieve these goals by executing all the code in the same address space to increase the performance of the system, microkernels run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[2] A range of possibilities exists between these two extremes.
Kernel space, user space 
(Wikipedia) A conventional operating system usually segregates virtual memory into kernel space and user space. Kernel space is strictly reserved for running the kernel, kernel extensions, and some device drivers. In most operating systems, kernel memory is never swapped out to disk. In contrast, user space is the memory area where all user mode applications work and this memory can be swapped out when necessary. The term userland is often used for referring to operating system software that runs in user space. Each user space process normally runs in its own virtual memory space, and, unless explicitly requested, cannot access the memory of other processes. This is the basis for memory protection in today's mainstream operating systems, and a building block for privilege separation. Depending on the privileges, processes can request the kernel to map part of another process's memory space to its own, as is the case for debuggers. Programs can also request shared memory regions with other processes. Another approach, taken in experimental operating systems, is to have a single address space for all software, and rely on the programming language's virtual machine to make sure that arbitrary memory cannot be accessed — applications simply cannot acquire any references to the objects that they are not allowed to access.[1] This approach has been implemented in JXOS, Unununium as well as Microsoft's Singularity research project.
Kprobes 
(...)
KVM 
Kernel-based Virtual Machine, a Linux kernel virtualization infrastructure


L

Large-scale multiprocessor 
(...)
Level of abstraction 
(Wahab)
Level of abstraction 
(Tim)
Level of abstraction 
(Béchir)
ltrace 
(...)
LTTng 
Linux Trace Toolkit, Next Generation
LTTV 
Linux Trace Toolkit Viewer
LXC 
Linux Containers


M

Marker (LTTng) 
(...)
Monitoring 
(...)
Mutex 
(Wikipedia) Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code where a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion. A program, process, or thread can have a critical section in it without any mechanism or algorithm implementing mutual exclusion. Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. The synchronization of access to those resources is an acute problem because a thread can be stopped or started at any time. To illustrate: suppose a section of code is altering a piece of data over several program steps, when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, which is in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries overwriting that data, the ensuing state will probably be unrecoverable. Shared data accessed by critical sections must therefore be protected, so that other processes which read from or write to the chunk of data are excluded from running. A mutex is also a common name for a program object that negotiates mutual exclusion among threads, also called a lock.
Multi-core CPU 
(Wikipedia) A multi-core CPU (or chip-level multiprocessor, CMP) combines two or more independent cores into a single package composed of a single integrated circuit (IC), called a die, or more dies packaged together. A dual-core processor contains two cores and a quad-core processor contains four cores. A multi-core microprocessor implements multiprocessing in a single physical package. A processor with all cores on a single die is called a monolithic processor. Cores in a multicore device may share a single coherent cache at the highest on-device cache level (e.g. L2 for the Intel Core 2) or may have separate caches (e.g. current AMD dual-core processors). The processors also share the same interconnect to the rest of the system. Each "core" independently implements optimizations such as superscalar execution, pipelining, and multithreading. A system with N cores is effective when it is presented with N or more threads concurrently. The most commercially significant (or at least the most 'obvious') multi-core processors are those used in computers (primarily from Intel & AMD) and game consoles (e.g., the Cell processor in the PS3). In this context, "multi" typically means a relatively small number of cores. However, the technology is widely used in other technology areas, especially those of embedded processors, such as network processors and digital signal processors, and in graphical processing units (GPUs).
Multi-level 
(...)


N

Neural network 
(...)
Noise in an LTTng execution trace 
(Wahab Team) Any event associated with memory management, page faults, and interrupts. Can occur anywhere in the trace and in any order. Associated events are treated as a set.
Non-maskable interrupt 
(Wikipedia) A non-maskable interrupt (NMI) is a computer processor interrupt that cannot be ignored by standard interrupt masking techniques in the system. It is typically used to signal attention for non-recoverable hardware errors. (Some NMIs may be masked, but only by using proprietary methods specific to the particular NMI.) An NMI is often used when response time is critical, and when an interrupt should never be disabled in the normal operation of the system. Such uses include the reporting of non-recoverable hardware errors, methods for system debugging and profiling, and special case handling (last minute hacks) such as system resets. In modern architectures, NMIs are typically used to handle non-recoverable errors, which need immediate attention. Therefore, such interrupts should not be masked in the normal operation of the system. These errors include non-recoverable internal system chipset errors, corruption in system memory such as parity and ECC errors, and data corruption detected on system and peripheral buses.
Non-uniform memory access 
(Wikipedia) Non-Uniform Memory Access or Non-Uniform Memory Architecture (NUMA) is a computer memory design used in multiprocessors, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors. NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. Their commercial development came in work by Burroughs, Convex Computer (later HP), SGI, Sequent and Data General during the 1990s. Techniques developed by these companies later featured in a variety of Unix-like operating systems, as well as to some degree in Windows NT.


O

OProfile 
(...)
OS 
Operating System
OSE 
(VirtualBox) Open Source Edition


P

Page table 
(Wikipedia) A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process. Physical addresses are those unique to the CPU, i.e., RAM.
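A toy single-level lookup in C illustrates the mapping; real page tables are multi-level and also carry permission and presence bits.
  #include <stdint.h>

  #define PAGE_SHIFT 12                 /* 4 KiB pages */
  #define PAGE_SIZE  (1u << PAGE_SHIFT)
  #define NPAGES     (1u << 20)         /* 32-bit virtual addresses */

  /* Toy page table: virtual page number -> physical frame number. */
  static uint32_t page_table[NPAGES];

  uint32_t translate(uint32_t vaddr)
  {
      uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
      uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */
      uint32_t pfn    = page_table[vpn];          /* consult the table   */
      return (pfn << PAGE_SHIFT) | offset;        /* physical address    */
  }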
Parallel Studio 
(...)
Performance counter 
(Wikipedia) In computers, hardware performance counters, or hardware counters, are a set of special-purpose registers built into modern microprocessors to store the counts of hardware-related activities within computer systems. Advanced users often rely on those counters to conduct low-level performance analysis or tuning. (…) Compared to software profilers, hardware counters provide low-overhead access to a wealth of detailed performance information related to the CPU's functional units, caches, main memory, etc. Another benefit of using them is that no source code modifications are needed in general. However, the types and meanings of hardware counters vary from one kind of architecture to another due to the variation in hardware organizations. Also, it is difficult to correlate the low-level performance metrics back to source code. The limited number of registers to store the counters often forces users to conduct multiple measurements to collect all desired performance metrics.
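On Linux, one way to read such a counter is the perf_event_open system call (it has no glibc wrapper, hence the raw syscall); a hedged sketch counting retired instructions for the calling process:
  #include <linux/perf_event.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <string.h>
  #include <stdio.h>

  int main(void)
  {
      struct perf_event_attr attr;
      long long count;

      memset(&attr, 0, sizeof attr);
      attr.size     = sizeof attr;
      attr.type     = PERF_TYPE_HARDWARE;
      attr.config   = PERF_COUNT_HW_INSTRUCTIONS;
      attr.disabled = 1;

      /* this process (pid 0), any CPU, no group, no flags */
      int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
      if (fd < 0) { perror("perf_event_open"); return 1; }

      ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
      /* ... workload to measure ... */
      ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

      read(fd, &count, sizeof count);
      printf("instructions: %lld\n", count);
      return 0;
  }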
Preemption 
(Wikipedia) In computing, preemption (sometimes pre-emption) is the act of temporarily interrupting a task being carried out by a computer system, without requiring its cooperation, and with the intention of resuming the task at a later time. Such a change is known as a context switch. It is normally carried out by a privileged task or part of the system known as a preemptive scheduler, which has the power to preempt, or interrupt, and later resume, other tasks in the system.
Probe (LTTng) 
(...)
Process 
(Wikipedia) In computing, a process is an instance of a computer program that is being sequentially executed.[1] A computer program itself is just a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program, for example opening up two windows of the same program typically means two processes are being executed. A single computer processor executes only one instruction at a time. To allow users to run several programs at once, single-processor computer systems can perform time-sharing - processes switch between being executed and waiting to be executed. In most cases this is done at a very fast rate, giving an illusion that several processes are executing at once. Using multiple processors achieves actual simultaneous execution of multiple instructions from different processes, but time-sharing is still typically used to allow more processes to run at once. For security reasons most modern operating systems prevent direct inter-process communication, providing mediated and limited functionality. However, a process may split itself into multiple threads that execute in parallel, running different instructions on much of the same resources and data. This is useful when, for example, it is necessary to make it seem that multiple things within the same process are happening at once (such as a spell check being performed in a word processor while the user is typing), or if part of the process needs to wait for something else to happen (such as a web browser waiting for a web page to be retrieved).
Profiling 
(Wikipedia) In software engineering, performance analysis, more commonly profiling, is the investigation of a program's behavior using information gathered as the program runs (i.e. it is a form of dynamic program analysis, as opposed to static code analysis). The usual goal of performance analysis is to determine which parts of a program to optimize for speed or memory usage. (…) A profiler is a performance analysis tool that measures the behavior of a program as it runs, particularly the frequency and duration of function calls. The output is a stream of recorded events (a trace) or a statistical summary of the events observed (a profile). Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, operating system hooks, and performance counters. The usage of profilers is called out in the performance engineering process. As the summation in a profile is often done relative to the source code positions where the events happen, the size of the measurement data is linear in the code size of the program. In contrast, the size of a trace is linear in the program's execution time, which can make it somewhat impractical. For sequential programs, a profile is usually enough, but performance problems in parallel programs (waiting for messages or synchronization issues) often depend on the time relationship of events, thus requiring the full trace to get an understanding of the problem.
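A minimal sketch of the sampling approach on POSIX: a profiling timer delivers SIGPROF periodically, and each delivery is one statistical sample (a real profiler would record the interrupted program counter instead of merely counting).
  #include <signal.h>
  #include <stdio.h>
  #include <sys/time.h>

  static volatile sig_atomic_t samples;

  static void on_prof(int sig) { (void)sig; samples++; }

  int main(void)
  {
      /* Fire every 10 ms of consumed CPU time: it_interval, it_value. */
      struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
      signal(SIGPROF, on_prof);
      setitimer(ITIMER_PROF, &it, NULL);

      for (volatile long i = 0; i < 300000000L; i++)  /* workload */
          ;
      printf("%d samples\n", (int)samples);
      return 0;
  }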
Program counter 
(Wikipedia) The program counter, often shortened to PC (also called the instruction pointer, part of the instruction sequencer in some computers), is a register in a computer processor which indicates where the computer is in its instruction sequence. Depending on the details of the particular machine, it holds either the address of the instruction being executed, or the address of the next instruction to be executed. The program counter is automatically incremented for each instruction cycle so that instructions are normally retrieved sequentially from memory. Certain instructions, such as branches and subroutine calls and returns, interrupt the sequence by placing a new value in the program counter. In most processors, the instruction pointer is incremented immediately after fetching a program instruction; this means that the target address of a branch instruction is obtained by adding the branch instruction's operand to the address of the next instruction (byte or word, depending on the computer type) after the branch instruction. The address of the next instruction to be executed is always found in the instruction pointer. The basic model (non-von Neumann) of reconfigurable computing systems, however, uses data counters instead of a program counter.
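A toy worked example in C, assuming a fixed 4-byte instruction width: the branch operand is added to the address of the instruction after the branch, because the PC has already been incremented at fetch time.
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t pc      = 0x1000;  /* address of the branch instruction */
      int32_t  operand = -8;      /* signed displacement in the branch */
      uint32_t next    = pc + 4;  /* PC after the fetch                */
      printf("branch target: 0x%x\n", next + operand);  /* 0xffc */
      return 0;
  }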
PTrace 
(...)


Q


R

RCU (Read-copy-update) 
(Wikipedia) Read-copy-update (RCU) is an operating system kernel technology for improving performance on computers with more than one CPU. More technically, it is a synchronization mechanism which can sometimes be used as an alternative to a readers-writer lock. It allows extremely low overhead, wait-free reads. However, RCU updates can be expensive, as they must leave the old versions of the data structure in place to accommodate pre-existing readers. These old versions are reclaimed after all pre-existing readers finish their accesses. RCU can be put to a number of other tasks, including dynamically changing NMI (non-maskable interrupt) handlers and implementing lazy barriers, but it is most frequently used as a replacement for reader-writer locking. RCU is available in a number of operating systems, including version 2.6 of the Linux kernel. The technique is covered by U.S. software patent 5,442,758, issued August 15, 1995 and assigned to Sequent Computer Systems, as well as by 5,608,893, 5,727,528, 6,219,690, and 6,886,162. The now-expired US patent 4,809,168 covers a closely related technique. RCU is also the topic of one claim in the SCO v. IBM lawsuit.
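A hedged sketch of the usual kernel usage pattern (updaters assumed serialized externally, e.g. by a mutex, and global_conf assumed initialized): readers take no lock at all, while an updater publishes a copy and reclaims the old version only after all pre-existing readers are done.
  #include <linux/errno.h>
  #include <linux/rcupdate.h>
  #include <linux/slab.h>

  struct conf { int value; };
  static struct conf *global_conf;   /* shared, RCU-protected pointer */

  /* Read side: extremely cheap, never blocks. */
  int read_value(void)
  {
      int v;
      rcu_read_lock();
      v = rcu_dereference(global_conf)->value;
      rcu_read_unlock();
      return v;
  }

  /* Update side: copy, publish, wait, reclaim. */
  int update_value(int v)
  {
      struct conf *old = global_conf;
      struct conf *new = kmalloc(sizeof *new, GFP_KERNEL);

      if (!new)
          return -ENOMEM;
      new->value = v;
      rcu_assign_pointer(global_conf, new);
      synchronize_rcu();             /* pre-existing readers finish */
      kfree(old);
      return 0;
  }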
Readers-writer lock 
(Wikipedia) In computer science, a readers-writer lock (also known by the name multi-reader lock,[1] or by typographical variants such as readers/writers lock) is a synchronization primitive that solves one of the readers-writers problems. A readers-writer lock is like a mutex, in that it controls access to some shared memory area, but it allows multiple threads to read from the shared area concurrently. Any thread that needs to write to the shared memory, of course, needs to acquire an exclusive lock. One potential problem with a conventional RW lock is that it can lead to write starvation, meaning that so long as at least one reading thread holds the lock, no writer thread will be able to acquire it. Since multiple reader threads may hold the lock at once, this means that a writer thread may continue waiting for the lock while new reader threads are able to acquire the lock, even to the point where the writer may still be waiting after all of the readers which were holding the lock when it first attempted to acquire it have finished their work in the shared area and released the lock. To avoid writer starvation, a variant on a readers-writer lock can be constructed which prevents any new readers from acquiring the lock if there is a writer queued and waiting for the lock, so that the writer will acquire the lock as soon as the readers which were already holding the lock are finished with it. This variation is sometimes known as a "write-preferring" or "write-biased" readers-writer lock. Readers-writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores. They are rarely implemented from scratch. The read-copy-update (RCU) algorithm is one solution to the readers-writers problem. RCU is wait-free for readers. The Linux kernel implements a special solution for the case of few writers, called seqlock.
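In C with POSIX threads the primitive is pthread_rwlock_t; a minimal sketch (whether the implementation is read- or write-preferring is left to the platform):
  #include <pthread.h>

  static int shared_value;
  static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

  int reader(void)
  {
      pthread_rwlock_rdlock(&rw);  /* many readers may hold this at once */
      int v = shared_value;
      pthread_rwlock_unlock(&rw);
      return v;
  }

  void writer(int v)
  {
      pthread_rwlock_wrlock(&rw);  /* exclusive: no readers, no writers */
      shared_value = v;
      pthread_rwlock_unlock(&rw);
  }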
Real time 
(Wikipedia) In computer science, real-time computing (RTC) is the study of hardware and software systems which are subject to a "real-time constraint"—i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or even preferred. The needs of real-time software are often addressed in the context of real-time operating systems, and synchronous programming languages, which provide frameworks on which to build real-time application software. A real-time system may be one whose application can be considered (within context) to be mission critical. (…) Real-time computations can be said to have failed if they are not completed before their deadline, where their deadline is relative to an event. A real-time deadline must be met, regardless of system load.
Reentrancy (LTTng perspective)
(...)
RLNX 
Révolution Linux, a service company specializing in free software
RSE 
Remote System Explorer


S

Sample 
(Mario) A sample is one record of a specific observation that was made on a running system at one specific instant “t”. The record may be composed of one or more different fields. Examples are: the degree of network load or the complete image of the system’s memory at one given time.
Sampling 
(...)
Sampling Profiler 
(...)
Semaphore 
(Wikipedia) In computer science, a semaphore is a protected variable or abstract data type which constitutes the classic method for restricting access to shared resources, such as shared memory, in a parallel programming environment. A counting semaphore is a counter for a set of available resources, rather than a locked/unlocked flag of a single resource. The semaphore was invented by Edsger Dijkstra. Semaphores provide a solution for preventing race conditions, as in the dining philosophers problem, although they do not by themselves prevent resource deadlocks.
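A minimal POSIX sketch of a counting semaphore guarding four identical resources (the names NRES and user are hypothetical):
  #include <semaphore.h>

  #define NRES 4                /* four identical resources */
  static sem_t available;

  void *user(void *arg)
  {
      sem_wait(&available);     /* P: take one resource, block if none */
      /* ... use the resource ... */
      sem_post(&available);     /* V: give it back */
      return NULL;
  }

  int main(void)
  {
      sem_init(&available, 0, NRES);  /* counting semaphore, count = 4 */
      /* ... spawn threads running user() ... */
      sem_destroy(&available);
      return 0;
  }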
Static code analysis 
(Wikipedia) Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code, and in the other cases on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding or program comprehension. The sophistication of the analysis performed by tools varies from those that only consider the behavior of individual statements and declarations, to those that include the complete source code of a program in their analysis. Uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., its behavior matches that of its specification). Some people (…) consider software metrics and reverse engineering to be forms of static analysis. A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and locating potentially vulnerable code.
Static instrumentation 
(...)
Support vector machine (SVM) 
(...)
Swap 
(Red Hat) Also known as "swap space." When a program requires more memory than is physically available in the computer, currently-unused information can be written to a temporary buffer on the hard disk, called swap, thereby freeing memory. Some operating systems support swapping to a specific file, but Linux normally swaps to a dedicated swap partition. The term is something of a misnomer: in Linux, "swap" actually refers to demand paging.
Symmetric multi-processing 
(Wikipedia) Symmetric multiprocessing, or SMP, is a multiprocessor computer architecture where two or more identical processors are connected to a single shared main memory. Most common multiprocessor systems today use an SMP architecture. In case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors. SMP systems allow any processor to work on any task no matter where the data for that task are located in memory; with proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.
Sysprof 
(...)
SystemTap 
(...)


T

Task 
(Mario) A task is "an execution path through address space". In other words, a set of program instructions that is loaded in memory. The address registers have been loaded with the initial address of the program. At the next clock cycle, the CPU will start execution, in accord with the program. The sense is that some part of 'a plan is being accomplished'. As long as the program remains in this part of the address space, the task can continue, in principle, indefinitely, unless the program instructions contain a halt, exit, or return. In the computer field, "task" often has the sense of a real-time application, as distinguished from a process, which takes up space (memory) and execution time. (…) Both "task" and "process" should be distinguished from event, which takes place at a specific time and place, and which can be planned for in a computer program. In a computer graphical user interface (GUI), an event can be as simple as a mouse click or keystroke.
TCF 
Target Communication Framework (Eclipse)
TCP 
  1. Telephony Control Protocol, protocol included in the Bluetooth protocol stack
  2. Transmission Control Protocol, one of the core protocols of the Internet protocol suite
  3. Trusted Computing Platform, a cross-platform hardware-based platform for improved security
Thread 
(Wikipedia) A thread in computer science is short for a thread of execution. Threads are a way for a program to fork (or split) itself into two or more simultaneously (or pseudo-simultaneously) running tasks. Threads and processes differ from one operating system to another but, in general, a thread is contained inside a process, and different threads of the same process share some resources while different processes do not. Multiple threads can be executed in parallel on many computer systems. This multithreading generally occurs by time-division multiplexing ("time slicing") in very much the same way as the parallel execution of multiple tasks (computer multitasking): the processor switches between different threads. This context switching can happen so fast as to give the illusion of simultaneity to an end user. On a multiprocessor or multi-core system, threading can be achieved via multiprocessing, wherein different threads and processes can run literally simultaneously on different processors or cores. Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The operating system kernel allows programmers to manipulate threads via the system call interface. Some implementations are called kernel threads, whereas a lightweight process is a specific type of kernel thread that shares the same state and information. Absent that, programs can still implement threading by using timers, signals, or other methods to interrupt their own execution and hence perform a sort of ad hoc time slicing. These are sometimes called user-space threads.
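A minimal POSIX threads example in C: two threads of the same process share the globals and the heap, but each has its own stack and program counter.
  #include <pthread.h>
  #include <stdio.h>

  static void *say(void *arg)
  {
      printf("hello from thread %s\n", (const char *)arg);
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, say, "one");
      pthread_create(&t2, NULL, say, "two");
      pthread_join(t1, NULL);   /* wait for both to finish */
      pthread_join(t2, NULL);
      return 0;
  }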
TMF 
(Eclipse) Tracing and Monitoring Framework [2]
Trace 
(Mario) A trace is a chronological sequence of records resulting from observations that were made on a running system at different instants “tn”, during a specific period “delta-t”. Traces are ordered lists of records specifying instructions that were executed on one or many local/distributed single-core/multi-core CPUs with respect to time.
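To make the record structure concrete, a hypothetical C layout for one record (real formats such as LTTng's are more elaborate, with compact event headers and per-CPU buffers):
  #include <stdint.h>

  struct trace_record {
      uint64_t timestamp_ns;  /* instant "tn" of the observation */
      uint16_t cpu_id;        /* which core emitted the event    */
      uint16_t event_id;      /* what was observed               */
      uint32_t payload_len;   /* variable-length fields follow   */
  };

  /* A trace keeps such records in chronological order: */
  int comes_before(const struct trace_record *a,
                   const struct trace_record *b)
  {
      return a->timestamp_ns < b->timestamp_ns;
  }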
Trace abstraction 
(...)
Trace abstraction level 
(...)
Trace correlation 
(...)
Trace pattern 
(...)
Trace point (LTTng) 
(...)


U

UDP 
User Datagram Protocol, a simple transport protocol used in the Internet
UML 
Unified Modeling Language
Umple 
(...)
UST 
LTTng Userspace Tracer


V

Valgrind 
(...)
Virtual machine 
(Wikipedia) In computer science, a virtual machine (VM) is a software implementation of a machine (computer) that executes programs like a real machine. A virtual machine was originally defined by Popek and Goldberg as an efficient, isolated duplicate of a real machine. Current use includes virtual machines which have no direct correspondence to any real hardware.[1] Example: a program written in Java receives services from the Java Runtime Environment software by issuing commands from which the expected result is returned by the Java software. By providing these services to the program, the Java software is acting as a "virtual machine," taking the place of the operating system or hardware for which the program would ordinarily have had to be specifically written. Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine: it cannot break out of its virtual world.
VM 
Virtual Machine (see above)
VTune Performance Analyzer 
(...)


W

Wait-free 
(Wikipedia) In computer science, non-blocking synchronization ensures that threads competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress; wait-free if there is also guaranteed per-thread progress. Literature up to the turn of the 21st century used "non-blocking" synonymously with lock-free. However, since 2003, the term has been weakened to only prevent progress-blocking interactions with a preemptive scheduler. In modern usage, therefore, an algorithm is non-blocking if the suspension of one or more threads will not stop the potential progress of the remaining threads. They are designed to avoid requiring a critical section. Often, these algorithms allow multiple processes to make progress on a problem without ever blocking each other. For some operations, these algorithms provide an alternative to locking mechanisms.
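A small C11 sketch (C11 atomics are our choice of notation, not from the cited source): a counter incremented with an atomic fetch-and-add is wait-free, since every thread completes its increment in a bounded number of its own steps, with no lock and no retry loop.
  #include <stdatomic.h>

  static atomic_long counter = ATOMIC_VAR_INIT(0);

  /* Wait-free: no other thread can delay this operation indefinitely. */
  long count_event(void)
  {
      return atomic_fetch_add(&counter, 1) + 1;
  }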


X


Y


Z

Zoom 
(...)