Turbonomic Glossary





Cluster

A computer cluster is a group of two or more computers, or hosts, that are connected in a network and run in parallel to complete individual tasks. Turbonomic discovers the clusters in your environment and represents them in the Supply Chain. You can use clusters to specify the scope for your Turbonomic session, for plans, for policies, and for charts.

In Turbonomic, you can configure placement policies that merge clusters. Turbonomic can use a Merge placement policy to move workloads across cluster boundaries. This creates a Supercluster.


Consumer

See Buyer.


Container

A Container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another (Docker website). The benefits include quick delivery and feedback and lower release risk.

Turbonomic discovers Containers through Kubeturbo targets that run in your Kubernetes clusters (see also Container Pod, Container Spec, Workload Controller, and Namespace).

Controller (Kubernetes)

In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state. (Kubernetes Documentation: https://kubernetes.io/docs/concepts/architecture/controller/)
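The control-loop pattern described above can be pictured with a short Python sketch. This is a toy reconciler, not the Kubernetes API: the deployment names and replica counts are illustrative, and a real controller would watch the cluster through client libraries rather than compare plain dictionaries.

```python
def reconcile(current: dict, desired: dict) -> list:
    """Return the changes needed to move the current state toward the desired state.

    A toy reconciler: compares desired replica counts per deployment name
    against what is currently running (names and counts are illustrative).
    """
    changes = []
    for name, want in desired.items():
        have = current.get(name, 0)
        if have < want:
            changes.append(f"scale up {name}: {have} -> {want}")
        elif have > want:
            changes.append(f"scale down {name}: {have} -> {want}")
    return changes

# Each pass of the control loop observes state and requests changes where needed.
print(reconcile({"web": 2, "worker": 5}, {"web": 3, "worker": 5}))
# ['scale up web: 2 -> 3']
```

Running the loop repeatedly is what nudges the current cluster state closer to the desired state: once the change is applied, the next pass finds nothing to do.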


CPU

In Turbonomic on-prem environments, CPU (Central Processing Unit) is the measure of processing capacity on a host. CPU capacity is typically expressed on the host as a number of cores running at a given processing speed. Turbonomic measures allocated CPU capacity and utilized CPU in hertz of processing power (MHz or GHz).
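To illustrate how a hertz-based capacity relates to cores and clock speed, here is a minimal sketch (an illustration of the arithmetic, not Turbonomic's exact internal formula):

```python
def host_cpu_capacity_mhz(cores: int, clock_speed_mhz: float) -> float:
    """Total CPU capacity of a host in MHz: number of cores x per-core clock speed."""
    return cores * clock_speed_mhz

def cpu_utilization_pct(used_mhz: float, capacity_mhz: float) -> float:
    """Utilized CPU as a percentage of allocated capacity."""
    return 100.0 * used_mhz / capacity_mhz

# A 16-core host at 2.4 GHz (2400 MHz) has 38400 MHz of capacity.
capacity = host_cpu_capacity_mhz(16, 2400)
print(capacity)                              # 38400
print(cpu_utilization_pct(9600, capacity))   # 25.0
```

Measuring both capacity and utilization in hertz lets hosts with different core counts and clock speeds be compared on one scale.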


Data Cloud

Turbonomic Data Cloud is a data service that integrates directly with Turbonomic to pull data about your Applications, Container Orchestration, Hypervisors, Storage, and Cloud Service Providers and give you the means to show and share how Turbonomic is currently assuring application performance.

For more information, see the Data Cloud documentation.

Data Exporter

The Data Exporter is a reporting capability that you can use to feed reporting data to an external system. With the Data Exporter, you can visualize valuable Turbonomic insights in the reporting solutions that you already have in your environment.

The Data Exporter is a component that extracts reporting data from the core Turbonomic platform, transforms the data into JSON, and regularly publishes that data to a Kafka topic.
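The transform step of that extract-transform-publish flow can be sketched in Python. The record fields below are hypothetical, not the exporter's actual schema, and the publish step is only indicated in a comment because it requires a running Kafka broker:

```python
import json

def to_export_message(entity: dict) -> str:
    """Transform a reporting record into the JSON string that would be published."""
    return json.dumps(entity, sort_keys=True)

# In the real component, each JSON message is published to a Kafka topic on a
# regular schedule; here we only show the transform on an illustrative record.
record = {"entity": "vm-42", "metric": "VMem", "used": 2048, "capacity": 4096}
print(to_export_message(record))
```

Because the payload is plain JSON on a Kafka topic, any consumer that speaks Kafka can ingest it into an existing reporting pipeline.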

Data Transfer Object (DTO)

A Data Transfer Object (DTO) is an object that is used to encapsulate data and send it from one subsystem of Turbonomic to another. The API uses DTOs to send and receive REST payloads. For example, a DTO can represent the list of actions for a given entity. Turbonomic uses Kafka software to communicate DTOs between its platform components.
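A DTO is easiest to picture as a plain data-holding object. The Python sketch below (field names are illustrative, not Turbonomic's actual REST schema) shows a DTO representing the list of actions for a given entity, serialized for transport between subsystems:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ActionListDTO:
    """Encapsulates the actions for one entity so the data can be sent as one payload."""
    entity_uuid: str
    actions: list = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize to JSON, as in a REST payload or a Kafka message body.
        return json.dumps(asdict(self))

dto = ActionListDTO("vm-42", ["MOVE", "RESIZE"])
print(dto.to_json())
```

The receiving subsystem deserializes the same structure back, so the DTO acts as the contract between sender and receiver.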


DBMem

The memory in use by the database, as a percentage of the allocated capacity. The database configuration determines the capacity for this resource. Note that for databases, Turbonomic uses this resource to drive actions instead of the VMem on the hosting VM, which means actions are driven by the actual memory consumption of the database.
