Floating Point Operations Per Second (FLOPS)

2025-06-22
2 min read

Overview

FLOPS stands for Floating Point Operations Per Second. It’s a measure of a computer’s performance, used especially in scientific computing, machine learning, 3D rendering, and other fields that require heavy numerical computation.

What is a “Floating Point Operation”?

A floating point operation is a mathematical calculation, such as an addition or a multiplication, involving numbers with decimal points (i.e., floating point numbers like 3.14159 or 0.00001).

These operations are fundamental to many computational tasks, especially in scientific computing, graphics, machine learning, and simulations; the short sketch after the examples below shows how they add up even in a simple calculation.

For example:

  • \( 1.23 + 4.56 \) → 1 floating point addition
  • \( 3.14 \times 2.72 \) → 1 floating point multiplication
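
Counting operations this way scales up quickly. Here is a minimal sketch in plain Python (no external libraries; the function name is just illustrative) that computes a dot product and tallies its floating point operations. For vectors of length n, a dot product works out to roughly 2n operations.

```python
# Minimal sketch: count the floating point operations in a dot product.
# Each element contributes 1 multiplication and 1 addition, so a
# length-n dot product costs roughly 2n floating point operations.

def dot_product_with_flop_count(a, b):
    """Compute a dot product and count the floating point operations."""
    assert len(a) == len(b)
    total = 0.0
    flops = 0
    for x, y in zip(a, b):
        total += x * y  # 1 multiplication + 1 addition
        flops += 2
    return total, flops

a = [1.23, 4.56, 7.89]
b = [3.14, 2.72, 1.41]
result, flops = dot_product_with_flop_count(a, b)
print(f"dot product = {result:.4f}, ~{flops} floating point operations")
```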

Why FLOPS Matter

FLOPS tell you how many of these floating point operations a system can perform per second. The higher the FLOPS, the more capable the system is of handling complex computations quickly.
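
To see the rate in practice, here is a minimal sketch, assuming NumPy is installed, that times a matrix multiplication and estimates the FLOPS actually achieved; the matrix size n = 2048 is an arbitrary choice. Multiplying two n × n matrices takes roughly \( 2n^3 \) floating point operations.

```python
import time
import numpy as np

# Minimal sketch: estimate achieved FLOPS by timing a matrix multiply.
# Each output element needs n multiplications and n - 1 additions, so
# multiplying two n x n matrices costs roughly 2 * n**3 operations.

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up run so one-time setup costs don't skew the timing

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS achieved (FP32, n={n})")
```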

Examples of FLOPS Units

Table. FLOPS Units

| Abbreviation | Full Name  | FLOPS Value         |
|--------------|------------|---------------------|
| KFLOPS       | Kilo FLOPS | 1,000 FLOPS         |
| MFLOPS       | Mega FLOPS | 1 million FLOPS     |
| GFLOPS       | Giga FLOPS | 1 billion FLOPS     |
| TFLOPS       | Tera FLOPS | 1 trillion FLOPS    |
| PFLOPS       | Peta FLOPS | 1 quadrillion FLOPS |
| EFLOPS       | Exa FLOPS  | 1 quintillion FLOPS |
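
As a small illustration, this sketch converts a raw FLOPS figure into the units from the table above; the helper function is just for demonstration.

```python
# Small sketch: format a raw FLOPS value using the units in the table above.
UNITS = [
    (1e18, "EFLOPS"),
    (1e15, "PFLOPS"),
    (1e12, "TFLOPS"),
    (1e9,  "GFLOPS"),
    (1e6,  "MFLOPS"),
    (1e3,  "KFLOPS"),
]

def format_flops(value):
    """Render a raw FLOPS value with the largest fitting unit."""
    for scale, name in UNITS:
        if value >= scale:
            return f"{value / scale:.2f} {name}"
    return f"{value:.0f} FLOPS"

print(format_flops(1.1e18))  # exascale supercomputer: "1.10 EFLOPS"
print(format_flops(8.3e13))  # high-end gaming GPU:    "83.00 TFLOPS"
```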

Real-World Use

  • Supercomputer (e.g., Frontier by Oak Ridge): Over 1 EFLOPS
  • Gaming GPU (e.g., NVIDIA RTX 4090): ~83 TFLOPS (FP32)
  • iPhone or modern laptop: typically hundreds of GFLOPS to a few TFLOPS

In AI and ML

In AI and machine learning, FLOPS measurements help determine how much computational power is needed to train or run neural networks. Training large language models and other deep neural networks involves enormous numbers of floating point operations: a single training run for a large model can require on the order of \( 10^{23} \) floating point operations in total.
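
As a back-of-the-envelope sketch, a commonly cited rule of thumb for transformer training puts total compute at about 6 FLOPs per parameter per training token. The 7-billion-parameter and 1-trillion-token figures below are illustrative assumptions, not measurements of any particular model.

```python
# Back-of-the-envelope sketch using the common ~6 FLOPs per parameter
# per token rule of thumb for transformer training (roughly 2 for the
# forward pass and 4 for the backward pass).
# The parameter and token counts below are illustrative assumptions.

params = 7e9   # hypothetical 7-billion-parameter model
tokens = 1e12  # hypothetical 1-trillion-token training set

total_flops = 6 * params * tokens
print(f"~{total_flops:.2e} total training FLOPs")  # ~4.2e22

# How long would that take at a sustained 100 TFLOPS?
sustained = 100e12  # FLOPS
seconds = total_flops / sustained
print(f"~{seconds / 86400:.0f} days on a single 100 TFLOPS device")
```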

Hardware optimized for high FLOPS throughput (such as GPUs, TPUs, and other specialized AI chips) can train and run models much faster. The term is sometimes written as “FLOP/s” to clarify that it’s a rate (operations per unit of time), though “FLOPS” remains the more common spelling.