What is Floating-Point Performance or FLOPS?

When it comes to processor speed, whether it's a CPU or a GPU, you probably think in megahertz or gigahertz first. But those numbers aren't actually very useful if you're comparing processors across different models.

So maybe that's why graphics card makers in particular have started leaning on a specification called FLOPS to describe their latest and greatest hardware.

But what does that mean? Is it a measure of how many disappointing $60 games your card will run before it dies? No, FLOPS is a performance metric that stands for Floating-Point Operations Per Second. Now, this might sound like the time it takes for your Magic 8-Ball to give you an answer...

But it's really a measure of how quickly your processor can do math that involves a mix of large, small, and fractional numbers. And it matters because computers only have a finite amount of space to store the numbers that they work with.

For a single number, this is typically either 32 or 64 bits, depending on what processor and program you're using. So in order to express a range of very large and very small numbers, some of these bits are allocated to storing the digits of the number itself, while others are reserved to specify where the decimal point should go, a little bit like scientific notation.
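If you're curious what that split actually looks like, here's a quick Python sketch (assuming the IEEE 754 single-precision layout, which is what most hardware uses for 32-bit floats) that pulls a number apart into its sign, exponent, and fraction bits:

```python
import struct

def float_bits(x):
    # Reinterpret the float's four bytes as one unsigned 32-bit integer
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                # 1 bit: is the number negative?
    exponent = (bits >> 23) & 0xFF   # 8 bits: where the point "floats" to
    fraction = bits & 0x7FFFFF       # 23 bits: the digits of the number itself
    print(f"{x}: sign={sign}, exponent=2^{exponent - 127}, fraction bits={fraction:023b}")

float_bits(6.5)      # 6.5 = 1.625 x 2^2
float_bits(0.15625)  # 0.15625 = 1.25 x 2^-3
```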

This makes it easy to express huge or tiny numbers in a limited number of bits. But keep in mind that floating-point operations are less straightforward for your processor to carry out than ones that only involve integers, since the computer needs to deal with ever-changing exponents, convert numbers to and from decimal, and round them off.
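Here's a minimal Python example of that rounding in action (Python floats are 64-bit doubles under the hood):

```python
# Floats only get so many fraction bits, so everyday math can round off.
# 0.1 and 0.2 have no exact binary representation:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Adding numbers with very different exponents forces the processor to
# shift the smaller one into alignment first, and its bits can fall off the end:
big, small = 2.0**60, 2.0**-60
print(big + small == big)  # True: the small term is lost entirely
```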

Now, if you read my previous article explaining the difference between CPUs and GPUs (which, if you haven't, you can check out here), you might already understand why it's more common to see a FLOPS figure on a graphics card's spec page than on a CPU's.

Much of the math your GPU needs to do in order to render the images you see on your screen uses vectors to determine where each line and shape should go, and crunching these numbers means working with many different floating-point values whose exponents can vary quite a bit.
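For a rough taste of that kind of math, here's a toy Python dot product of the sort a GPU grinds through millions of times per frame (the vectors here are made up for illustration, not pulled from any real engine):

```python
# Simple diffuse lighting: how bright a surface looks depends on the angle
# between its normal and the direction of the light, i.e. a dot product.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

light_dir = (0.577, 0.577, 0.577)   # unit vector pointing toward the light
normal = (0.0005, 0.9999, 0.014)    # surface normal, almost straight up
print(dot(light_dir, normal))       # three multiplies + two adds = five FLOPs
```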

Now, supercomputers and the more powerful workstations used for scientific research and weather modeling are also often described in terms of FLOPS, as they too rely heavily on floating-point numbers.

But what exactly does any of this mean for you out there? Yes, you, the one just trying to score a good deal on a video card. Should you go for whatever has the highest FLOPS you can still afford? The answer is probably no. Even though more FLOPS does indicate more raw computational power, it makes a card "better" only in the same way that a CPU with more megahertz or a digital camera with more megapixels is "better."
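For context, the headline number on a spec sheet is usually a theoretical peak, computed roughly like this (a back-of-the-envelope sketch; the core count and clock below are hypothetical, and real-world games rarely sustain anywhere near the peak):

```python
# Peak FLOPS ~= shader cores x clock speed x FLOPs per core per cycle.
# Modern GPU cores are usually counted as 2 FLOPs per cycle thanks to
# fused multiply-add (one multiply plus one add per clock).
def peak_tflops(cores, clock_ghz, flops_per_cycle=2):
    return cores * clock_ghz * 1e9 * flops_per_cycle / 1e12

print(peak_tflops(cores=3584, clock_ghz=1.5))  # ~10.8 TFLOPS, on paper only
```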

Many other things will play a huge role in how good of an experience you'll have with your graphics card, including memory capacity and bandwidth, the specific architecture your GPU uses, and even how nicely the drivers play with the particular games in your library.

So the takeaway here is that unless you're an AI or big data researcher, there's no need to get all starry-eyed over the new Titan V's hundred and ten teraflops of pure power. But at least you know that the specification on the back of the box doesn't refer to the sound your wallet makes as it flops down on the checkout counter.