Linux Performance Analysis (2015)

166 points by benjacksondev | 7/29/2025, 1:15:49 PM | netflixtechblog.com | 36 comments

Comments (36)

janvdberg · 10h ago
My first command is always 'w'. And I always urge young engineers to do the same.

There is no shorter command to show uptime, load averages (1/5/15 minutes), and logged-in users. Essential for a quick system health check!
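
Typical output looks something like this (values are just illustrative):

  $ w
   14:02:07 up 12 days,  3:41,  2 users,  load average: 0.15, 0.22, 0.18
  USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
  alice    pts/0    10.0.0.5         11:03    1:02m  0.04s  0.04s -bash
  bob      pts/1    10.0.0.7         13:58    2.00s  0.01s  0.00s w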

mmh0000 · 8h ago
It should also be mentioned that the Linux load average is a complex beast[1]. However, a general rule of thumb that works for most environments is:

You always want the load average to be less than the total number of CPU cores. If it's higher, you're likely experiencing a lot of waits and context switching.

[1] https://www.brendangregg.com/blog/2017-08-08/linux-load-aver...
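
A quick way to eyeball that rule of thumb (sketch, values illustrative):

  # 1/5/15-minute load averages, then the number of CPU cores to compare against
  $ cat /proc/loadavg
  3.12 2.98 2.75 2/612 48213
  $ nproc
  16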

tanelpoder · 50m ago
On Linux this is not true. On an I/O-heavy system - with lots of synchronous I/Os done concurrently by many threads - your load average may be well over the number of CPUs without there being a CPU shortage. Say you have 16 CPUs and the load average is 20, but on average only 10 of those 20 threads are in Runnable (R) mode and the other 10 are in Uninterruptible sleep (D) mode. You don't have a CPU shortage in that case.

Note that synchronous I/O completion checks for previously submitted asynchronous I/Os (both with libaio and io_uring) do not contribute to system load as they sleep in the interruptible sleep (S) mode.

That's why I tend to break down the system load (demand) by the sleep type, system call and wchan/kernel stack location when possible. I've written about the techniques and one extreme scenario ("system load in thousands, little CPU usage") here:

https://tanelpoder.com/posts/high-system-load-low-cpu-utiliz...
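
A rough way to see that breakdown on a live system (just a sketch, not from the post):

  # count threads that are runnable (R) or in uninterruptible sleep (D), grouped by wait channel
  $ ps -eLo state,wchan:32,comm --no-headers | awk '$1 ~ /^[RD]/' | sort | uniq -c | sort -rn | head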

lotharcable · 25m ago
The proper way is to have an idea of what the load normally is before you need to troubleshoot issues.

What a 'good' load is depends on the application and how it works. On some servers something close to 0 is a good thing. On other servers a load of 10 or lower means something is seriously wrong.

Of course, if you don't know what a 'good' number is, or you are trying to optimize an application and look for bottlenecks, then it is time to reach for different tools.

chasil · 5h ago
Glances is nice. I think it is a clone of HP-UX Glance.

https://nicolargo.github.io/glances/

I have also hacked basic top to add database login details to server processes.

Propelloni · 9h ago
Me too! So much so that I add it to my .bashrc everywhere.
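
Something along these lines (a sketch; the guard keeps it out of non-interactive shells):

  # ~/.bashrc: print a quick health snapshot when opening an interactive shell
  case $- in *i*) w ;; esac
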
__turbobrew__ · 10h ago
If you like this post, I would recommend “BPF Performance Tools” and “Systems Performance: Enterprise and the Cloud” by Brendan Gregg.

I have pulled out a few miracles using these tools (identifying kernel bottlenecks or profiling programs using eBPF) and it has been well worth the investment to read through the books.

yankcrime · 8h ago
Agreed, highly recommended reading. A slightly more up-to-date post of his which recommends tools in such situations is: https://www.brendangregg.com/blog/2024-03-24/linux-crisis-to...
wcunning · 8h ago
Literally did miracles at my last job with the first book, and that got me my current job, where I again used it to prove which libraries had what performance... Seriously valuable stuff.
__turbobrew__ · 4h ago
Yeah, it is kind of cheating. I was helping someone debug why their workload was soft locking. I ran the profiling tools and found that cgroup accounting for the workload was taking nearly all the CPU time on locks. From searches through the Linux git logs I found that cgroup accounting in older kernels had global locks. I saw that newer kernels didn't have this, so we moved to a newer kernel and all the issues went away.

People thought I was a wizard lol.
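
For anyone curious, that kind of analysis can be reproduced with something along these lines (a sketch, not necessarily the exact tooling used here):

  # system-wide CPU profile with call graphs for 30 seconds
  $ sudo perf record -a -g -- sleep 30
  $ sudo perf report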

tomhow · 4h ago
Previously:

Linux Performance Analysis in 60,000 Milliseconds - https://news.ycombinator.com/item?id=10652076 - Nov 2015 (11 comments)

Linux Performance Analysis - https://news.ycombinator.com/item?id=10654681 - Dec 2015 (82 comments)

Linux Performance Analysis in 60k Milliseconds (2015) [pdf] - https://news.ycombinator.com/item?id=44070741 - May 2025 (1 comment)

ch33zer · 7h ago
Almost all of these have been replaced for me with below: https://developers.facebook.com/blog/post/2021/09/21/below-t...

It is excellent and contains most things you could need. The downside is that it isn't yet a standard tool, so you need to get it installed across your fleet.
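
From memory the CLI looks roughly like this (check `below --help`, I may be misremembering the flags):

  $ below live                     # live, top-style view
  $ below record                   # daemon that records history
  $ below replay --time "10m ago"  # step back through recorded data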

benreesman · 2h ago
Oh man, nostalgia city. I vividly remember meeting atop's time-travel debugging at 3am in Menlo Park in 2012. Wild times.
mortar · 11h ago
danieldk · 10h ago
Yeah, I skipped the date and then saw Linux 3.13 in the examples.
5pl1n73r · 4h ago
After this article was written, `free -m` on many systems gained an "available" column (MemAvailable from /proc/meminfo), which estimates how much memory is available for new workloads without swapping (roughly free memory plus reclaimable caches). It's nicer than the "-/+ buffers/cache" line shown in this old article.

  $ free -m
                 total        used        free      shared  buff/cache   available
  Mem:            3915        2116        1288          41         769        1799
  Swap:            974           0         974
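
The "available" number comes straight from MemAvailable in /proc/meminfo (values illustrative):

  $ grep -E 'MemFree|MemAvailable' /proc/meminfo
  MemFree:         1319112 kB
  MemAvailable:    1842644 kB
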
fduran · 7h ago
shameless plug: you can practice this in a free VM https://docs.sadservers.com/docs/scenario-guides/practical-l... (there's a typo there to keep you on your feet)
CodeCompost · 10h ago
> At Netflix we have a massive EC2 Linux cloud

Wait a minute. I thought Netflix famously ran FreeBSD.

craftkiller · 10h ago
My understanding was their CDN ran on FreeBSD, but not their API servers. But I don't work for Netflix.
diab0lic · 10h ago
Your understanding is correct.
achierius · 7h ago
Why did they not choose to use it for both (or neither)? I.e., what reasons for using FreeBSD on CDN servers would not also apply to using them for API servers?
seabrookmx · 6h ago
They are extremely different workloads so.. everything?

The CDN servers are basically appliances, and are often embedded in various data centers (including those run by ISPs) to aggressively cache content. They care about high throughput and run a single workload. Being able to fine-tune the entire stack, right down to the TCP/IP implementation, is very valuable in this case. Since they ship the hardware and software, they can tightly integrate the two.

By contrast, API workloads are very heterogeneous. I'd have to imagine the ability to run any standard Linux software there would also be a big plus. Linux also clearly has much more vetting on cloud providers than FreeBSD.

aflag · 6h ago
Can't you fine-tune Linux as well? Does FreeBSD somehow perform better on a CDN workload? I find it difficult to imagine that the reason is performance, but I don't know what the reason is.
craftkiller · 6h ago
Netflix discusses their reasons starting at 18:20: https://www.youtube.com/watch?v=veQwkG0WdN8&t=18m20s

tl;dw: the performance, the efficiency of development, the community, FreeBSD is a complete operating system, the code base is smaller, the ports system, and the license.

and this video covers the optimizations Netflix has made to FreeBSD: https://www.youtube.com/watch?v=36qZYL5RlgY

Also potentially a reason: according to drewg123, Linux's kTLS was broken. I see drewg123 is also commenting in this thread. Is he the "Drew on my team" mentioned in the first video? Is he the speaker in the second video? Idk. https://news.ycombinator.com/item?id=28585008

drewg123 · 10h ago
The CDN runs FreeBSD. Linux is used for nearly everything else.
louwrentius · 9h ago
The iostat command has always been important for observing HDD/SSD latency numbers.

SSDs especially are treated like magic storage devices with infinite IOPS at Planck-scale latency.

Until you discover that SSDs that can do 10 GB/s don't do nearly so well (not even close) when you access them from a single thread doing random I/O at a queue depth of 1.
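
Easy to demonstrate with fio (a sketch; point it at a scratch file or a test device you can afford to hammer):

  # 4k random reads, one thread, queue depth 1, direct I/O
  $ fio --name=qd1-randread --filename=/tmp/fio.test --size=1G \
        --rw=randread --bs=4k --iodepth=1 --numjobs=1 \
        --direct=1 --ioengine=psync --time_based --runtime=30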

wcunning · 8h ago
That's where you start down the eBPF rabbit hole with bcc/biolatency and other block device histogram tools. Further, the cache hit rate and block size behavior of the SSD/NVMe drive can really affect things if, say, your autonomous vehicle logging service uses MCAP with a chunk size much smaller than a drive block... Ask me how I know
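
For example (install path varies by distro; on Debian/Ubuntu the bcc tools often live under /usr/share/bcc/tools):

  # latency histogram of block I/O, one 10-second interval
  $ sudo /usr/share/bcc/tools/biolatency 10 1
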
rkachowski · 9h ago
it's 10 years later - what's the 60 second equivalent in 2025?
wcunning · 8h ago
BlackLotus89 · 7h ago
PSI (pressure stall information) is missing.

I always use a configured htop (F2 for setup), which isn't mentioned either. Always enable the PSI meters in htop (some Red Hat systems I work with still don't offer them...).

If you have ZFS, enable those meters as well. And htop has an I/O tab; use it!
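
The raw PSI numbers are in /proc/pressure if the kernel has it enabled (output illustrative):

  $ cat /proc/pressure/cpu
  some avg10=1.53 avg60=0.87 avg300=0.73 total=391926703
  $ cat /proc/pressure/io
  some avg10=0.00 avg60=0.12 avg300=0.08 total=27261983
  full avg10=0.00 avg60=0.09 avg300=0.06 total=21185312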

whalesalad · 10h ago
I quite like `iotop` as an alternative to iostat. https://linux.die.net/man/1/iotop
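
A combination of flags I find handy (only show tasks actually doing I/O, per process, with accumulated totals):

  $ sudo iotop -oPa
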
emmelaich · 11h ago
Nice list. sar/sysstat is underrated imho.
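
It also keeps history, so you can look back at what a box was doing, e.g. (history file paths vary by distro):

  $ sar -q 1 5                       # run queue length and load averages, live
  $ sar -u -f /var/log/sysstat/sa29  # CPU usage from earlier in the month
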
mmh0000 · 7h ago
Oh man. There's a blast from the past.

Today, you'd want something like:

Prometheus + Node Exporter [1]

[1] https://github.com/prometheus/node_exporter
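
Even locally it's quick to poke at (rough sketch; metric values illustrative):

  # run the exporter, then check the same load averages the article starts with
  $ ./node_exporter &
  $ curl -s localhost:9100/metrics | grep '^node_load'
  node_load1 0.22
  node_load5 0.35
  node_load15 0.4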

ImPostingOnHN · 8h ago
Maybe I missed it, but checking available disk space is often a good step in diagnosing misbehaving systems.
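
For example:

  $ df -h    # space per filesystem
  $ df -i    # inodes, which can run out even when space is left
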
babuloseo · 10h ago
he forgot about rusttop
AnyTimeTraveler · 2h ago
I'm pretty sure that that didn't exist in 2015 ;)