
High throughput is not always the answer

Life is full of crossroads. A few days ago, my server hit 100% memory usage and started throwing OOM (Out Of Memory) errors. At first, I suspected a memory leak in my code, but the real cause turned out to be something else entirely.

The issue came from a heavy ingestion pipeline I had implemented, a CPU- and memory-intensive workflow that ran in multiple processes. My server was already using multiple workers, and each worker spawned its own set of subprocesses for ingestion. The counts multiply: N workers times M subprocesses per worker means N × M heavy processes all competing for the same RAM, so resource consumption grew far faster than I expected.
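
Here is a minimal sketch of the shape of that failure mode. The names and numbers are illustrative assumptions, not my exact setup; the point is only that each worker builds its own pool:

```python
# Sketch of the failure mode: each server worker builds its OWN subprocess
# pool, so heavy processes multiply. Names and numbers are illustrative.
import multiprocessing as mp

WORKERS = 8       # illustrative: number of server worker processes
POOL_SIZE = 4     # illustrative: ingestion subprocesses per worker

def ingest(item):
    """Stand-in for the CPU- and memory-intensive ingestion step."""
    return item

def worker_main(items):
    # Because every worker spawns its own pool, the totals multiply:
    # 8 workers x 4 subprocesses = 32 heavy processes competing for RAM.
    with mp.Pool(POOL_SIZE) as pool:
        return pool.map(ingest, items)
```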

The solution, surprisingly, was simple. Before diving into tricky optimizations, I just reduced the number of workers, and that alone had a huge impact. Because the process count is a product of workers and subprocesses, trimming either factor shrinks the whole thing. I had been so focused on increasing throughput by using more and more resources that I overlooked the fundamentals.
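
A back-of-the-envelope check shows why such a small change mattered so much. The per-process memory figure here is an illustrative assumption, not a measurement from my server:

```python
# Rough peak-memory estimate before vs. after reducing workers.
# PER_PROCESS_GB is an illustrative assumption, not a real measurement.
POOL_SIZE = 4            # ingestion subprocesses per worker
PER_PROCESS_GB = 1.0     # assumed peak memory per ingestion process

for workers in (8, 2):   # before vs. after the fix
    processes = workers * POOL_SIZE
    peak_gb = processes * PER_PROCESS_GB
    print(f"{workers} workers -> {processes} processes, ~{peak_gb:.0f} GB peak")

# Output:
# 8 workers -> 32 processes, ~32 GB peak
# 2 workers -> 8 processes, ~8 GB peak
```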

In computer science, as in many other fields, we learn about trade-offs. But knowing them in theory is very different from applying them in the real world. We often understand the basics yet still neglect them. And in the end, no matter how advanced the era, even in the age of AI, the basics matter most.