Relearning efficient programming

Published: 2024-10-06

By Allison Byrnes

Setting the scene

In recent years, industry has put a significant focus on getting things done quick and dirty, and that’s particularly true in software engineering. “Fix it in prod” has become less of a meme and more of a hard reality for the big players. A big part of that is a lack of focus on efficiency: users today have so much compute power and storage available that there’s almost no pressure to write efficient code. The result is that a lot of software has turned into bloated garbage, packed with features people don’t need or want (sometimes outright spyware), that takes up tens or hundreds of gigabytes and runs like a pig. Open-source projects have tried to solve some of this, but those communities can swing too far the other way, into declaring that “a window manager is bloat.”

How did we get here?

The obvious answer would be increased compute power and storage availability, but I think it’s more complex than that. We’ve got a whole generation of new graduates who are taught that memory is functionally infinite and storage is cheap and easily expandable, and who learn almost nothing about efficiency beyond big-O notation and compiler flags. On top of the educational issues, there’s increased corporate pressure to get to market first rather than to make the best product. Mostly, though, I think this stems from a “we can fix it later” mentality: it’s easy to justify lazy engineering when you can push an over-the-air update and fix it all later. In practice that rarely happens, and half the time those updates wind up ballooning into bigger file sizes and more performance issues.

How do we fix it?

If I ruled the world, I’d mandate that every engineer develop for a severely constrained processor at least once. I gained a new respect for efficiency and optimization a few years back when I had to drop to the assembly level to fit some guidance code into 2KB of working memory. Old-guard game developers wrote magnificently tiny, optimized programs because fitting a game onto an N64 cartridge required it. Unfortunately, that’s not the world we live in anymore, and frankly those kinds of constraints aren’t directly relevant to most modern devs.
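To give a flavor of what that kind of work looks like: on a chip with no floating-point unit and a couple of kilobytes of RAM, even basic arithmetic gets rethought. Here’s a minimal C sketch of Q8.8 fixed-point math, the sort of substitution a constrained target forces on you. The format, names, and values are illustrative, not from any real guidance system.

    #include <stdint.h>
    #include <stdio.h>

    /* Q8.8 fixed-point: 8 integer bits, 8 fractional bits, all in one
     * 16-bit integer. No FPU required, and each value costs 2 bytes
     * instead of a 4- or 8-byte float. Illustrative only. */
    typedef int16_t q8_8;

    #define Q8_8(x) ((q8_8)((x) * 256.0)) /* convert a constant to Q8.8 */

    static q8_8 q_mul(q8_8 a, q8_8 b) {
        /* Widen to 32 bits, multiply, then shift out the extra scale. */
        return (q8_8)(((int32_t)a * (int32_t)b) >> 8);
    }

    int main(void) {
        q8_8 gain  = Q8_8(1.5);   /* stored as 384 */
        q8_8 error = Q8_8(0.25);  /* stored as 64 */
        q8_8 out   = q_mul(gain, error);

        /* Convert back to double for display only; the target never would. */
        printf("correction = %.4f\n", out / 256.0); /* prints 0.3750 */
        return 0;
    }

The point isn’t this particular trick; it’s that a constrained target forces you to know exactly what every byte and every cycle is doing.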

To my mind, the most obvious solution, then, is changing corporate and educational culture. As I discussed earlier, a lot of this comes from corporate pressures and from our higher-education systems failing us. Not every app can or should be constrained to 2KB of memory, but we should be forcing people to confront these problems in the classroom so they get used to paying attention to efficiency. In the corporate world, we should absolutely place more emphasis on delivering an efficient product, particularly in cloud deployments and big-data applications, where the same instructions might run millions or billions of times a year and every clock cycle matters. Sure, shaving 10ms off a per-record operation might not seem important, but across a 100-billion-item dataset that’s a billion CPU-seconds, roughly 32 CPU-years of compute and electricity, saved on every pass. These kinds of lapses shouldn’t be tolerated from professional engineers making hundreds of thousands a year.
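The back-of-envelope math is worth spelling out. Here’s a minimal C sketch of the calculation; the $0.05 per vCPU-hour rate is an assumed ballpark cloud price, not any provider’s actual number.

    #include <stdio.h>

    /* Back-of-envelope: what shaving 10 ms per record is worth across a
     * 100-billion-record dataset. The price per vCPU-hour is assumed. */
    int main(void) {
        const double saved_s_per_record = 0.010;  /* 10 ms */
        const double records = 100e9;             /* 100 billion items */
        const double usd_per_cpu_hour = 0.05;     /* assumed cloud rate */

        double cpu_seconds  = saved_s_per_record * records;
        double cpu_hours    = cpu_seconds / 3600.0;
        double cpu_years    = cpu_hours / (24.0 * 365.0);
        double usd_per_pass = cpu_hours * usd_per_cpu_hour;

        printf("CPU-hours saved per pass: %.0f\n", cpu_hours);    /* ~277,778 */
        printf("CPU-years saved per pass: %.1f\n", cpu_years);    /* ~31.7 */
        printf("USD saved per pass:       %.0f\n", usd_per_pass); /* ~13,889 */
        return 0;
    }

At that assumed rate it’s about $14K per pass; run the pipeline daily for a year and you’re past $5 million, before you count memory, network, or the electrical bill.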

Further, we should encourage work on open-source projects and other crowd-maintained initiatives. A lot of people contribute to these to gain experience anyway, but it’s something more workplaces should be funding. Pay your engineers to write kernel code for four hours a month. It might cost you a bit at first, but let them see what happens when they try to get Torvalds to approve a bloated addition to his baby. You’ll come out the other side with a better team and a reputation for supporting open source, which earns you a hell of a lot of favor from power users the world over.