The software pyramid and why you should learn programming from your grandparents
- Atanas Georgiev


For us humans, education is the process of acquiring knowledge, and for some of us, the end goal of this process is to become an expert in a particular field. Education, regardless of the area, always follows the same pattern: a gradient from simple to complex. We first learn how to add 5 and 7, then how to multiply them, and eventually, over a decade later, we learn how to apply our prior knowledge to find the local minimum of a function. You cannot learn it backward, and you cannot start solving derivatives without knowing how to add and multiply. More importantly, you cannot conceptually comprehend the more advanced topics before mastering the basics first. This rule applies to every single domain of science, but for some reason it doesn’t seem to apply to the computer science field of today.
The roots of computation theory and algorithms go back to the 9th century, so the mathematical foundation of software is over a millennium old. The birth of modern software engineering, in the sense of “programming a digital computer using a high-level programming language”, dates to sometime in the late 1960s. Like in every other domain of knowledge, everything that we consider cutting-edge now is nothing but the sum of all that we have learned and created from the dawn of humankind until today.
When it comes to software, I challenge you to create any type of computer program today that doesn’t critically depend on technology that originated in the 60s. Whichever programming language you decide to use, it most certainly, somewhere down the line, depends on a compiler or interpreter that is written in C, a language that appeared in 1972 and was inspired by the language B, which was released in 1969. The operating system on which you will run your program is also most likely written in C, at least the kernel and the core libraries. The UNIX architecture and philosophy on which Linux and macOS are still based today are also products of the 60s. Both C and UNIX, by the way, are creations of the underappreciated (by the media) Dennis Ritchie, who, by the sum of his contributions to computer science, is much more important than the celebrity businessmen Bill Gates and Steve Jobs combined.
All of our progress in the past 60 years has mostly been the addition of extra conveniences, and it can be classified as evolution, but it has absolutely not been a revolution. It is not a revolution because nothing has been replaced, only added upon and perhaps improved here and there. At least on the software side. On the hardware side, the progress has been more substantial, in my opinion.
I’m not saying here that everyone in 2026 should be a C developer. I’m saying that the foundation of what we have today should be studied and understood by everyone who wishes to be a professional software engineer. My personal impression is that actively practicing developers today are less and less aware of fundamental concepts like data structures, memory management, CPU instructions, context switching, etc.
Just because we have created programming languages and frameworks that hide all of this from the programmer doesn’t mean that it’s not happening under the hood. None of this has been replaced or made obsolete. We have basically switched from a manual gearbox to an automatic transmission, but even with an automatic transmission, sometimes you need to understand how gears work to get out of the deep mud. The automatic is good enough for driving to the supermarket, but it’s not good enough for the race driver.
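To make the “under the hood” point concrete, here is a minimal sketch in C (my own illustrative code, not lifted from any particular runtime) of roughly what a dynamic array in a higher-level language, a Python list or a JavaScript array, does for you on every single append: allocate, grow and eventually release memory.

```c
#include <stdio.h>
#include <stdlib.h>

/* A growable array of ints: roughly the bookkeeping that a
   higher-level "list.append(x)" performs behind the scenes. */
typedef struct {
    int    *data;
    size_t  len;
    size_t  cap;
} IntVec;

static void vec_push(IntVec *v, int x) {
    if (v->len == v->cap) {
        /* Grow geometrically so that appends stay O(1) amortised. */
        size_t new_cap = v->cap ? v->cap * 2 : 8;
        int *p = realloc(v->data, new_cap * sizeof *v->data);
        if (!p) { perror("realloc"); exit(EXIT_FAILURE); }
        v->data = p;
        v->cap  = new_cap;
    }
    v->data[v->len++] = x;
}

int main(void) {
    IntVec v = {0};
    for (int i = 0; i < 100; i++)
        vec_push(&v, i * i);
    printf("len=%zu cap=%zu last=%d\n", v.len, v.cap, v.data[v.len - 1]);
    free(v.data);   /* the step a garbage collector hides from you */
    return 0;
}
```

None of this complexity went away when the languages stopped showing it to you; it just moved below the waterline, and it still decides how fast your code runs.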
Similarly, we all take smartphone photos that look OK, but professional photographers who are serious about their work still need to understand how the camera and optics work and the concepts of shutter speed, curtains, aperture, ISO, etc., in order to take photos on another level.
Software developers, unlike the race drivers and the photographers, have been excused and exempted from having to understand their tools of labour and from delivering the best possible results; they can get by with the equivalent of a smartphone photo.
This, however, is by no means the developer’s fault. On the contrary, this is what we are expected and encouraged to deliver. It’s merely an adjustment to what the job market dictates. The current economic environment of technology in general, and of software in particular, is such that value is mostly driven by presentation, hype and promise rather than functionality, performance, reality and “fitness for a particular purpose”, as they say in disclaimers. A software project based on yet another short-lived but fashionable JavaScript framework is perceived as modern, cutting-edge, future-proof and therefore valuable and investment-worthy. The same application written in C++ (or even Java or C#) will most probably be 100 times more performant and will require 10 times less maintenance in the long term, because it is based on mature technologies that won’t get deprecated after 6 months. Regardless, C++ will be seen by stakeholders and managers alike as ancient and irrelevant because it originated in the 80s, and the 80s were like 1000 years ago or something. So in order to be “cutting-edge”, we need to write our thing in a language that will run painfully slowly in a browser (web-view) which in turn was written in C++. But then it’s fine: if we don’t see it, it’s not there.
In software, just because something is new and/or trendy doesn’t mean that it’s automatically better. In many cases it is actually a slower, more complex and therefore less reliable abstraction built on top of something that preceded it. The advertised benefit of such extensions is often that certain complexities are handled automatically by the language/framework instead of being exposed to the programmer. This obscuring of the details is convenient for the simple cases and textbook examples but is always less flexible when things get more complicated, and things always get complicated in a real-world software project. Oftentimes this loss of flexibility leads to ugly workarounds and a high level of over-engineering. These automations also, of course, make everything universally slower, because the language makes assumptions that might not align with the programmer’s intention and performs checks that are almost always redundant. Oftentimes, modern languages intentionally limit the programmers’ freedom to save them from their own stupidity and incompetence, which is seldom effective but always condescending.
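As a hedged illustration of what “checks that are almost always redundant” means in practice, here is roughly what a managed runtime does around every indexed array access, written out explicitly in C (the helper name is mine, purely for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Roughly what a managed runtime does on every indexed access:
   validate the index, then load the element. */
static int checked_get(const int *a, size_t len, size_t i) {
    if (i >= len) {                 /* the "safety" branch */
        fprintf(stderr, "index out of range\n");
        exit(EXIT_FAILURE);
    }
    return a[i];
}

int main(void) {
    int a[1000];
    for (size_t i = 0; i < 1000; i++) a[i] = (int)i;

    long sum = 0;
    /* The loop condition already guarantees i < 1000, so the
       per-element check inside checked_get is redundant work. */
    for (size_t i = 0; i < 1000; i++)
        sum += checked_get(a, 1000, i);

    printf("%ld\n", sum);
    return 0;
}
```

A good compiler or JIT can sometimes hoist such checks out of the loop, but by no means always, and you pay for them whether or not you ever needed the safety net.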
I’m not saying that higher-level, “safe” languages have no place in the world, of course they do. Python for example is great for people who need to do computations but are not programmers. It is shockingly slow, but in many such “non-production” use-cases this is acceptable. However, the recent trend of desktop applications and, horrendously, OS components (on Windows) being written in JavaScript is just an abomination.
Another major risk of jumping on the newest and shiniest trendy language or framework is that, in the probable situation of it turning out to be a fad and disappearing, your project also automatically becomes garbage and all you can do is “rm -rf” it and have a glass of water while you contemplate your life decisions. This isn’t as common with programming languages, but it is very common when it comes to trends in architecture and frameworks. We were all there when everything under the sun had to be broken into microservices. We were there when everything had to use blockchain (I still don’t know how). No-code, NoSQL, big data, you name it. We are still here as of 2026, when everything has to use AI regardless of whether it makes any sense or not, but I have already written about AI, and it’s still unfolding, so let’s not digress.
All the abovementioned technologies have valid applications, but they have been wildly misused because they have one thing in common: they all became trendy enough to make it into the vocabulary of non-technical managers and investors, who then demanded their incorporation into projects where these technologies simply don’t belong.
I would like to let non-technical people know something that they might find hard to believe: there is quite literally no piece of software in existence today which cannot be produced in a programming language from 1969. Actually, pretty much everything in existence today is, one way or another, indirectly written in a language from 1969 anyway. The entire software infrastructure of today is an upside-down pyramid in which a tremendous amount of bloat is supported and held together by a handful of brilliant decisions made 60 years ago by some very clever people.
As I mentioned earlier, we have added some non-essential conveniences. We have developed some new, more efficient approaches to address certain problems. We have developed some decent libraries so we don’t have to reinvent the wheel every day. Furthermore, we have adopted some questionably efficient organisational methodologies and frameworks. But the truth is, absolutely nothing groundbreaking or revolutionary has been invented in software in the past sixty-something years. Everything groundbreaking actually originates in hardware. The applications of today are possible not because of advances in the process of software development but because of advances in the speed and form factor of computer hardware.
If you are a gamer, you might have heard about ray-tracing, which is now all the rage in real-time graphics and is regarded by many gaming outlets as the future of gaming. Well, the mathematical foundation of ray-tracing as a rendering method originates in the 16th century, and it was done on a computer for the first time in 1968 (of course). The actual progress is that computers are now fast enough to execute this algorithm from centuries ago in real time.
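To show just how old that mathematics is, here is the classic ray-sphere intersection test: substitute the ray into the sphere equation and solve the resulting quadratic. The code below is my own minimal sketch in C, not any particular renderer’s.

```c
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3   sub(Vec3 a, Vec3 b) { return (Vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Ray origin o, unit direction d, sphere centre c, radius r.
   Returns the distance to the nearest hit, or -1.0 for a miss.
   It is nothing more than the quadratic formula. */
static double ray_sphere(Vec3 o, Vec3 d, Vec3 c, double r) {
    Vec3   oc   = sub(o, c);
    double b    = 2.0 * dot(oc, d);
    double cval = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * cval;   /* a == 1 for a unit direction */
    if (disc < 0.0) return -1.0;
    double t = (-b - sqrt(disc)) / 2.0;
    return t >= 0.0 ? t : -1.0;
}

int main(void) {
    Vec3 origin = {0, 0, 0}, dir = {0, 0, 1}, centre = {0, 0, 5};
    printf("hit at t = %.2f\n", ray_sphere(origin, dir, centre, 1.0));
    return 0;
}
```

Run this a few hundred million times per second against a whole scene and you have a ray tracer. The formula hasn’t changed; only the hardware that can now afford it has.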
The smartphone and the internet are not achievements of a software nature either: one is a small form-factor computer with a touch screen, and the other is mostly about network infrastructure. Both require software, of course, but all the software necessary to support them could be, and was, written in C. A smartphone is programmed just like a desktop machine.
The so-called “cloud” is just someone else’s computer that you access over the internet. A large company allows you to use a small fraction of their data centre for a fee. That’s all there is to it. The perceived convenience is that you don’t have to invest in your own hardware, and you can rent more or less of the data centre depending on your current needs. It’s slow and expensive, but that’s a topic for another time. The point is that a “cloud” application is programmed just like any other application. The cloud providers usually expose some convenient APIs to make it easier for you to use their services (but mostly to lock you in). Other than that, it’s a program that runs on a computer, a faraway computer.
The so-called “AI” in the form of LLMs is built on ideas that originated in the 40s and the 50s. I will admit that the mathematical models of deep learning have evolved in the more recent decades, but the credit there goes mainly to the area of mathematics. The most important preconditions that allowed the emergence of chatbots are very fast processors (GPUs) and the availability of tremendous amounts of text on the internet. That said, programming a GPU is slightly different from programming a CPU. It is not night-and-day different, but it is tangibly different, and enough for us to say that GPU programming was probably the single major reason for programmers to learn a new paradigm of programming in the past 30 years. Unfortunately, most generalist programmers seem to avoid touching GPUs, and as popular as GPUs are as products at the moment, programming them remains a niche skill in the software world.
If, for a brief moment, you look at a computer from 1995 and compare it with a computer from 2025, all you see is advancements in hardware. Compare the desktop of Windows 95 with the desktop of Windows 11. What you will see today is a higher-resolution image with higher colour depth shown on a larger flat panel. All hardware. The difference in software is mostly a matter of UI design: a different colour palette, icons, animations, transparencies and gradients. All of those we already knew how to do in 1995; it’s just that alpha blending was too expensive for the computers of that time.
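For the curious, the alpha blending mentioned above is one multiply-and-add per colour channel. Here is a minimal sketch in C with 8-bit channels and straight (non-premultiplied) alpha, my own variable names:

```c
#include <stdio.h>
#include <stdint.h>

/* Straight alpha blend of one 8-bit channel:
   out = alpha * src + (1 - alpha) * dst, with alpha in [0, 255]. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha) {
    return (uint8_t)((src * alpha + dst * (255 - alpha) + 127) / 255);
}

int main(void) {
    /* A 50%-transparent red pixel over a blue background. */
    uint8_t alpha = 128;
    printf("R=%u G=%u B=%u\n",
           blend_channel(255, 0,   alpha),
           blend_channel(0,   0,   alpha),
           blend_channel(0,   255, alpha));
    return 0;
}
```

The formula was common knowledge in 1995; doing it for every pixel of the desktop, every frame, simply wasn’t affordable on the hardware of the time.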
The point that I am making here is not that software is stagnating or not going anywhere. In many ways it is maturing, but the meaningful change is slow and gradual, as it should be. It is not moving at the speed of light, and it is not being revolutionised every 5 years. If someone wants to convince you that software engineering as a field is moving at breakneck speed, they either don’t understand it and are falling for marketing terms and hype, or they are the ones running the marketing campaigns.
If you are a good C++ developer, then you will also be a good Java developer and a good Python developer and a good JavaScript developer. The core paradigm hasn’t changed in 70 years. The programming language is a tool for expressing an intent, and the intent is what matters, not the tool. A novel can be great in English, but it can also be great in French. The craftsmanship there is in telling an engaging story, and the language is merely the tool of communication. It helps if you are experienced with your tools, but just knowing how to write in French doesn’t make you a writer.
My advice to my fellow developers is: learn the foundation, and it will serve you throughout your entire career. It will make you a better engineer, because when you know what sits at the tip of that upside-down pyramid, you can learn any “modern”, fancy, resume-boosting fad and you will be good at it, because you will know what is going on under the hood. In this regard, you might learn more from your granddad’s programming book than from your favourite guru on YouTube.



