A century ago, a computer was a person. Teams of people with pencils turned uncertainties into columns of numbers. The revolution began when Claude Shannon showed that circuits could embody Boolean logic. Transistors became switches, switches became gates, gates became adders, adders became ALUs, and atop those grew instruction sets, compilers, operating systems, and finally applications. Each layer constrained the next. Two plus two equals four not by hope, but because the stack is engineered for repeatability from electrons up to software. Fast-forward to today, and we are amid another profound shift. Instead of explicitly coding every behavior, we increasingly teach or train computers using data, a trend exemplified by machine learning and AI. Andrej Karpathy terms this new approach Software 2.0: you specify the intent or goal of the program and provide data, then let the computer automatically generate the logic. In practice, this means feeding a large dataset and a rough model architecture into a training process, which then “compiles” the data into a working model. The end result is a neural network that embodies the solution.
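To make the workflow concrete, here is a deliberately tiny sketch of the Software 2.0 idea. The target rule, the linear model, and all names are invented for illustration: the “program” is specified as example data plus a rough model shape, and an optimization loop, rather than a programmer, produces the parameters.

```python
# "Software 2.0" in miniature: instead of hand-coding the rule y = 2x + 1,
# we supply examples of the desired behavior and a rough model shape,
# then let a training loop "compile" the data into working parameters.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # intent, expressed as examples

w, b = 0.0, 0.0          # the "rough architecture": y ≈ w*x + b
lr = 0.01                # learning rate
for _ in range(2000):    # the training loop plays the role of the compiler
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x   # gradient step for the weight
        b -= lr * err       # gradient step for the bias

# w and b now approximate 2 and 1: logic learned from data, not written by hand
```

The same division of labor scales up to real neural networks; only the model shape and the optimizer grow more elaborate.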
What might programming look like in 2050? Many technologists envision that by then, software development will be far more about declaring goals or constraints and letting adaptive systems do the rest. Perhaps future “programmers” will curate training data, set high-level objectives, and supervise fleets of AI agents that actually produce and verify the code. In other words, the act of programming could evolve into steering intelligent systems: telling an AI what you need, not how to do it. This is a natural extension of current trends.
Such a future raises a fundamental question: how do we trust and understand programs that aren’t explicitly coded? Traditional programming has been largely deterministic: given the same input, a well-written program follows predictable steps to produce the same output every time. This determinism made programs understandable (at least in principle) through logic and flow control. In contrast, modern machine learning models are often non-deterministic and opaque. They involve randomness (in training or inference) and extremely complex internal representations that even their creators can’t straightforwardly interpret.
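The contrast can be made concrete with a toy sketch. The functions and candidate replies below are invented stand-ins, not any real model API: a conventional function maps the same input to the same output forever, while anything that samples, as model inference with a nonzero temperature does, can vary from run to run.

```python
import random

def double(x):
    # Traditional, deterministic code: same input, same output, always.
    return 2 * x

def sampled_reply(prompt, temperature, rng):
    # Toy stand-in for model inference (the prompt is ignored here):
    # the output is drawn from a distribution, so identical prompts
    # can produce different replies.
    candidates = ["yes", "no", "maybe"]
    if temperature == 0:
        return candidates[0]       # greedy decoding restores determinism
    return rng.choice(candidates)  # sampling adds run-to-run variation

assert double(21) == 42                                   # always holds
assert sampled_reply("ok?", 0, random.Random()) == "yes"  # temperature 0 repeats
replies = {sampled_reply("ok?", 1.0, random.Random(seed)) for seed in range(20)}
assert len(replies) > 1  # different random states, different outputs
```

Real systems add many more sources of variation (parallel hardware, floating-point reduction order), but sampling alone is enough to break the old input-to-output guarantee.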
To reach the 2050 vision, certain conditions and assumptions must hold. One is that we continue raising the level of abstraction, in essence making programming more declarative (specifying what outcome we want) and less about the step-by-step imperative detail (how to achieve it). This isn’t a new idea; in fact, it’s a recurring theme in computer science theory. In his 1977 Turing Award lecture, John Backus (who created Fortran) argued that mainstream programming was stuck in the “von Neumann style” – a “primitive word-at-a-time style of programming” in which programs simulate the computer’s step-by-step operation. He famously called conventional languages “fat and weak” because they were hampered by this sequential, stateful mindset. Backus advocated functional programming as a way to liberate us from the von Neumann bottleneck, enabling programs to be more mathematical, compositional, and free of mutable state. Similarly, many modern experts believe that declarative approaches (functional, logic, or constraint-based languages, for example) make it easier to express intent. Instead of describing the control flow, you describe the goal, and let the language runtime or engine figure out the flow. We see this in databases (SQL is declarative: you state what data you want, not how to get it), in configuration management, and elsewhere. As we push toward AI-assisted coding, being able to clearly specify the intent will be crucial: the AI can’t read our minds, so we must express our desires in a high-level, unambiguous way it can work with.
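The imperative/declarative split can be shown side by side. This is a self-contained sketch using Python’s built-in sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3

orders = [("alice", 30), ("bob", 75), ("alice", 50), ("carol", 20)]

# Imperative: spell out the control flow, step by step.
totals = {}
for customer, amount in orders:
    if amount >= 30:
        totals[customer] = totals.get(customer, 0) + amount

# Declarative (SQL): state what result you want; the engine picks the plan.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", orders)
rows = db.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "WHERE amount >= 30 GROUP BY customer"
).fetchall()

assert dict(rows) == totals  # same answer, two very different specifications
```

The loop commits to one particular order of operations; the SQL query leaves the ordering, indexing, and evaluation strategy to the database engine, which is exactly the freedom a declarative specification gives an AI-era toolchain.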
Moreover, human insight and understanding remain the foundation of software, no matter how advanced our tools become. A classic piece of software engineering wisdom from Peter Naur’s essay “Programming as Theory Building” reminds us that a program is not its source code: the program proper is a shared mental construct, a theory, that lives in the minds of the people who work on it. In other words, the real “program” is the understanding that developers have of how a system works and why it’s built that way. The source code (or neural network weights, in an AI) is just an artifact of that understanding. If future programming involves less coding and more guiding of AI, the theory-building aspect, understanding the problem domain and the system’s approach to it, becomes even more critical. We will still need programmers in the sense of people who deeply grasp the problem and can ensure the software (however it’s produced) aligns with human needs. Naur’s insight suggests that even if an AI writes the code, humans must transfer their understanding to the AI (through training data, examples, constraints, etc.), and conversely, humans must be able to extract and maintain a theory of what the AI system is doing.
This essential human understanding, however, faces a significant challenge in today’s environment, where much of the AI discourse seems driven more by market speculation than genuine problem-solving. Unlike the focused theoretical work of pioneers like Shannon, the rush to claim territory in the “AI revolution” creates noise that obscures the real technical challenges: building trustworthy non-deterministic systems, maintaining human understanding of opaque programs, and ensuring AI alignment with actual human needs rather than investor presentations. The most transformative technologies in history emerged from researchers focused on the work itself, not the market implications.
Yet if we step back from the current hype cycle, a grander pattern emerges. Carl Sagan often spoke about the awe of discovery and the cosmic significance of human progress. If we channel a bit of his perspective, we might say: Programming is the way life has taught matter to think. It started with simple instructions etched on silicon, and it is evolving toward something akin to intentionality, our thoughts made executable. Our ancestors taught stones how to add; we may be the generation that teaches circuits how to dream.