Consider the attitude toward low-level assembly languages 30–40 years ago. The attraction of writing programs in pure assembly language was that it allowed highly skilled programmers to produce code that was tightly optimized for size and performance. Compiled languages (like C, FORTRAN, or BASIC) were easier to work with and far more portable between systems, but they were often slower and their output orders of magnitude larger. Thirty years ago it was common to hear statements such as “Higher-level compiled languages are convenient, but they’ll never be able to compete with hand-optimized assembly!”
A funny thing happened, though… actually, three funny things:
First, Moore’s Law worked in our favor: hardware became so fast that the performance difference between hand-written assembly and compiler-generated code ceased to matter for most programs.
Second, the scope and complexity of programs increased by several orders of magnitude, making assembly codebases increasingly difficult to manage, particularly across large teams.
Finally, and perhaps most significantly, compilers got much better. Vast resources were poured into improving compiler performance and the efficiency of higher-level languages, to the point where the assembly code compilers generated was smaller and more efficient than what most developers could write by hand.
That’s actually a very common pattern for programming languages: they begin as solutions to specific types of problems, then are generalized to solve many other existing problems. As new types of problems arise, they’re extended to address those as well, until the weaknesses in their underlying models become apparent, at which point they’re supplanted by new languages and/or tools that address those weaknesses.
In particular, some of the major problems they’re attempting to solve are:
Let’s quickly look at a few examples:
At least let’s hope so!