A conversation with a coworker recently led me to wonder why so little of the recent innovation in writing software has been in runtimes, or at least not in pluggable ones. Before I get into that in any detail, I’ll go over a bit of what a “runtime” even is, and how it’s distinct from other components of software.
This article should be appropriate reading for anyone who has written code in at least one language, is vaguely aware of some of the differences in how other languages run, and is willing to take some things on faith. Any more knowledge than that just means more of the article, up to the last two headings, will be review.
How Software Runs
There are two major, coarse-grained ways of defining how software is actually run: what we call “compiled” and “interpreted” code.
Compiled code is written, like all* code, in plain text. You open up a text file and write some human-readable characters (and some *s and ->s and so forth). Then, when you want to actually run it, you use what you’ve written as input to a program called, unsurprisingly, a compiler, which takes the plain text you’ve written and turns it into “machine code,” which is just a whole lot of bits that your computer can actually run. The compiler outputs an executable, and then you just run that on its own. This looks something like gcc hello.c -o hello ; ./hello – first you compile (gcc originally stood for “GNU C Compiler,” now the “GNU Compiler Collection”) some source code (in the file you wrote, “hello.c”) into an executable named hello (specified by the “-o hello” part). Then you run it (“./hello” means execute the program named hello in the current directory).
Interpreted code is, of course, written in plain text as well. There is no compilation step separate from running the code, though; instead you feed your code as input to an interpreter directly. This looks something like python hello.py. What’s going on under the hood is that python is a program that’s already compiled and runs natively on your machine, and it opens up your file and executes specific instructions for each line that it reads.
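The read-and-dispatch loop at the heart of an interpreter can be sketched in a few lines of Python. This is a deliberately toy example for a made-up two-instruction language (the `run` function, the `set`/`print` instructions, and the program text are all invented for illustration), not how a real interpreter like CPython works:

```python
def run(source):
    # The interpreter reads the program line by line and executes
    # specific instructions for each line, with no compile step.
    variables = {}
    for line in source.splitlines():
        op, _, rest = line.partition(" ")
        if op == "set":        # e.g. "set x 3"
            name, value = rest.split()
            variables[name] = int(value)
        elif op == "print":    # e.g. "print x"
            print(variables[rest])

run("set x 3\nprint x")  # prints 3
```

A real interpreter does the same kind of work, just over a vastly richer language.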
There’s also a third, sort of in-between way of executing code. Some programming languages (perhaps most famously Java) have a separate compilation step but also still run their own interpreter on the compiled output. What’s happening is that the Java compiler does not compile to machine code; it compiles to what we usually refer to as bytecode. Java then reads in the bytecode and executes machine code based on it. In keeping with the last two sections, this looks something like javac Hello.java ; java Hello – you’ll note it has aspects of both of the prior examples.
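As an aside, you don’t need Java installed to look at bytecode up close: CPython also compiles your source to bytecode internally before executing it, and the standard-library `dis` module will show it to you. A minimal sketch:

```python
import dis

def greet(name):
    return "Hello, " + name

# Print the CPython bytecode that the interpreter compiled
# greet() into before executing it.
dis.dis(greet)
```

The output lists instructions like LOAD_CONST and LOAD_FAST, conceptually the same kind of thing the JVM consumes from .class files, just in CPython’s own internal format.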
Runtimes
No program is an island. Every program has to hook into the operating system at some point, if nothing else. The environment in which an executing program operates is sometimes called its “runtime environment,” or just runtime. When you write a C program, your runtime environment is very bare – the C compiler turns your code into machine code and really adds very little beyond that. When you write a Python program, however, the runtime is rich: the interpreter is the thing that’s actually running, and does things like managing your memory and doing function dispatch for you.
Other languages are in between. Since Java compiles your source text to bytecode before it is executed, the only thing running when you execute your code is the Java Virtual Machine (JVM), not a Java source compiler. The JVM does things like manage memory allocations and deletions for you, which C won’t, but you can’t, say, add a new data type at runtime, or change the object hierarchy around, like you can in Python.**
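To make the Python side of that comparison concrete, here is a small sketch of adding a new data type at runtime using the built-in `type()` constructor (the `Point` class and its method are invented for illustration):

```python
# Build a brand-new class while the program is running; the Python
# runtime supports this directly, with no compilation step involved.
Point = type("Point", (object,), {"describe": lambda self: "a point"})

p = Point()
print(p.describe())  # prints "a point"
```

Doing the equivalent in a C program would mean building your own object system from scratch, because the C runtime gives you nothing of the sort.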
Different runtimes make different guarantees, and different languages provide their guarantees in different ways. For example, Python won’t leak memory because the interpreter creates all the objects for you and keeps track of them. Rust programs, however, generally won’t leak memory because the compiler checks that you didn’t write code that could leak, and then writes the parts about freeing memory into the final executable; the safety guarantees in the Rust case are provided not by the program actually running in the end, but by the other program (the Rust compiler) that created it. (You can still leak in Rust on purpose, with things like mem::forget or reference-counted cycles; it’s just hard to do by accident.) Yes, this is a subtle distinction and I am not the greatest teacher; please don’t feel bad if your head is hurting.
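As a small illustration of a guarantee provided by the running program itself: CPython’s runtime includes a cycle-detecting garbage collector, so even objects that refer to each other get reclaimed. A sketch (the `Node` class is invented for illustration):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

# Build a reference cycle: each object keeps the other alive, so
# reference counting alone could never free them.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b

# The runtime's cycle collector finds and reclaims them anyway.
print(gc.collect())  # prints the number of objects collected
```

In C, nothing in the running program would ever clean this up for you; the guarantee lives entirely in Python’s runtime.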
Why Is Bytecode Important?
Normally, when you’re creating a new programming language, you have a whole lot of work to do: you need to define how the language looks (syntax) and works (semantics). Once you have your ideas for those, you need to write the programs that will actually take the source text humans write and run it. There are a lot of different ways to do this! More now than ever. The major ways are basically writing a compiler or an interpreter. But writing compilers is really fucking hard, and writing interpreters isn’t a ton easier (and it generally makes the resulting code much slower). So what if there were a way to get around having to do all that work, but still end up with a fast, correct executable at the end?
The key insight here is that you can write a compiler from your language’s source code to someone else’s bytecode. If you can just get that one pass done, from your language into, say, Java bytecode, you get the whole JVM for free. The JVM gives you garbage collection and very fast execution (along, of course, with great library support, among other things, which isn’t really relevant here but is to many language designers). Instead of writing a compiler that takes human-readable text and turns it into machine-executable code, you can do the much easier task of turning that text into JVM-executable code.
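You can watch this division of labor (one program compiles to bytecode, another executes it) inside a single Python process, since the built-in `compile()` and `exec()` expose the two halves separately:

```python
# compile() plays the role of javac: source text in, bytecode out.
code = compile("total = sum(range(5))", "<generated>", "exec")

# exec() plays the role of the JVM: it hands the compiled code
# object to the runtime for execution.
namespace = {}
exec(code, namespace)
print(namespace["total"])  # prints 10
```

The `compile()`/`exec()` split is a real Python API; the analogy to javac and the JVM is mine, and it is loose, since CPython’s bytecode is an internal format rather than a stable, documented compilation target the way JVM bytecode is.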
Beyond Bytecode: Intermediate Representations
Bytecode is similar in some ways to a thing called intermediate representation, or IR, that many compilers use. This is basically breaking down the source text -> machine executable into an extra step, but then instead of leaving it as code for a runtime to execute, you finish up: source text -> IR -> executable.
Let’s summarize here, with examples of how some languages do things:
Language | Pipeline
---|---
C | Source text -> machine executable
Objective-C | Source text -> LLVM IR -> machine executable
Java | Source text -> JVM bytecode (bytecode executed by the runtime)
Python | Source text is executed directly by the runtime
Every language starts with source text; some move through an IR or bytecode step, and some go all the way to an executable, while the others have an interpreter to run as the last step.
Plugging In
All the languages I’ve used as examples so far are self-contained within the parse-compile-execute zone. Many languages are in fact built as I alluded to a moment ago – they compile from their source down to an IR or bytecode format that is then compiled or executed by a compiler or interpreter the language authors did not write. The example on my mind today is Rust, which compiles down to LLVM IR, which is then compiled by LLVM into machine code. The Rust authors don’t have to write their own compilation to machine code, and can instead leverage the expertise of the LLVM team. Other examples include languages like Clojure and Scala, which compile to Java bytecode and are then run on the JVM. Notably, LLVM and JVM are by far the most prominent targets in this scenario: they’re both open-source and well-documented targets that provide a ton of firepower, and there frankly aren’t a ton of other options.***
Where Are We Going?
So far we’ve talked about a few approaches. You can own the entire source-code-to-running-program pipeline, or you can hand it off somewhere before the end. If you want machine code, you can compile it yourself, or compile only to LLVM IR and let the LLVM toolchain finish the job. If you want the entire weight (both in terms of heaviness and in terms of force, here) of the JVM, you can compile to Java bytecode.
What I don’t see is anyone writing a runtime that is designed to be compiled to. The JVM is very powerful, but it has its own potential issues (type erasure and garbage collection come to mind). But what if you want a runtime with a different set of features? What if you want, say, a runtime with very powerful coroutine support?
The answer is: you write Go.
The answer should be: you write a language that compiles to Go’s IR.
From some quick googling, it seems there is in fact a separate IR for Go, but I was unable to dig up information about how to compile that IR. I also found no evidence that anyone has tried one way or the other (compiling it or compiling to it). The Go team seems dead set on naming everything as inconveniently as possible for googling**** (the language is Go, and they call their IR “assembly,” so googling for “go assembly” mostly finds questions about using assembly language inside Go, not about using Go’s “assembly” language), which makes it hard to be confident in my assertion that no one is doing it, but at best it is niche and rare.
So Let’s Write Runtimes!
From everything I’ve heard on the internet and from my coworkers, the Go language is mostly pretty crappy, or at least has several very serious flaws, but the performance is good and it’s excellent for high-concurrency web stuff, mostly because you get coroutines for free. Unfortunately, the language and the runtime are shackled together, and I will have to wait for someone who actually has the time, energy, and skill to write a runtime with first-class coroutines as a pluggable backend that I can compile to. Realistically, I’ll keep writing slow, non-concurrent, interpreted Python, but perhaps you, dear reader, have an actual need for such a thing.
More than that, I’m partly just curious why no one is working on this. The JVM was never written to be a generic runtime, but it’s been wildly successful in that role. LLVM was written on purpose to be a generic compiler backend, and it’s also wildly successful. Why is no one trying to reproduce the success of the JVM, without all the baggage? There was a project called Parrot VM that attempted to do just this: provide a generic IR format for executing dynamic languages. It never took off, probably because it was originally tied to the tragic development of Perl 6, and thus never gained traction outside of that community. Given the number of new languages that leverage the JVM or other virtual machines (Elixir is a notable example, running on the Erlang VM), there’s clearly demand for this kind of thing.
* Very, very nearly all
** Probably not strictly true. I don’t know the exact limits of the JVM runtime. Last time I looked at it, you could certainly look up methods by name, which would be very difficult in C, but I don’t think you could create new ones at runtime. Even if this stuff is strictly possible though, the point is that Java bytecode is somewhere in-between Python and C in terms of the runtime environment.
*** One important other option is using another programming language as your IR! If you can compile (or perhaps more accurately transpile) your language into C, then you can get gcc (or clang, etc.) to compile that all the way down into machine code. This isn’t really relevant to the post, though, despite being interesting. There’s also CIL, which is basically Microsoft’s version of Java bytecode, interpreted by the CLR. That situation isn’t different enough from the JVM’s for me to include it much elsewhere.
**** Is this on purpose? They work at Google! Maybe they want it to take several queries instead of one, which then allows them to put ads in front of your face more times in a row? I can’t believe anyone is that nefarious; given all the dumb decisions behind Go, I’ll have to assume Hanlon’s Razor here.
Check out the Eclipse OMR project: https://github.com/eclipse/omr where IBM is doing this. We’ve released a lot of the components from our JVM as building blocks for language runtimes – from porting and threading layers to GC tech and eventually the JIT as well.
Your fairly lengthy list of examples of successful runtime projects are the first thing I thought of when I saw your post title, and they seem like a pretty strong set of counterexamples to your “no one is writing runtimes” assertion. So is the real question perhaps “why isn’t the Go runtime easier to target with an alternate front-end language”?
Actually, I’m not sure why I specified “successful” runtime projects, since you mentioned Parrot, but even that seems like a good answer to the question in your title: runtimes aren’t likely to be adopted without a solid front-end language.
You can leak memory in Rust pretty easily. Just not pretty easily accidentally.
The easy, on-purpose way is to use `mem::forget(your_value)`. The destructor will not be run, and if the value points to a heap allocation, the heap allocation will not be cleaned up.
The not-so-easy way is to make two reference-counted pointers (a.k.a. `Rc`) point at each other to form a non-tree-like data structure. Unless you manually drop each reference-counted pointer, dropping the one that is accessed at the top level by the rest of your program leaves the other one holding onto the entire graph.
—
You’ve also left out the BEAM which while originally for Erlang, both Elixir and Lisp Flavored Erlang compile into.
On the JVM, you can dynamically create new methods and classes and I believe swap out existing classes. In fact, the only major difference between the JVM and the Python byte code interpreter is that Python uses easily accessible dictionaries for every kind of lookup, making the language ridiculously dynamic and almost impossible to optimize.
JVM bytecode is JITed, not executed.
Writing a competitive runtime is just so much harder than writing a parser plus a front-end language. It’s quite unlikely that anyone matches the stability and performance of, e.g., Java’s HotSpot JIT within a reasonable amount of time.
When searching for Go related stuff on the internet, use Golang as part of the query.
I actually just discovered Graal, and it both proves your point and shows what you’re missing. They basically fixed the JVM for dynamic code, and are doing crazy things like hotspot compiling LLVM IR!
https://github.com/graalvm/sulong
(There’s more, but that’s what really has my attention. It’s allowing stuff like accelerating Python and Ruby, just interpreting C extensions [!] along the way.)
“No one” is too much of a generalization.
I use bytecode for interpreter mode and a runtime to interface with the OS when compiling, with https://github.com/phreda4
For me, the OS is 12 entry points (for now).
Too much power in the computer allows too much complication in the tools; we really don’t need more. Just my opinion, of course.
Note that Python is actually very similar to Java: Python also compiles to bytecode, which is executed. Python the language being more dynamic is orthogonal to this.
Rust originally had its own compilation backend, it was only later ported to LLVM.
Other notable IRs: C– (used in GHC) and GIMPLE (used in gcc).
For a more rarely used runtime targeted by a different language frontend:
http://www.little-lang.org/index.html
It is a hybrid of perl/C/Tcl running on the Tcl runtime.