Hot Code Reloading, with libc or in the browser

The full source code for the framework described here is available at gh:ayazhafiz/hotreload.

In this post, we will develop a small framework for executing programs with support for hot code reloading. Hot code reloading permits dynamic changes to part of a program's implementation without disturbing the program's active state or requiring a full recompilation, enabling faster iteration cycles during software development. As a small example, here is our framework hot-reloading parts of a simple counter program without changing its state:

Demo of hot code reloading for a simple counter program with the native backend
Demo of hot code reloading for a simple counter program with the browser backend

This is not just live code reloading. When scale or shift is reloaded, i is not affected.

To show how we could implement hot code reloading for both machine-code programs and those running in a browser (both of which are shown in the examples above), we will implement two backends. One will compile our software to machine code and execute it in a two-sided runtime; this is called the native backend. The other will compile our software to JavaScript code and execute it in the browser with a client/server runtime; this is the browser backend.

But first, it will be helpful to mention the framework language, which has been designed to provide a common frontend for both runtimes.

The framework language

Here's the source code of the counter example we saw above (the code shown in the demo above is actually from an earlier version of the framework language, but the presented code works all the same):

ts
import { hotreload, HotReloadProgram } from "../runtime/runtime";

class Counter extends HotReloadProgram {
  @hotreload
  scale(a: number): number {
    return a * 1;
  }

  @hotreload
  shift(a: number): number {
    return a + 0;
  }

  async main(): Promise<number> {
    for (let i = 0; ; ++i) {
      let n = this.shift(this.scale(i));
      this.print(n);
      await this.sleep_seconds(1);
    }
  }
}

This is just TypeScript code, which is checked by the TypeScript compiler and then translated to either C++ (for the native backend) or JavaScript (for the browser backend).

Each program is described by a single class that extends from HotReloadProgram; the entry point is the main method of that class. Some standard functions (print, sleep_seconds) are provided by the base class. The most important thing here is the @hotreload decorator, which marks methods whose implementations should be watched and reloaded as needed by the runtime.

When targeting the browser runtime, the input program can contain any code admissible by the TypeScript compiler. When targeting the native backend, only a subset of TypeScript code is admissible, as I would like this to be an exploration of hot-code reloading rather than that of TS->C++ code generation. Adding support for translation of more constructs should be trivial, but the language really isn't the point here.

Our choice of TypeScript as a high-level DSL works well because it is trivial to translate to both target languages, and all we need is something that we can input into either backend to check if our hot-reloading implementation "just works". We could talk more about programming language interfaces if we were implementing some production system based on this work. In fact, I am of the opinion that it should be programming language compilers/interpreters that provide support for hot code reloading, not external frameworks.

But okay, I digress. Let's get on with what we're really here for -- hot code reloading! First up, the native backend and runtime.

The native backend

So let's say we have some program that we know how to compile to machine code, load, and execute. What extra work do we need to do to support hot code reloading within the executable?

The main thing to figure out is how the executable should be composed. It's clear that in order to change the implementation of a function on the fly without changing the state of a running binary, we cannot statically link the function routine with that binary; otherwise, we would need to re-link (and thus restart) the entire binary when the function changes. So we need to be able to dynamically load, link, and unload the function symbol as needed. And that's just it -- we'll compile our @hotreload-annotated functions as dynamic libraries (also known as shared libraries) and give our main program executable some information on how to load and link those libraries. Then, when a @hotreload function implementation changes, we recompile its dynamic library and instruct the main program to reload and re-link the library.

To me, there is nothing really tricky or even interesting about this idea; it just makes sense! As it turns out, shared libraries are how pretty much all plugin systems work, and thankfully libc has a series of functions devoted to loading/unloading/reading from dynamic libraries (see man 3 dlopen). So, knowing the path forward, the remaining work is to actually set up the runtime in the aforementioned manner.
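
To make the dl* API concrete before wiring it into the runtime, here is a minimal, standalone sketch of the load/lookup/unload cycle. The library path and symbol name here are placeholders for illustration, not anything generated by the framework:

cpp
#include <dlfcn.h>
#include <cstdio>

int main() {
  // Load and link a shared library (hypothetical path). Older Linux toolchains
  // may require linking this program with -ldl; macOS needs no extra flag.
  void* handle = dlopen("./libscale.so", RTLD_NOW | RTLD_LOCAL);
  if (handle == nullptr) {
    fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return 1;
  }

  // Look up a symbol by name and cast it to its known signature.
  dlerror();  // clear any stale error state
  int (*scale)(int) = (int (*)(int))dlsym(handle, "scale");
  if (const char* err = dlerror()) {
    fprintf(stderr, "dlsym failed: %s\n", err);
    return 1;
  }

  printf("%d\n", scale(21));

  // Drop our reference; the library is unloaded once no references remain.
  dlclose(handle);
  return 0;
}

This is essentially the pattern the program runtime below automates, with change detection layered on top.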

First, let's design a system to load and link the dynamic libraries associated with @hotreload functions in the user program (we'll call this the program runtime). For each @hotreload function, we'll have the framework compiler instantiate a HotReload object to manage this work. Let's walk through what that object looks like. (By the way, I use C++ as the target language here to make some things, like memory allocation, easier than they would be in C, where a lot of boilerplate might distract from the more interesting parts. Writing this same runtime in C would be straightforward.)

cpp
template <typename T>
struct HotReload {
 public:
  // ...snipped constructor
  T* get() {
    assure_loaded();
    return loaded;
  }

The only public API of the object is the get method, which retrieves the function pointer associated with the @hotreload function. The signature of the function is described by T, which we instantiate with a concrete type during compilation of the input program to C++ (we'll see what that looks like in a bit).

Before we get to assure_loaded, let's take a look at the data we associate with each HotReload instance.

cpp
template <typename T>
struct HotReload {
  // ...snipped
 private:
  const char* api;
  const char* libpath;
  const char* copypath;
  const char* lockfile;
  void* handle = nullptr;
  T* loaded = nullptr;
  time_t loadtime = 0;

There are comments in the source code describing what each of these members does, but for exposition let's enumerate them here:

  • api: the name of the function symbol to be loaded from the shared library containing the @hotreload function implementation.
  • libpath: the file path of the dynamic library the @hotreload function routine is defined in. When the function implementation changes, it is recompiled with output at this path.
  • copypath: consider the case in which we go to access a function routine while a new implementation is being recompiled to libpath. If the compilation process touches the contents of libpath non-atomically (which it almost certainly does), we would have to spin until the compilation is finished. To avoid this, whenever we detect a change to libpath we first copy its contents to copypath and then read the function routine from there. This way, the user program can continue to use a stale function implementation while the library at libpath is recompiling.
  • lockfile: in general we want to reload the function symbol for api whenever we detect a change to libpath, but libpath may be modified non-atomically during the compilation process, in which case the library contents may be incomplete and non-loadable. To deal with this problem, we check for the presence of lockfile, which exists on the file system when libpath is being written to and is removed once its contents are complete.
  • handle: an opaque handle to the associated dynamic library provided by dlopen.
  • loaded: a pointer to the function symbol api represents.
  • loadtime: the last time we loaded and linked the dynamic library. If the library object file is modified after this time, we know we should reload it again.
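
Given these members, the constructor snipped from the first listing presumably does little more than stash the symbol name and the three paths. A minimal sketch consistent with the fields above (the repository's version may differ):

cpp
template <typename T>
struct HotReload {
  // ...snipped
  // Record the symbol name and file paths; nothing is loaded until the first
  // call to get(), which triggers assure_loaded().
  HotReload(const char* api, const char* libpath, const char* copypath,
            const char* lockfile)
      : api(api), libpath(libpath), copypath(copypath), lockfile(lockfile) {}
  // ...snipped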

Maybe you already see how the loading of symbols is going to work:

cpp
template <typename T>
struct HotReload {
  // ...snipped
 private:
  // ...snipped
  void assure_loaded() {
    struct stat lib;
    stat(libpath, &lib);
    if (loadtime != lib.st_mtime) {
      if (lockfile_exists()) {
        return;
      }
      if (handle != nullptr) {
        if (dlclose(handle)) {
          die("dlclose failed: %s\n", dlerror());
        }
        handle = nullptr;
      }
      copy_file(libpath, copypath);
      handle = dlopen(copypath, RTLD_NOW | RTLD_LOCAL);
      if (handle == nullptr) {
        die("dlopen failed: %s\n", dlerror());
      }
      loadtime = lib.st_mtime;
      dlerror();  // clear errors
      loaded = (T*)dlsym(handle, api);
      const char* err = dlerror();
      if (err != nullptr) {
        die("dlsym failed: %s\n", err);
      }
    }
  }
};

First, we check if libpath has been modified since we last used the function symbol associated with it. If it has and the lockfile associated with active compilation is not present, we proceed with reloading the function symbol. We close our active handle to copypath (remember, this is where we actually read the symbols from, since a handle to libpath would already be invalidated at this point). Then we copy the shared library to our target copypath site. We call dlopen on the shared library which loads and links it, and store the returned library handle in handle. Finally, we grab the actual function routine we're looking for by calling dlsym with the handle and the function name.
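
Both lockfile_exists and copy_file are snipped from the listing above; their real implementations live in the repository. A rough sketch of what they might look like, using only libc calls and the die helper seen earlier (and assuming <unistd.h>, <cstdio>, <cstring>, and <cerrno> are included), is:

cpp
template <typename T>
struct HotReload {
  // ...snipped
 private:
  // Hypothetical sketches of the snipped helpers; the repository's versions may differ.
  bool lockfile_exists() {
    // The lockfile is present exactly while libpath is being (re)compiled.
    return access(lockfile, F_OK) == 0;
  }

  static void copy_file(const char* from, const char* to) {
    // Copy the library byte-for-byte so that dlopen sees a stable snapshot at copypath.
    FILE* src = fopen(from, "rb");
    FILE* dst = fopen(to, "wb");
    if (src == nullptr || dst == nullptr) {
      die("copy_file failed: %s\n", strerror(errno));
    }
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, src)) > 0) {
      fwrite(buf, 1, n, dst);
    }
    fclose(src);
    fclose(dst);
  }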

You may be wondering, why is handle a data member if its state does not need to persist between multiple calls to assure_loaded? The reason is that the destructor of a HotReload instance should also call dlclose on the shared library handle, because handles to shared libraries are reference counted (when there are no references left, the library is unloaded). Of course, since the lifetime of a HotReload instance is exactly that of the user program, it's not like we can introduce memory leaks this way, but hey.
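
In code, that destructor is only a couple of lines; a sketch (again, the repository's version may differ):

cpp
template <typename T>
struct HotReload {
  // ...snipped
  ~HotReload() {
    // Drop our reference to the currently loaded library, if any; dlclose
    // only unloads the library once its reference count reaches zero.
    if (handle != nullptr) {
      dlclose(handle);
    }
  }
  // ...snipped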

Another thing to mention is the use of the RTLD_NOW and RTLD_LOCAL flags in the call to dlopen. RTLD_NOW instructs libc to bind all external symbols in the library immediately rather than lazily, on first use. Since, for us, each shared library contains only one function routine, which is about to be used, there's not much point in delaying symbol resolution. RTLD_LOCAL means that symbols in the library are accessible only through the handle returned by dlopen, which is clearly what we want.

Note that our approach here is lazy in the sense that we cache the function symbol address from the shared library and try to reload the library only when a call to HotReload#get is made. The expectation is that the cost of file copying/loading/linking is amortized across many calls to get(), thus keeping the performance and behavior of the program similar to what it would be without a hot reload runtime. It also avoids a bunch of complexity we might have introduced with background threads listening for file changes, or with spinning in place while the lockfile is present.

Now, let's take a look at what the counter program we showed earlier looks like when compiled with this runtime.

cpp
// /private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/7f828395e1611cb8b3e64ee8c7536f35.cpp
extern "C" int scale(int a) {
return a * 1;
}
// /private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/2872612167e7943ceea64b36d17c89d4.cpp
extern "C" int shift(int a) {
return a + 0;
}
// /private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/fc2d242f0363b851a0b2efd6b9db7df8.cpp
/* <runtime snipped> */
HotReload<int(int)> scale("scale", "/private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/40d1e496db6a6655b65c5d73458b6373", "/private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/2d8885948a7d8c9abf321e4f3f6912c1", "/private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/ef5a9921ce0030c42054ec3fb658b3ad");
HotReload<int(int)> shift("shift", "/private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/fad7d0510a897b50c8f8aec4efc8155e", "/private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/d02eecf94ddeb3592470ff6959fdcaba", "/private/var/folders/_j/4xdvs8jj5qd6nsfk8wf6jy900000gn/T/f3176b0204c187f47b4d0f1cef1a5e37");
int main() {
  for (auto i = 0; ; ++i) {
    auto n = shift.get()(scale.get()(i));
    print(n);
    sleep_seconds(1);
  }
}

Pretty straightforward -- for each @hotreload-annotated function (scale and shift) the framework runtime allocates some unique, temporary files for libpath, copypath, lockfile, and an implementation file to house the function source code. Then the framework writes the @hotreload function implementations to their implementation files, compiles those files as shared libraries at their libpaths, and generates HotReload instances referencing those functions, type-parameterized by their function signatures, in the main program. Finally, we rewrite the raw calls to shift and scale to be shift.get() and scale.get(). If you're wondering why we label the definitions of scale and shift as extern "C", it's to ensure conformance with the C ABI as expected by dlsym (otherwise a C++ compiler may mangle the names).
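
The generated main also calls print and sleep_seconds, which come from the snipped runtime portion of the generated file; hypothetical stand-ins for them would be one-liners:

cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical stand-ins for the snipped runtime helpers used by the generated main().
void print(int n) { printf("%d\n", n); }

void sleep_seconds(int s) { std::this_thread::sleep_for(std::chrono::seconds(s)); }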

The presence of an additional runtime on the framework side is why I refer to the native backend as having a two-sided runtime. As well as compiling and executing the resulting C++ code, the framework runtime is responsible for listening to changes in the input program and recompiling the shared libraries associated with @hotreload functions as needed.

And that's all there is to it! We've implemented a DSL and framework for running programs with no formal dependencies other than libc, and in very little time. Get rid of the DSL, and the runtime alone can be modified to fit any project that doesn't mind a dependency on libc.

Our fun isn't over yet -- now, let's show how to perform hot code reloading for programs in the browser, where program state abounds and there certainly is no libc, let alone the concept of machine-code loading and linking.

The browser backend

Obviously, the approach we described above cannot be readily translated to the execution of JavaScript programs by JavaScript engines in the browser. But thanks to the dynamic nature of JavaScript, changing implementations arbitrarily during a program's execution is startlingly easy. The key idea is that we can inject arbitrary JavaScript code via fresh <script> elements inserted in the DOM, which will immediately execute any code they contain.

In general, this kind of arbitrary code injection is a great way to introduce security vulnerabilities, but since we expect hot code reloading to be used only in local development environments, we'll wave our hands at that and instead accept it as a huge boon to the ease of our implementation.

Since our program is defined entirely in a class, and hot-reloadable functions are just methods on that class, we can apply new changes just by changing the definition of the method on an instance of the class. And in JavaScript, class methods are just properties. For example:

html
<body>
  <!-- some html -->
  <script>
    class Counter {
      // the counter example above
    }
    const program = new Counter();
    program.main(); // execute the program
  </script>
  <script>
    program.scale = (function scale(a) {
      return a * 2;
    }).bind(program);
  </script>
</body>

The second <script> element executes in the same context as the first, so it overwrites the scale method of the program with an implementation that scales the input by 2 rather than 1. Calls to program.scale will immediately begin referencing this implementation. (For those unfamiliar, fn.bind(obj) produces a version of fn whose this reference is obj; without the bind, this inside a free function would be the global object, or undefined in strict mode. Of course, in this example it doesn't matter, because scale does not use this.)

Knowing how to apply changes, the other thing we need to figure out is how to inform the running program of new changes. Since we have no file system access from the browser, it seems we need a web server. And that's exactly what we'll do -- the framework runtime will spin up a web server (we'll call this the server runtime) that serves a web page with the user program and a runtime for applying changes to @hotreload methods of the active program (we'll call this the client runtime). The client runtime will open a websocket with the server runtime, over which the server will send code patches (of the form program.foo = newFoo.bind(program) we saw above) to the client whenever it detects changes to the implementation of @hotreload methods in the input program.

Let's quickly walk through the client runtime to make sure we're on the same page. (The server runtime is not that interesting, but I have attempted to leave it well-commented if you are interested in reading through it.)

js
let _π_reload_id = 0;
const _π_resolve_reload = {};
async function πhotreload(patch) {
  const s = document.createElement("script");

  const reload_id = _π_reload_id++;
  const wait_hotreload = new Promise((resolve) => {
    _π_resolve_reload[reload_id] = resolve;
  });

  s.innerHTML = [patch, `_π_resolve_reload[${reload_id}]();`].join("\n");

  document.body.appendChild(s);
  await wait_hotreload;
  document.body.removeChild(s);
}

const πrecv = new WebSocket(πHR_ROUTE);
πrecv.onmessage = function (event) {
  πhotreload(event.data);
};

When we receive a message over the πrecv websocket, we assume that the message is a well-formed and complete patch to the class instance containing the main program (the patch is assembled in the compiler). We chuck the patch over to πhotreload, which actually loads it in the active session. First, we allocate a new <script> element to hold the patch. Since it's polite to clean up after yourself, we also allocate a fresh Promise that will be resolved by code in the <script> element after the patch has been evaluated. We inject the <script> element into the DOM, await the promise, and then remove the <script> element.

It's that easy! Now we can go to work on web applications without the annoyance of a full recompilation, refresh, and re-navigation to the state we were at every time we want to make a change.

Appendix: Hot Module Reloading

Most modern JavaScript bundlers employ a hot code reloading technique called "hot module reloading" that is more general, but less granular than the per-function hot reloading we have presented here.

Hot module reloading reloads entire modules (i.e. on the granularity of files) when they are changed. This provides for an even simpler runtime implementation than that presented here, since you can just load up a static file from the runtime server whenever something changes. It also doesn't tie you down to the opinionated and somewhat contrived framework language we used here; all you need is a project that is relatively well-modularized. Of course, the downside is that the time and resources associated with recompilation/reloading will now be proportional to the number of modules in your project.

Anyway, hope this has been fun. I doubt this is something most of us will need to think about, let alone implement, on a frequent basis, but it doesn't hurt to know how to do it -- especially when it's so easy, and looks so nice!
