hare-update assists in addressing breaking changes in your code

June 11, 2025 by Drew DeVault

We’re working on a new tool to release alongside the next stable release of Hare (likely Hare 0.25.2, or 0.25.3, following our release policy) – hare-update. The coming Hare release includes a number of small breaking changes, as is usual during Hare’s development phase, but one in particular is relatively odious for users to deal with: nomem.

Handling error cases is mandatory in Hare, even if you just assert on error (terminating the program) – you have to do something about errors. However, failure to allocate memory has long been an exception to this rule, and in many cases it was not possible for Hare programs to handle this sort of failure properly. The “nomem” patches which Lorenz landed in Hare master several months ago address this oversight, but as a result, allocations throughout the Hare ecosystem can now return errors that they previously could not, and programmers are required to modify their code to address them.

The inclusion of the nomem patch in the coming release is the primary reason why the release has been delayed for some time now. Because the nomem change has a significant impact on downstream users, we wanted to ship the next release alongside a tool that helps our users update their codebases to cope with breaking changes. Now that hare-update is almost done, we’re ready to solve this problem.

The hare-update tool has a novel, powerful design which offers a great deal of flexibility in designing and applying rules that determine the right updates to your code to address breaking changes. Moreover, the approach should generalize – third-party libraries could employ this tool to assist users in mitigating breaking changes from their own releases, for example by automating most of the process of upgrading from hare-sdl2 to hare-sdl3. Furthermore, the tool itself may be Hare-specific, but the approach it uses could be generalized to other programming languages.

Before I explain the design and internals of this tool, fascinating though the topic shall be, I will first familiarize you with how it is used and how you can expect to apply it to your own projects when the next Hare release is ready to ship.

Using hare-update to upgrade your project

Let’s start with a demonstration:


The first detail you’ll notice here is that hare-update is invoked by running hare tool update. The coming release of Hare introduces external tools to the build driver (the “hare” command) which can be invoked via hare tool. In this case it executes /usr/libexec/hare/hare-update (or whatever is appropriate for your installation prefix).

As you might infer from this, hare-update is distributed separately from the rest of Hare. It is an optional extension that we will encourage distributions to ship in a separate package, which you can install for only as long as you need it (i.e. however long it takes to update your Hare projects). We may eventually dispense with the need for hare-update altogether when we freeze the language at 1.0 – though it may continue to prove useful for downstream users to mitigate breaking changes in third-party libraries and such, as we’ll see.

The tool has been programmed with most of the breaking changes which are included in the upcoming release, and it will identify affected areas in your code and walk you through them one at a time. It will explain the breaking change and suggest one or more solutions for you to choose from. You’ll be shown a diff of these changes that you may apply if you wish. If you have decided on one strategy for fixing a particular breaking change to apply consistently throughout your code, you can also instruct hare-update to apply this fix without presenting each case to you for separate consideration.

At the end of the process you should use your version control system to review the changes, test them and fix up anything which requires manual intervention, and commit the changes to your repository. It’s as easy as that!

Some rules can be applied trivially – for instance, the errors::nomem symbol in the standard library has been replaced with the built-in nomem type, and you can just swap one for the other in most cases – hare-update will do that for you. However, it won’t remove the errors import if a particular file no longer uses it after the change, so you may want to review the changes and tidy up loose ends like that. Another example that will require manual intervention is if you choose to return nomem errors to the caller in some function – the call-sites of those functions are not updated for you. Some rules will even just insert TODO comments in your code near sites of interest for you to address later.
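
To make the shape of these edits concrete, here is a hedged sketch of the kind of diff hare-update might present for the errors::nomem change – the function and its signature are invented for illustration, and note that the use errors; line at the top of such a file would be left alone even if this were its last use:

-fn grow(buf: *buffer, n: size) (void | errors::nomem) = {
+fn grow(buf: *buffer, n: size) (void | nomem) = {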

Nonetheless, you should find that hare-update smooths the upgrade process over considerably, and I’m optimistic that with time we will improve the sophistication of the tool such that it can address these edge-cases, too. The design of hare-update allows for a great deal of sophistication in applying complex rules to handle the innumerable options available to you for mitigating all sorts of breaking changes.

The design and internals of hare-update

And speaking of the design, let’s get into the details of how hare-update works. An understanding of its design will assist in understanding what kind of problems it can solve, and how, and I think that you will find its design intriguing – you might be interested in utilizing it for your own projects or applying its principles to other programming languages.

The rules engine DSL

At its heart, hare-update contains a rules engine that is programmed with a DSL that is a superset of the Hare language. Each rule can be injected at a specific nonterminal in the Hare grammar through a series of parser hooks.

Note: A “nonterminal” refers to a rule in the language syntax which is composed of other symbols: terminals, such as identifiers or operators like “(” and “!=”, or other nonterminals. These are the building blocks of the language’s “grammar”. The nonterminals (and terminals, for that matter) of the Hare grammar are defined by the language specification.

An example of a nonterminal is the “call-expression”. It consists of an expression which indicates the function (the “object”) of the call, often called the “lvalue”, as in the left-hand value of a binary operation; as well as a pair of terminals – the parenthesis tokens, “(” and “)” – and the “parameter-list” nonterminal. It describes the grammatical structure of a function call in the Hare syntax.
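
Roughly, in grammar notation (a paraphrase of the structure just described, not the specification’s exact production):

call-expression ::= lvalue "(" parameter-list ")"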

Here’s a simple rule that updates callsites of the now-deprecated time::unix function from the Hare standard library:

@rule@("time::unix has been deprecated") :: "call-expression" {
	const lvalue = data: *ast::expr;
	if (!match_access(lvalue, "time::unix")) {
		return;
	};

	@match@ { (${obj:"expression"}) };

	const edit = @edit@ {
		@replace@(lvalue.start, $.end, $obj.text);
		@append@($obj.end, ".sec");
	};
	@present@(edit, "Replace with time::instant.sec");
};

The rule is defined with a hook on a certain nonterminal (call-expression in this case). When the parser encounters a call-expression it will execute this rule, i.e. run this snippet of Hare code. The rule can then adapt the behavior of the parser to analyze the Hare code at this location. First it tests if the lvalue of the call-expression is time::unix; if not then there’s nothing to do and the rule exits.

Next up is the interesting part: the @match@ macro. When this parser hook executes, the parser has consumed the lvalue and is waiting to parse the rest of the call expression with the file offset pointing to the first “(” token. This macro picks up where the parser left off and starts pattern matching against the Hare tokens in the @match@ macro. It starts looking for basic lexical tokens, first grabbing the “(” token at the start of the pattern, but when it encounters ${obj:"expression"} it runs the sub-parser for an expression nonterminal – and captures it in a variable named “obj”.

If the pattern is matched, the rule prepares an @edit@ to fix the breaking change. The @edit@ macro creates a new “edit group” which is bound to the Hare variable “edit”. The @replace@ macro takes a start offset, an end offset, and the replacement text – in this case it replaces everything from the start of the lvalue to the end of the entire @match@ expression ($.end) with the text of the captured $obj variable. Then it @append@s “.sec” to the end of the $obj capture, successfully changing time::unix(whatever) to whatever.sec.

Finally, this edit group is @present@ed to the user for approval. Here is the rule in action:

Nifty! This works even if the subject of the time::unix call is an arbitrarily complex expression, like this:
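
(The embedded screencast is not reproduced here, but as a hypothetical illustration – get_timestamp being a made-up function – the rule would rewrite

const ts = time::unix(get_timestamp(&ctx, clock));

into

const ts = get_timestamp(&ctx, clock).sec;

with the captured $obj expression carried over verbatim.)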

You can also present the user with multiple edits to consider. Here’s the rule that applies the basic nomem changes:

@rule@("Allocations may return nomem errors") ::
	"allocation-expression",
	"append-expression",
	"insert-expression" {
	@match@ {
		alloc${:"balanced"}${l:"location"}
	}, {
		append${:"balanced"}${l:"location"}
	}, {
		insert${:"balanced"}${l:"location"}
	};

	const tok = lex::lex(lex)?;
	switch (tok.0) {
	case ltok::LNOT, ltok::QUESTION =>
		return; // Already fixed by user
	case => void;
	};

	const assertion = @edit@ {
		@insert@($l.start, "!");
	};
	const propagate = @edit@ {
		ensure_nomem(ctx, __eg);
		@insert@($l.start, "?");
	};

	@choice@ {
	case "Add an error assersion when out of memory" => assertion,
	case "Propagate the nomem error to the caller" => propagate,
	};

	rules::warning(ctx,
`You should review your code for possible memory leaks when returning nomem
and leaking objects allocated prior to the memory allocation failure.`);
};

This rule hooks into several parts of the parser, such as “append-expression” nonterminals. These expressions aren’t function calls, but built-ins, so the hooks are different. The rule can match against several different patterns; if one fails to match, the engine simply moves on to the next, and the rule terminates if none of them match.

We also see that there are some “pseudo-nonterminals” supported by the rules engine. The “balanced” pattern matches any number of “balanced” tokens, such that any “opening” token like “(” or “[” is paired with its corresponding “closing” token, in this case “)” and “]”. The “location” token doesn’t match anything, but captures an empty variable at the parser’s current location (so $l.text is "" but $l.start is useful). Essentially, this allows us to simplify pattern matching against the more complex grammar of these expressions (which have numerous forms depending on usage) by taking advantage of the fact that they always take the form of a keyword, followed by “(”, any number of “balanced” tokens, and finally “)”.
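
As an illustration of the matching behavior (not output from the tool), given a source line like

append(q.items, item);

the pattern append${:"balanced"}${l:"location"} matches the append keyword, the “balanced” pseudo-nonterminal consumes the whole (q.items, item) group as one unit, and $l captures an empty span immediately after the closing “)” – which is exactly where the rule above inserts “!” or “?”.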

We can also access the lexer directly through an API comparable to the standard library’s hare::lex and hare::parse modules. We use this to determine if the error was already fixed, either by the user or an earlier use of hare-update.

Finally, we prepare edit groups for each of the possible solutions, and present them to the user to choose from. Pretty cool!

Forking parsers

I’d like to explain how this rules engine works internally, but before we move on, let me point out a particularly important detail which may have escaped your notice: the specific needs of the parser used by hare-update.

The parser is forked from the Hare standard library, which includes a standards-conformant Hare parser tracking the latest Hare language standard. However, we cannot use this parser directly for precisely that reason: we need to be able to parse Hare code targeting the prior version. In fact, we need a parser which can tolerate two versions of the grammar at once, to support codebases which may have been partially upgraded by hare-update or by manual intervention on the user’s part.

The process begins by forking the parser from the standard library, importing it into hare-update under the “vNEXT” module, and then going over the changes since the most recent stable release and adjusting the parser to be tolerant of both versions. After this, the parser is modified to add hooks to the nonterminals which are of interest to rules authors addressing breaking changes planned for this release cycle.

Here’s a peek at the instrumentation which invokes hooks for the call-expression nonterminal:

export fn call(lexer: *lex::lexer, lvalue: ast::expr) (ast::expr | error) = {
	// Invoke the parser hook
	on(lexer, nonterminal::CALL_EXPRESSION, &lvalue)?;

	// Continue parsing this expression
	want(lexer, ltok::LPAREN)?;
	// ...

There is another major change we need to make to the parser infrastructure to support hare-update’s unique use-case: we need to be able to save and restore the parser state. When we execute a hook, it will start consuming tokens in order to pattern match rules against the user’s code, which will leave the lexer in a state which is not suitable for it to pick up where it left off after the hook runs. So the lexer is updated accordingly:

export type restore_point = struct {
	off: io::off,
	state: lexer,
};

// Saves the state of a [[lexer]], to be restored later with [[restore]]. The
// underlying I/O source must be seekable.
export fn save(lex: *lexer) (restore_point | io::error) = {
	return restore_point {
		off = io::tell(lex.in)?,
		state = *lex,
	};
};

// Restores a lexer to a state previously recorded with [[save]].
export fn restore(lex: *lexer, rp: *restore_point) (void | io::error) = {
	io::seek(lex.in, rp.off, io::whence::SET)?;
	*lex = rp.state;
};
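
Here is a rough sketch of how the instrumentation might put these together – the hook dispatcher and run_hooks are illustrative names, not the actual hare-update internals:

// Hypothetical sketch of a hook dispatcher such as on(); run_hooks is made up.
fn on(
	lexer: *lex::lexer,
	nt: nonterminal,
	data: nullable *opaque,
) (void | parse::error) = {
	// Remember where the parser left off (assumes a seekable input).
	let rp = lex::save(lexer)!;
	// Rules may consume tokens while pattern matching, so rewind the
	// lexer once the hooks have run and parsing is about to resume.
	defer lex::restore(lexer, &rp)!;
	return run_hooks(lexer, nt, data);
};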

Following these changes, and the support code necessary for them, the parser infrastructure is ready for use in hare-update. Some of these changes are brought back to the standard library for reuse, but most of them are quite traumatic and necessitate a permanent fork – for instance, it does not make sense for the stdlib parser to tolerate multiple language versions.

To simplify the process for future releases, rather than start this process anew from the latest stdlib parser, we’ll “fork the fork” and maintain it in parallel, backporting whatever language changes are required. The release process for hare-update thus becomes:

cp vNEXT/ v0_25_3/

This has been simplified for illustrative purposes – a bit more work will be required in practice but this captures the spirit of it. Some imports have to be rewritten, and a glue module – which abstracts the various parser versions somewhat so that hare-update can support many different language version pairs – also has to be updated.

Behind the rules

There are some additional, more intrusive changes to the parser infrastructure which were added to the forked parsers: namely, support for the hare-update DSL. The DSL is a superset of the Hare grammar and so the forked parser was taught about these new features to provide a convenient means of parsing this DSL. The tool which actually ingests the DSL and generates the corresponding code is relatively unremarkable as a result (and this blog post is long enough already), but it lives at cmd/hare-update-genrules in the hare-update source tree if you’d like a closer look.

Instead of going over this tool in detail, I’d like to focus on what the generated code does. Let’s look at a very simple rule:

@rule@("errors::nomem has been removed") :: "identifier" {
	@match@ { errors::nomem };

	const edit = @edit@ {
		@replace@($.start, $.end, "nomem");
	};

	@present@(edit, "Replace with nomem built-in");
};

Here’s the generated code:

fn rule_1_exec(
	lex: *lex::lexer,
	data: nullable *opaque,
	user: nullable *opaque,
) (void | parse::error) = {
	const ctx = rules::getcontext(user);
	const __location = lex::mkloc(lex);
	let __captures = rules::captures { ... };
	if (!rules::match_pattern(ctx, &__captures,
		"errors::nomem",
	)?) {
		return;
	};
	defer rules::captures_finish(v0_next.glue, &__captures);

	const edit = {
		let __eg = &rules::editgroup { rule = &rule_1, ... };
		rules::edit_replace(__eg, __captures.start, __captures.end, "nomem");
		yield __eg;
	};

	if (rules::present(ctx, edit, "Replace with nomem built-in")?) {
		rules::merge_edits(ctx, edit);
	};
};

There are several points of interest that I would like to study closer, to better explain how we write these rules, and how they work: how patterns are matched, how edits are designed, the flexibility of this system to apply arbitrarily complex rules and suggested edits, and how the user’s approved edits are ultimately applied to their code.

Pattern matching against Hare source code

We’ll start with the pattern matching implementation. The patterns have the following properties:

  1. A list of tokens, which are matched directly with the input.
  2. Nonterminal captures, which can parse and capture any nonterminal (e.g. an expression) at an arbitrary point in the pattern.
  3. Pseudo-nonterminals, like “balanced” and “location”, which were explained earlier.

The input is the lexer (and its current state) and a pattern string (a superset of Hare’s grammar), and the output is a boolean – affirming that a match was found – and a list of captures. To implement the match, we fire up a second lexer to process the pattern tokens and start advancing both lexers one token at a time.

Forgive the indirection through the “glue” abstraction in the following code – the match implementation is designed to be ignorant of which Hare version’s grammar it’s interpreting, which makes it a bit more difficult to read. If you see something like “glue.lex_lex”, it serves the same purpose as calling hare::lex::lex, doing so indirectly in order to call lexer implementations compatible with different Hare versions.

The core logic is as follows:

fn _match_pattern(
	ctx: *context,
	vars: *captures,
	pat: str,
) (bool | common::error) = {
	//
	// ...setup code omitted...
	//
	for (true) {
		let ref_tok = glue.lex_lex(ref)!;
		switch (ref_tok.0) {
		case ltok::EOF =>
			break;
		case ltok::DOLLAR =>
			let var = parse_variable(glue, ref);
			var.start = glue.lex_mkloc(lex);

			if (var.name == "*") {
				ref_tok = glue.lex_lex(ref)!;
				assert(ref_tok.0 != ltok::EOF);
				scan_until(ctx, lex, ref_tok);
				continue;
			};

			match (glue.parse_nonterminal(lex, var.kind)) {
			case let data: nullable *opaque =>
				var.data = data;
				var.end = glue.lex_mkloc(lex);
			case common::error =>
				free(var.name);
				return false;
			};

			var.text = capture_gettext(ctx, &var);
			append(vars.vars, var)!;
			continue;
		case => void;
		};

		const tok = glue.lex_lex(lex)?;
		if (ref_tok.0 != tok.0) {
			glue.lex_unlex(lex, tok);
			return false;
		};
		switch (ref_tok.0) {
		case ltok::NAME =>
			const ref_name = ref_tok.1 as str;
			const name = tok.1 as str;
			if (name != ref_name) {
				glue.lex_unlex(lex, tok);
				return false;
			};
		case => void;
		};
	};
	//
	// ...cleanup code omitted...
	//
	return true;
};

In essence, we scan the “reference” lexer – ref, whose input is the pattern being matched – and look for special tokens, such as “$” to indicate a capture. If we don’t see any special tokens, we compare the latest reference token against the subject lexer – lex, whose input is the file hare-update is analyzing.

Captures store the lex::location of the start and end of the capture, which includes the line and column number and an offset from the start of the file, as well as the text of the capture, which is borrowed from an in-memory copy of the input file.

Applying accepted edits to your code

Moving on – once we’ve collected these variables, how do we make use of them to propose the necessary edits? Consider the following rule:

@rule@("time::unix has been deprecated") :: "call-expression" {
	const lvalue = data: *ast::expr;
	if (!match_access(lvalue, "time::unix")) {
		return;
	};

	@match@ { (${obj:"expression"}) };

	const edit = @edit@ {
		@replace@(lvalue.start, $.end, $obj.text);
		@append@($obj.end, ".sec");
	};
	@present@(edit, "Replace with time::instant.sec");
};

From the rule author’s point of view, an expression like $var is used to access a variable captured from the match. $obj expands into rules::getvar(&__captures, "obj"), which returns a *capture. This struct has a start and end lex::location and a text field. $ alone is shorthand for referencing the __captures object that stores the result of the entire match, usually to access $.start or $.end and retrieve the locations of the start and end of the entire match. Capturing a nonterminal also generally parses it and stores its AST node in capture.data, in case this is useful to the rule author; if you capture a ${:"call-expression"}, for example, you could access the list of arguments via the AST node.
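
Putting that together, the capture type looks roughly like this – a paraphrase of the fields described above and used by the match code, not necessarily the literal definition:

export type capture = struct {
	name: str,
	start: lex::location,
	end: lex::location,
	// Borrowed from the in-memory copy of the input file
	text: str,
	// AST node for captured nonterminals, if any
	data: nullable *opaque,
};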

We use these captures to help us prepare “edit groups”. Each edit in a group has the following type:

// An edit to a text file.
export type edit = struct {
	off: io::off,
	rem: size,
	ins: str,
};

This includes the file offset at which the edit applies, some number of bytes to remove at this offset, and some text to insert. The @edit@ macro outputs a compound expression which creates an edit group to store these edits, like so:

const edit = {
	let __eg = &rules::editgroup { rule = &rule_4, ... };
	// ...
	yield __eg;
};

The @insert@, @delete@, @append@, and @replace@ macros are convenience macros which create edits that perform the operation in question and add it to the current edit group. The @replace@ macro in this time::unix rule expands to the following:

rules::edit_replace(__eg, lvalue.start, __captures.end, getvar(&__captures, "obj").text);

And rules::edit_replace is simple enough:

// Replace the text at the given location.
export fn edit_replace(
	group: *editgroup,
	start: location,
	end: location,
	text: str,
) void = {
	assert(end.off > start.off);
	append(group.edits, edit {
		off = start.off,
		rem = (end.off - start.off): size,
		ins = strings::dup(text)!,
	})!;
};
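
So, for the errors::nomem rule shown earlier, if the matched identifier happened to start at byte offset 42 (an offset made up for illustration), the resulting edit would be equivalent to:

edit {
	off = 42,	// start of "errors::nomem" in the file
	rem = 13,	// len("errors::nomem")
	ins = "nomem",	// the replacement text
}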

After being @present@ed to the user for approval, the edits in the edit group are appended to a list of accepted edits for this file, which are applied when the file has been completely processed.

Arbitrarily complex support code

In addition to the relatively powerful support provided by the rules engine and its DSL, the fact that this DSL is a superset of Hare lends it even more flexibility. You can just write arbitrary Hare code at any point in your rules. The time::unix rule that we’ve been studying, for example, calls this match_access function, which is ultimately just plain Hare code:

fn match_access(expr: *ast::expr, ident: str) bool = {
	const access = match (expr.expr) {
	case let expr: ast::access_expr =>
		yield expr;
	case =>
		return false;
	};

	const value = match (access) {
	case let ident: ast::access_identifier =>
		yield ident;
	case =>
		return false;
	};

	const ident = parse::identstr(ident)!;
	defer ast::ident_free(ident);
	return ast::ident_eq(ident, value);
};

This accepts an expression and an identifier and determines whether the expression is an access expression which specifies that identifier. Which is to say: foo::bar() and foo[10].bar() and (obj: *fn() void)() are all valid function call expressions, so if we want to test that the lvalue of a function call is foo::bar (or time::unix, as it were), this helper disambiguates between all of these options for us.

Here’s another example where supplementing rules with arbitrary Hare code can be more efficient – consider the following rule which handles the relocation of several dozen symbols from one standard library module to another:

// Sorted list of all standard library symbols from time::chrono which were moved to
// time::date in this release.
const time_chrono_moved_idents: [](str, str) = [
	("chrono::LOCAL", "date::LOCAL"),
	("chrono::TAI", "date::TAI"),
	("chrono::UTC", "date::UTC"),
	// ...continues...
];

// Same as the above, with the identifier strings parsed into ast::ident objects
let time_chrono_moved: [](ast::ident, str) = [];

// Initialization code to do that parsing
@init fn init() void = {
	for (let id .. time_chrono_moved_idents) {
		const (id, replacement) = id;
		const id = parse::identstr(id)!;
		append(time_chrono_moved, (id, replacement))!;
	};
	assert(sort::sorted(time_chrono_moved,
			size((ast::ident, str)),
			&id_replacement_cmp));
};

@rule@("Some members time::chrono have been moved to time::date") :: "identifier" {
	const start = lex::mkloc(lex);
	const id = parse::ident(lex)?;
	const end = lex::mkloc(lex);

	const key = (id, "");
	const replacement = match (sort::search(
		time_chrono_moved, size((ast::ident, str)),
		&id, &ident_cmp)) {
	case let i: size =>
		yield time_chrono_moved[i].1;
	case void => return;
	};

	const replace = @edit@ {
		ensure_import(ctx, __eg, ["time", "date"]);
		@replace@(start, end, replacement);
	};
	@present@(replace, "Rename this symbol");
};

The last example I want to draw your attention to is another helper used in this rule – users will have to import the time::date stdlib module in order for the rule to work correctly, hence the ensure_import(ctx, __eg, ["time", "date"]) line.

This ensure_import function is itself interesting, in part because it has to process and edit a part of the file distant from the subject of the rule. Let’s take a look at it in detail.

// List of ast::imports in the current file being parsed
let imports: []ast::import = [];
let imports_sorted = false;

// Register a parser hook when parsing imports to populate those variables
@init fn import_hook() void = {
	parse::register_hook(nonterminal::IMPORTS, &on_imports, null);
};

fn on_imports(
	lex: *lex::lexer,
	data: nullable *opaque,
	user: nullable *opaque,
) (void | parse::error) = {
	// Free imports stored from the last file we processed (it's a global, sorry)
	ast::imports_finish(imports);
	// Parse the current file's imports
	imports = parse::imports(lex)?;
	// If the user keeps their imports sorted, we shouldn't fuck it up for them.
	// But if they can't be bothered then we'll just add new imports wherever.
	imports_sorted = sort::sorted(imports, size(ast::import), &import_cmp);
};

// Ensure that the current file imports a given module, and add the necessary
// edit to an edit group if not.
fn ensure_import(
	ctx: *rules::context,
	eg: *rules::editgroup,
	ns: ast::ident,
) void = {
	for (let import .. imports) {
		if (ast::ident_eq(import.ident, ns)) {
			// This file already imports this module
			return;
		};
	};

	// Add the necessary edit to import the module
	const new = unparse::identstr(ns);
	defer free(new);
	const import = fmt::asprintf("use {};\n", new)!;
	defer free(import);

	let new_import = ast::import {
		ident = ns,
		bindings = void,
		...
	};

	let insert_before = 0z;
	if (imports_sorted) {
		insert_before = sort::lbisect(imports,
			size(ast::import), &new_import, &import_cmp);
	};

	const loc = imports[insert_before].start;
	rules::edit_insert(eg, loc, import);

	new_import.start = loc;
	new_import.end = loc; // XXX: this isn't correct, in case that matters later

	// Register a merge hook to add the module to the list of imported modules if
	// and when the user accepts the proposed edit
	rules::edit_onmerge(eg, &merge_import, alloc(new_import)!);
};

fn merge_import(eg: *rules::editgroup, user: nullable *opaque) void = {
	const import = user: *ast::import;
	append(imports, *import)!;
};

Note a minor detail here – rules::edit_onmerge allows you to provide a callback that runs once the user accepts a proposed edit. In this case we use the callback to ensure that we don’t add the same import again in a later edit. This needs to be done asynchronously because the edit group will be presented for the user’s approval, possibly among other solutions, and they may or may not accept it.

Pretty cool, right?

Applying the changes

Finally, after processing all of the rules and collecting the user’s desired edits, applying the list of approved changes is straightforward:

// Applies an edit to a [[document]]'s internal buffer.
export fn apply(doc: *document, edit: *edit) void = {
	if (edit.rem > 0) {
		const rem = edit.rem: io::off;
		const start = edit.off + doc.adjust;
		const end = edit.off + rem + doc.adjust;
		delete(doc.buffer[start..end]);
	};

	if (edit.ins != "") {
		const start = edit.off + doc.adjust;
		insert(doc.buffer[start], strings::toutf8(edit.ins)...)!;
	};

	doc.adjust -= edit.rem: io::off;
	doc.adjust += len(edit.ins): io::off;
};

// ...elsewhere...
edits_sort(ctx.edits);

for (let edit &.. doc.edits) {
	rules::apply(&doc, edit);
	nedit += 1;
};

The edits are sorted by offset, in ascending order, and applied in that order. At each edit, the number of bytes added and removed is accumulated into an adjustment (doc.adjust) which is applied to the offsets of subsequent edits. Because the edits are sorted, this adjustment always tracks the total number of bytes which were added or removed prior to the current edit, so we know exactly where each edit should apply to account for the earlier modifications. For example, if an earlier edit removed 13 bytes and inserted 5, a later edit recorded at offset 200 in the original file lands at offset 192 in the modified buffer.

hare-update summarized

So, to sum this all up: hare-update is a tool you can expect to be available to assist you in updating your code after some breaking changes ship in the next Hare release, and subsequent releases thereafter.

It is a bit silly that, for all of the fuss we’ve made over making a language which promises to remain stable for as long as a century, we would find a tool like this useful. Of course, this unintuitive outcome is explicable if you consider that, in order to make a language with a chance of being good for such a long time, we have to find some way of addressing its shortcomings and mistakes prior to the feature freeze at 1.0 – and balance that against the need to actually use the language for interesting tasks before 1.0, so that we understand what it’s good at and how it needs to be improved.

In any case, this tool will be useful more broadly, as there are many cases where Hare users will have to deal with breaking changes even in the best of circumstances: the standard library will not offer an indefinitely stable cryptography API, for example, deprecating and removing insecure algorithms as they become obsolete. Moreover, the downstream ecosystem of third-party tools and libraries is of course free to adopt whatever stability policy (or instability policy, perhaps) it sees fit, and if this tool can generalize to support those projects as well, it could be making the lives of Hare users easier for a long time yet.

I hope that this tool will be powerful, easy to use, and, crucially, easy to maintain. I wanted to make the authoring of new rules easy, flexible, and fun, so that contributors who propose and implement breaking changes to Hare will be encouraged to add rules to ease the process of dealing with those changes later.

Enjoy!