Upcoming changes to Hare's event loop library July 30, 2025 by Drew DeVault
hare-ev is an important Hare library, similar to libuv, which provides an event loop for Hare programs; most Hare programs that perform asynchronous I/O depend on it for that purpose. I’ve been working on some design improvements over the past couple of weeks which will introduce major breaking changes, but also bring many important features – I’d like to share these with you now.
Highlights include:
- More direct control over ev’s memory management
- Support for multiple uniform I/O requests in-flight at once
- Support for parallelization of certain non-uniform I/O requests, e.g. send and recv
- Idempotency of request cancellation and reuse of request objects
- Improved support for high-level I/O requests, e.g. DNS queries or HTTP requests
To assist you in migrating your code to these changes, and to introduce the new features, let’s go over them in detail now.
Managing request lifetimes
The basic design principle of an I/O operation with hare-ev now entails the following steps:
- Prepare a request object for a given file, optionally setting a user data pointer
- Submit the request to the event loop
- Handle the results once completed
Each request type is a subtype of ev::req, and there are different request types for different kinds of I/O, such as ev::ioreq for reads and writes, ev::sockreq for connect or accept, and ev::timer for timer events. The user is now responsible for allocating these and managing their lifetimes – you can allocate them as globals, on the heap, embedded in your data structures, or on the stack, provided that the stack frame’s lifetime is at least as long as the I/O operation requires. Moreover, after a request is completed or cancelled, you can reuse the request object for another request, often skipping step 1 if you want to use the same file and user data pointer.
Here’s an illustrative example that reads from stdin and writes it to stdout:
use ev;
use fmt;
use io;
use os;
type state = struct {
buf: [os::BUFSZ]u8,
stdin: ev::ioreq,
stdout: ev::ioreq,
};
export fn main() void = {
const loop = ev::newloop()!;
defer ev::finish(&loop);
// Register stdin and stdout with the event loop
const stdin = ev::register(&loop, os::stdin_file)!;
defer ev::unregister(stdin);
const stdout = ev::register(&loop, os::stdout_file)!;
defer ev::unregister(stdout);
// The lifetime of the buffer and the two I/O requests is the lifetime
// of the stack frame of "main".
const state = state {
buf = [0...],
stdin = ev::ioreq_init(stdin),
stdout = ev::ioreq_init(stdout),
};
// This makes the "user" parameter of the callbacks refer to &state
ev::req_setuser(&state.stdin, &state);
ev::req_setuser(&state.stdout, &state);
// Submit the read request for stdin
ev::read(&state.stdin, state.buf, &on_read_complete);
// Run until no further I/O is pending
// In this case that will be upon EOF from stdin, or if an error occurs
ev::run(&loop)!;
};
fn on_read_complete(
req: *ev::ioreq,
result: (size | io::EOF | io::error),
user: nullable *opaque,
) void = {
let state = user: *state;
const n = match (result) {
case let n: size =>
yield n;
case io::EOF =>
return;
case let err: io::error =>
fmt::errorln("Read error:", io::strerror(err))!;
return;
};
ev::write(&state.stdout, state.buf[..n], &on_write_complete);
};
fn on_write_complete(
req: *ev::ioreq,
result: (size | io::error),
user: nullable *opaque,
) void = {
let state = user: *state;
match (result) {
case let err: io::error =>
fmt::errorln("Write error:", io::strerror(err))!;
return;
case => void;
};
ev::read(&state.stdin, state.buf, &on_read_complete);
};
Note that each I/O request is reused after it is completed, and that the lifetimes of the I/O requests are tied to the lifetime of main’s stack frame. For I/O requests that need to outlive their stack frame, other strategies are called for, such as heap allocation.
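For instance, a request that must outlive the function which submits it can be heap-allocated together with its state, and freed from its completion callback. Here is a minimal sketch using only the ev calls shown above (and assuming alloc can fail with nomem, as in recent Hare):

```hare
use ev;
use io;
use os;

// State whose lifetime is not tied to any stack frame: the request is
// embedded in a heap allocation and released from the callback.
type conn = struct {
	buf: [os::BUFSZ]u8,
	req: ev::ioreq,
};

fn start_read(file: *ev::file) void = {
	let c = alloc(conn {
		buf = [0...],
		req = ev::ioreq_init(file),
	})!;
	ev::req_setuser(&c.req, c);
	ev::read(&c.req, c.buf, &on_read);
};

fn on_read(
	req: *ev::ioreq,
	result: (size | io::EOF | io::error),
	user: nullable *opaque,
) void = {
	let c = user: *conn;
	match (result) {
	case let n: size =>
		// Process c.buf[..n], then reuse the request object
		ev::read(&c.req, c.buf, &on_read);
	case io::EOF =>
		free(c); // Done with this request; release its storage
	case io::error =>
		free(c);
	};
};
```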
Cancellation idempotency
Another improvement in this regard is that request cancellation is now idempotent: you may cancel a request several times, including after it has completed, without problems. This is possible because the event loop now holds a reference to the request object, which it uses to reach in and clean up the cancellation pointers. This old hack:
fn some_callback(...) void = {
some_request = ev::req { ... }; // Clear this out so I don't footgun myself
// ...
};
Is no more.
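Cleanup paths can therefore cancel unconditionally. A hedged sketch – ev::req_cancel is an assumed name for the cancellation function, so consult the ev haredocs for the real one:

```hare
use ev;

fn teardown(req: *ev::ioreq) void = {
	// Safe whether the request is pending, already completed, or already
	// cancelled: the loop holds a reference to the request and cleans up
	// the cancellation pointers itself.
	// (ev::req_cancel is an assumed name, for illustration only.)
	ev::req_cancel(req);
	ev::req_cancel(req); // A second cancel is a harmless no-op
};
```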
Request queues and parallel requests
Each ev::req, upon submission, is added to a linked list that forms a queue (FIFO). When a file, socket, etc, is ready for an I/O operation, the oldest request is serviced first. In this manner you can, for example, have several successive read or write requests queued up on a socket, or have multiple consumers of a socket accepting new connections in a round-robin fashion.
There are separate queues for read-like, write-like, and “other” operations. Up to one read-like and one write-like operation can be in-flight at once, and “other” operations require exclusive use of the ev::file in question.
| Read-like | Write-like | Other |
|---|---|---|
| read | write | connect |
| readv | writev | timer |
| recv | send | wait |
| recvfrom | sendto | signal |
| readable | writable | |
| accept | | |
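Queuing requires no new API: submit several requests against the same file, and they are serviced in order. Here is a sketch of two writes queued back-to-back on one socket, using the same calls as the examples above:

```hare
use ev;
use io;
use strings;

type wstate = struct {
	a: ev::ioreq,
	b: ev::ioreq,
};

// Two write-like requests submitted against the same socket: both join
// the write queue and are serviced oldest-first, so "first" is written
// before "second".
fn queue_writes(sock: *ev::file, st: *wstate) void = {
	st.a = ev::ioreq_init(sock);
	st.b = ev::ioreq_init(sock);
	ev::write(&st.a, strings::toutf8("first\r\n"), &on_wrote);
	ev::write(&st.b, strings::toutf8("second\r\n"), &on_wrote);
};

fn on_wrote(
	req: *ev::ioreq,
	r: (size | io::error),
	user: nullable *opaque,
) void = void; // Handle completion or errors here
```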
High-level I/O operations
High-level operations which require multiple underlying I/O operations, such as DNS queries, call for a similar pattern of managing request lifetimes downstream. The new approach of subtyping ev::req allows high-level operations to offload the burden of managing where their state is stored to the user, often reducing memory allocations as a consequence. ev::dns and ev::dial have been updated accordingly, and a similar approach (touched on again below) is likely to work well for third-party modules too.
Here’s an example of how this works from the user perspective, using ev::dns to perform an asynchronous DNS query to look up the A records for example.org:
use ev;
use evdns = ev::dns;
use fmt;
use net::ip;
use net::dns;
export fn main() void = {
const loop = ev::newloop()!;
defer ev::finish(&loop);
const req = evdns::req_init(&loop)!;
defer evdns::req_finish(&req);
const msg = dns::message {
header = dns::header {
id = 1337,
op = dns::op {
qr = dns::qr::QUERY,
opcode = dns::opcode::QUERY,
rd = true,
...
},
qdcount = 1,
...
},
questions = [
dns::question {
qname = ["example", "org"],
qtype = dns::qtype::A,
qclass = dns::qclass::IN,
},
],
...
};
evdns::query(&req, &msg, &on_response)!;
ev::run(&loop)!;
};
fn on_response(
req: *evdns::req,
r: (*dns::message | dns::error | nomem),
user: nullable *opaque,
) void = {
const msg = match (r) {
case let msg: *dns::message =>
yield msg;
case let err: dns::error =>
fmt::errorln("Error:", dns::strerror(err))!;
return;
case nomem =>
fmt::errorln("Error: out of memory")!;
return;
};
for (let answer &.. msg.answers) {
const rdata = answer.rdata as dns::a;
fmt::println(ip::string(rdata))!;
};
};
Here’s an example of using the updated ev::dial interface to request http://example.org:
use ev;
use ev::dial;
use fmt;
use io;
use net::uri;
use os;
use strings;
type state = struct {
sock: nullable *ev::file,
ioreq: ev::ioreq,
rbuf: [os::BUFSZ]u8,
};
export fn main() void = {
const loop = ev::newloop()!;
defer ev::finish(&loop);
// Set aside some state in main's stack frame lifetime
let state = state {
sock = null,
ioreq = ev::IOREQ_INIT,
rbuf = [0...],
};
// Disconnect on exit, if we successfully connected
defer match (state.sock) {
case let file: *ev::file =>
ev::close(file);
case null => void;
};
// Dial http://example.org (see also [[ev::dial]])
const uri = uri::parse("http://example.org")!;
defer uri::finish(&uri);
const dialreq = dial::dialreq_init(&loop, &state);
dial::dial_uri(&dialreq, "tcp", &uri, &on_dialed)!;
ev::run(&loop)!;
};
fn on_dialed(
req: *dial::dialreq,
r: (*ev::file | dial::error),
user: nullable *opaque,
) void = {
let state = user: *state;
const socket = match (r) {
case let file: *ev::file =>
state.sock = file;
yield file;
case let err: dial::error =>
fmt::errorln("Error dialing host:", dial::strerror(err))!;
return;
};
// Write out an HTTP request
const request = strings::toutf8("GET / HTTP/1.1\r\n"
"Host: example.org\r\n"
"Connection: close\r\n\r\n");
state.ioreq = ev::ioreq_init(socket, state);
ev::write(&state.ioreq, request, &on_wrote);
};
fn on_wrote(
req: *ev::ioreq,
r: (size | io::error),
user: nullable *opaque,
) void = {
let state = user: *state;
match (r) {
case size => void;
case let err: io::error =>
fmt::errorln("Error writing request:", io::strerror(err))!;
return;
};
ev::read(&state.ioreq, state.rbuf, &on_read);
};
fn on_read(
req: *ev::ioreq,
r: (size | io::EOF | io::error),
user: nullable *opaque,
) void = {
let state = user: *state;
match (r) {
case let n: size =>
io::write(os::stdout, state.rbuf[..n])!;
case io::EOF =>
return;
case let err: io::error =>
fmt::errorln("Error reading response:", io::strerror(err))!;
return;
};
ev::read(&state.ioreq, state.rbuf, &on_read);
};
It is possible for users to make their own high-level requests like ev::dns and ev::dial do, which encapsulate several I/O requests in a single, higher-level operation, but the details are beyond the scope of today’s blog post. Check out the haredocs for ev::req and the implementations of ev::dns and ev::dial to learn more.
Running the loop until the workload is complete
Another new addition to hare-ev, inspired by libuv, is the ability to run the event loop until all pending I/O requests are completed. This saves you from having to figure out when to call ev::stop, keep an “exit” variable around somewhere, or track I/O submissions from third-party libraries. With ev::run, simply stop submitting I/O, and the event loop will exit once everything is done.
Miscellaneous changes
Some of the other user-facing changes include:
- ev::write and ev::send respectively write or send the entire buffer, perhaps across several syscalls, before running their callback, so callbacks no longer need to handle partial writes.
- Signals handled via ev::signal now handle only one signal per request, requiring you to submit a new request during the signal handler callback if you want to continue handling that signal.
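Under the one-shot semantics, a persistent signal handler resubmits itself from its own callback. The request type and ev::signal signature in this sketch are assumptions for illustration; check the haredocs for the actual API:

```hare
use ev;
use unix::signal;

// Assumed callback shape and ev::signal signature, for illustration only.
fn on_sigint(
	req: *ev::req,
	sig: signal::sig,
	user: nullable *opaque,
) void = {
	// Handle the signal...

	// Each request handles exactly one signal; resubmit to keep
	// receiving SIGINT.
	ev::signal(req, signal::sig::INT, &on_sigint);
};
```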
When can I play with it?
These changes will ship with the next release of hare-ev, whose releases track upstream Hare releases. The next release is likely to ship in Q4 2025 (0.25.3). Until then, if you track Hare’s master branch, you’re welcome to grab the latest master branch of hare-ev as well and start porting your programs. Send feedback to hare-users or ping me on IRC if you do!