Resource Management

Many communication protocols operate by sending individual messages or requests over a single, fifo communication channel. If processing a message takes a long time, it makes little sense to wait for it to be fully processed before accessing the next message from the communication channel. Instead, one usually moves the message elsewhere in memory, freeing up the communication channel and allowing the next message to be popped almost immediately. The actual, slow message processing can then be done by another thread or via non-blocking IO.
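
To make this concrete, here is a minimal sketch of such a read loop in Rust, using a worker thread and an in-memory queue; the Message type and its two variants are hypothetical stand-ins for a real protocol's messages.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type; a real protocol would decode these from a socket.
enum Message {
    Quick(u8),     // can be processed immediately, in negligible time
    Slow(Vec<u8>), // expensive to process
}

fn main() {
    let (to_worker, from_reader) = mpsc::channel::<Vec<u8>>();

    // The slow processing happens on its own thread...
    let worker = thread::spawn(move || {
        for payload in from_reader {
            // ...so the read loop below never waits for it.
            println!("slowly processing {} bytes", payload.len());
        }
    });

    // Stand-in for messages arriving over the fifo communication channel.
    let incoming = vec![Message::Quick(1), Message::Slow(vec![0; 1024]), Message::Quick(2)];
    for msg in incoming {
        match msg {
            // Quick messages are handled inline.
            Message::Quick(n) => println!("quick message {n}"),
            // Slow messages are merely moved elsewhere, freeing the channel.
            Message::Slow(payload) => to_worker.send(payload).unwrap(),
        }
    }

    drop(to_worker); // close the queue so the worker loop terminates
    worker.join().unwrap();
}
```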

Unfortunately, memory is finite, so there has to be a limit on how many messages can be copied elsewhere. Once that limit is reached, the process can either crash (not good), or it must wait for messages to finish processing, essentially leaving us with the same problem as we started with.

In some sense, there is no way around that. However, we would prefer it if the communication partner knew when this was about to happen; it could then throttle its more expensive messages, so that its cheaper messages could still be processed without delay. This is crucial for implementing logically independent communication channels on top of a single physical communication channel.

Here, we describe a simple way of maintaining independent message buffers for each logical communication channel. When receiving a message that will take a long time to process, a peer moves it into the message buffer for that kind of message. This keeps the main communication channel responsive. If a message buffer has insufficient capacity for new messages, the other peer is informed in advance, so that it does not send messages that cannot be buffered.

Requirements

To describe our approach more precisely, we need to introduce some terminology. One peer — the client — sends messages to the other peer — the server. Messages belong to logical communication channels, and the server maintains a fifo buffer for each logical channel. Some messages do not belong to any logical channel — those are messages that can always be processed immediately and in negligible time, so they are never moved to any buffer. In particular, our approach adds its own message kinds for exchanging metadata about the buffers. None of these control messages belong to any channel — if buffer management itself required buffer capacity, things could quickly go wrong.

A sequence of boxes in four different colours on the left, the result of dividing them up by colour on the right. The one yellow box is missing on the right side.
Messages being moved from a shared fifo input channel to the buffers of their respective logical channels. The yellow message is a control message that does not get buffered at all.

We now present some properties we would like our solution to fulfil.

The server should be able to inform the client about its remaining buffer capacities for its logical channels, and keep these values up to date as it buffers messages and processes them (thus freeing buffer space). The client should be able to rely on this information: if the server promises buffer capacity, it must deliver.

The server should be able to resize its buffers to cope with competing resource demands. Increasing buffer capacity is unproblematic, but we will see that decreasing buffer capacity requires cooperation with the client in some situations.

Both the client and the server should be able to voluntarily impose and communicate limits on how many bytes they will still send or receive over a logical channel. Once that limit reaches zero, the other peer can consider the channel to be closed.

Finally, the client should be able to optimistically send messages even though their corresponding buffer might be full — by the time the messages arrive at the server, the buffer space might have become available after all. The server must be able to reject and drop such optimistically transmitted messages, however. When it does, it must inform the client, because the client always needs to maintain accurate (if delayed) information about all buffer states.

Solution Overview

To satisfy these requirements, our solution builds on the concept of guarantees. The server sends guarantees of available buffer space to the client per logical channel; the client tracks its available guarantees and knows that all of its messages that do not exceed the guarantees will be buffered.

The first of several diagrams depicting the state kept by a server and a client. States are arranged in two rows — one for the server, and one for the client — with time progressing from left to right. Initially, the server has a buffer with six remaining slots. The server has already given guarantees for four of these slots; two guarantees remain unissued. The client has a budget of four remaining guarantees to work with. After sending a three-byte message, the remaining guarantees of the client decrease to one. The server buffers the received message; its number of issuable guarantees remains unchanged. (This diagram shows the statekeeping for only a single logical channel; the full session state consists of an independent copy for every logical channel a protocol defines.)
Statekeeping for server and client when the client sends a message. The server tracks for how many unoccupied buffer slots it has not yet issued guarantees, the client tracks how many guarantees it has available. Sending a message reduces the guarantees available to the client.
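
As a minimal sketch of the client half of this statekeeping (in Rust; all names are ours, not part of any protocol), a single counter per logical channel suffices:

```rust
/// Client-side state for one logical channel (a sketch; names are ours).
struct ClientChannel {
    available_guarantees: u64,
}

impl ClientChannel {
    /// Send a message of `len` bytes, consuming guarantees. Returns false
    /// if the guarantees do not suffice (optimistic sending, which relaxes
    /// this check, is discussed later in the text).
    fn try_send(&mut self, len: u64) -> bool {
        if len <= self.available_guarantees {
            self.available_guarantees -= len;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut client = ClientChannel { available_guarantees: 4 };
    assert!(client.try_send(3)); // the three-byte message from the figure
    assert_eq!(client.available_guarantees, 1);
}
```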

When the server increases a buffer’s capacity, it gives that many guarantees (measured in bytes) for the corresponding logical channel to the client. When establishing the connection, the client has no guarantees, and the server typically starts by sending guarantees equal to its initial buffer capacities. Conceptually, the server begins its operation by increasing its buffer capacities from zero to their actual starting amounts.

A server-client diagram. Initially, the server has zero issuable guarantees, and a buffer of size zero. The client starts with zero remaining guarantees. In the next step, the server increases its buffer size to five, leaving it with five unissued guarantees. Then, it issues those five guarantees. In the resulting state, the server has zero issuable guarantees left, whereas the available guarantees of the client increase to five.
The server increases its buffer capacity and then issues that many guarantees.

The second way of giving guarantees occurs when the server has processed a buffered message and thus freed up buffer space. It then communicates the amount of buffer space that was freed up, and for which logical channel. The server need not communicate this immediately; it is free to send only the occasional update that aggregates the guarantees stemming from processing several messages on the same logical channel.

A server-client diagram. The server starts out with a buffer containing two empty slots, a message of size two, a message of size one, and a final message of size two. It has zero issuable guarantees, so the client has two remaining guarantees that correspond to the two unused buffer slots. The server processes its rightmost buffered message, leaving it with two more messages, and four empty buffer slots. For the two new open slots, no guarantees have been given yet, so the server’s issuable guarantees increase to two. Next, the server processes another message, increasing the issuable guarantees to three, and leaving only a single, two-byte message in its buffer. Finally, the server sends three guarantees to the client: the issuable guarantees go down to zero, and the client’s available guarantees go up from two to five.
The server processes a message, it later processes another message, and then decides to issue guarantees for the freed buffer slots.
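
Both ways of creating issuable guarantees, as well as the lazy aggregation, can be sketched on the server side as follows (names are ours; a real server would track its buffer contents, not just byte counts):

```rust
/// Server-side state for one logical channel (a sketch; names are ours).
struct ServerChannel {
    /// Free buffer space for which no guarantees have been issued yet.
    issuable: u64,
}

impl ServerChannel {
    /// First way: increase the buffer capacity.
    fn grow_buffer(&mut self, extra: u64) {
        self.issuable += extra;
    }

    /// Second way: finish processing a buffered message of `len` bytes,
    /// freeing up its buffer space.
    fn finish_processing(&mut self, len: u64) {
        self.issuable += len;
    }

    /// Emit an aggregated guarantee message whenever convenient; this need
    /// not happen after every processed message.
    fn flush_guarantees(&mut self) -> u64 {
        std::mem::take(&mut self.issuable)
    }
}

fn main() {
    let mut server = ServerChannel { issuable: 0 };
    server.grow_buffer(5); // the capacity increase from the earlier figure
    assert_eq!(server.flush_guarantees(), 5);
    server.finish_processing(2); // the two processed messages from this figure
    server.finish_processing(1);
    assert_eq!(server.flush_guarantees(), 3); // one aggregated update for both
}
```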

When the server wishes to reduce the capacity of some buffer, it simply processes messages from that buffer without informing the client. This decreases the overall amount of guarantees in the system by the correct amount.

A server-client diagram. The server starts out with a buffer containing two empty slots, and two messages of two bytes each. It has zero issuable guarantees, putting the client at two remaining guarantees. The server then processes a message and decreases its total buffer size by the message’s size. All other state remains unchanged.
Processing a message without issuing guarantees for it allows the server to shrink its buffer.
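
A sketch of this shrinking move, under the same assumptions as the previous snippet: the freed bytes reduce the total capacity instead of becoming issuable.

```rust
/// Sketch of buffer shrinking (names ours).
struct ServerChannel {
    capacity: u64, // total buffer size in bytes
    issuable: u64, // freed space not yet covered by guarantees
}

impl ServerChannel {
    /// Process a buffered message of `len` bytes, but reclaim the freed
    /// space: the total amount of guarantees in the system drops by `len`.
    fn process_and_shrink(&mut self, len: u64) {
        self.capacity -= len;
        // `self.issuable` deliberately stays unchanged.
    }
}

fn main() {
    let mut server = ServerChannel { capacity: 6, issuable: 0 };
    server.process_and_shrink(2); // the two-byte message from the figure
    assert_eq!((server.capacity, server.issuable), (4, 0));
}
```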

This technique is only applicable when the server has some buffered messages; it does not allow the server to reduce the capacity of empty buffers. But the server cannot simply decrease the buffer size and then inform the client: while that information is travelling to the client, the client might send messages on this logical channel, fully expecting them to be buffered by the server.

To solve this problem, we introduce a mechanism for the client to absolve the server of some absolute amount of its unused guarantees on some logical channel, and we add a way for the server to ask for such absolution. Asking for absolution takes the form of specifying the logical channel and the number of guarantees the server would like the client to keep. Upon receiving this request, the client absolves the server of exactly the amount needed to reach the desired number of guarantees. If the client already has fewer guarantees by the time the request for absolution arrives, the client simply ignores the request.

A server-client diagram. The server starts with seven empty buffer slots and zero issuable guarantees; the client starts with seven remaining guarantees. In the first step, the server asks for absolution down to a number of three remaining guarantees. When the client receives the request, it answers by absolving four guarantees, and reduces its remaining guarantees by four down to three. The server receives the absolution and then shrinks its buffer by four slots down to three.
Buffer downscaling without any concurrency issues: the server asks for absolution, the client grants it.
A server-client diagram. The server starts with nine buffer slots, two of which are occupied by a message. Its issuable guarantees are zero; the client starts with seven remaining guarantees. In the first step, the server asks for absolution down to four remaining guarantees, and concurrently, the client sends a message of size one. Hence, the client’s remaining guarantees are six when the server’s request for absolution arrives. The server receives the one-byte message and buffers it. The client then absolves two guarantees, reducing its remaining guarantees to four. The server receives the absolution and can shrink its buffer by two slots, leaving it with a total buffer capacity of seven (three slots occupied by the two buffered messages, and four unused slots).
Concurrent to the server asking for absolution, the client sends a message. The protocol is designed so that nothing goes wrong.
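
The client-side handling of a request for absolution might look as follows (a sketch; the function and its signature are ours). Ignoring requests that arrive late is what makes the concurrent scenario above work out:

```rust
/// The client's reaction to a Plead for one logical channel. `available`
/// is its counter of remaining guarantees; the return value is the amount
/// to put into an Absolve message, if any.
fn handle_plead(available: &mut u64, target: u64) -> Option<u64> {
    if *available > target {
        let absolved = *available - target;
        *available = target; // absolved guarantees are given up immediately
        Some(absolved)
    } else {
        None // already at or below the target: ignore the request
    }
}

fn main() {
    // The concurrent scenario from the figure: the server pleads for a
    // target of four while a one-byte message is already in flight.
    let mut available = 6; // seven guarantees minus the in-flight byte
    assert_eq!(handle_plead(&mut available, 4), Some(2));
    assert_eq!(available, 4);
}
```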

Next, we consider the ability to close logical channels. More precisely, we consider a generalization of closing: communicating an upper bound on how much a logical channel will still be used.

At any point, the client may communicate an upper bound on how many bytes of messages it will still send to a logical channel. Both peers can track this number and subtract from it whenever the client sends more bytes across the logical channel, or provides an absolution for the channel. The client must not send any messages that would turn the tracked number negative. The client may send new upper bounds over time, but only if they strictly tighten the limit; any bound that would allow the client to send more bytes than promised by a previous bound is forbidden.

Completely analogously, the server may communicate an upper bound on how many more bytes of messages it might still accept over a logical channel. After accepting that many bytes, all future bytes must be dropped. In particular, this bound thus also gives an upper bound on how many more guarantees the server might issue in the future. The server may communicate new upper bounds over time, but only if they strictly tighten the limit.
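
Both kinds of limits follow the same bookkeeping discipline, sketched here (the type and method names are ours):

```rust
/// Sketch of tracking an upper bound on remaining channel usage.
/// `None` means that no bound has been communicated yet.
struct UsageBound(Option<u64>);

impl UsageBound {
    /// Accept a new bound only if it strictly tightens the current one.
    fn tighten(&mut self, new_bound: u64) -> Result<(), &'static str> {
        match self.0 {
            Some(old) if new_bound >= old => Err("bounds may only shrink"),
            _ => {
                self.0 = Some(new_bound);
                Ok(())
            }
        }
    }

    /// Subtract sent (or absolved) bytes from the remaining budget.
    fn charge(&mut self, bytes: u64) -> Result<(), &'static str> {
        match self.0 {
            Some(bound) if bytes > bound => Err("bound exceeded"),
            Some(bound) => {
                self.0 = Some(bound - bytes);
                Ok(())
            }
            None => Ok(()), // no bound communicated so far
        }
    }
}

fn main() {
    let mut bound = UsageBound(None);
    bound.tighten(10).unwrap();
    bound.charge(4).unwrap();           // six bytes of budget remain
    assert!(bound.tighten(7).is_err()); // looser than six: forbidden
    bound.tighten(2).unwrap();          // strictly tighter: fine
}
```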

Taken together, these techniques make for a stable system where the client never overwhelms the buffer capacity of the server. As a final addition, however, we want to allow the client to optimistically send data even though it might have no corresponding guarantees. The server may, after all, have freed up buffer space by the time the optimistically transmitted messages arrive.

A server-client diagram. The server starts with a buffer containing one empty slot, a message of size two, and a message of size four. Its issuable guarantees are one, the client starts with zero remaining guarantees. In the first step, the server processes the four-byte message, leaving it with five empty buffer slots and as many issuable guarantees; the client state remains unchanged. Next, the client optimistically sends a three-byte message, leaving its remaining guarantees at negative three. The server receives and buffers the message, its issuable guarantees remain unchanged at five, despite now having only two free buffer slots. Finally, the server issues five guarantees, leaving it with zero issuable guarantees. The client’s remaining guarantees increase by five to positive two.
The client optimistically sends a message, pushing its available guarantees below zero. The server has buffer space available; it simply buffers the message without taking any special action.
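
On the client side, permitting optimistic sends amounts to making the guarantee counter signed (a sketch; names ours):

```rust
/// Client-side state with optimistic sending: the counter may go negative.
struct ClientChannel {
    available_guarantees: i64,
}

impl ClientChannel {
    fn send_optimistically(&mut self, len: i64) {
        // No check against zero: buffer space may have freed up by the
        // time the message arrives at the server.
        self.available_guarantees -= len;
    }

    fn receive_guarantees(&mut self, amount: i64) {
        self.available_guarantees += amount;
    }
}

fn main() {
    let mut client = ClientChannel { available_guarantees: 0 };
    client.send_optimistically(3); // as in the figure
    assert_eq!(client.available_guarantees, -3);
    client.receive_guarantees(5); // the server had room after all
    assert_eq!(client.available_guarantees, 2);
}
```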

If the server has insufficient buffer capacity when an optimistically transmitted message arrives, the server drops it without any processing. This requires us to add a certain degree of retransmission logic to the protocol, since the client must be informed of the message dropping.

To keep the complexity of the retransmission logic minimal, we adopt a simplistic solution: when the server drops an optimistically transmitted message (it is still not allowed to drop messages for which it had previously guaranteed buffer capacity), it must keep dropping all messages on the same logical channel until it receives an apology message for that logical channel from the client. The server informs the client that it has started to drop messages, at which point the client can send an apology and then resume normal operation.

An extra-wide server-client diagram. The server starts with a buffer containing one empty slot and one message of size six. The issuable guarantees are zero, the client’s remaining guarantees are one. Unlike the previous diagrams, the server has a small emoji to indicate whether it is dropping messages; initially, the emoji is happy. In the first step, the client optimistically sends a message of three bytes, putting it at negative two remaining guarantees. The server cannot buffer the message, so it drops the message. Its state remains unchanged, except for the emoji turning angry. Next, the client sends a message of size one, putting it at negative three remaining guarantees. The server drops the message, its state remains completely unchanged. In the next step, the server sends a message to notify the client of the dropping. The emoji turns slightly less angry, otherwise the server state remains unchanged. Upon receiving the notification, the client sets its remaining guarantees to one. Finally, it sends an apology. Receiving the apology resets the server’s emoji to the initial, happy expression.
When the server receives an optimistic message it cannot buffer, it drops it and all further messages, even any message for which it has sufficient buffer capacity. Only after receiving an apology does it switch state and become able to accept further messages. The client, when notified of message dropping, increments its counter of remaining guarantees by the number of message bytes that got dropped.
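
The server’s dropping behaviour forms a small state machine per logical channel, sketched here (names ours). For brevity, the sketch omits the obligation, discussed below, to issue guarantees for buffered optimistic bytes before announcing:

```rust
/// Whether the server currently accepts or drops messages on a channel.
enum Mood {
    Accepting,
    Dropping, // set until an apology arrives
}

struct ServerChannel {
    free_buffer: u64,
    mood: Mood,
}

enum Outcome {
    Buffered,
    Dropped,            // silently, as part of an ongoing dropping phase
    DroppedAndAnnounce, // caller must now send the dropping notification
}

impl ServerChannel {
    fn receive(&mut self, len: u64) -> Outcome {
        match self.mood {
            // Once dropping, *every* message is dropped, even one that
            // would fit into the buffer.
            Mood::Dropping => Outcome::Dropped,
            Mood::Accepting if len <= self.free_buffer => {
                self.free_buffer -= len;
                Outcome::Buffered
            }
            Mood::Accepting => {
                self.mood = Mood::Dropping;
                Outcome::DroppedAndAnnounce
            }
        }
    }

    fn receive_apology(&mut self) {
        self.mood = Mood::Accepting;
    }
}

fn main() {
    let mut server = ServerChannel { free_buffer: 1, mood: Mood::Accepting };
    assert!(matches!(server.receive(3), Outcome::DroppedAndAnnounce));
    assert!(matches!(server.receive(1), Outcome::Dropped)); // would fit, dropped anyway
    server.receive_apology();
    assert!(matches!(server.receive(1), Outcome::Buffered));
}
```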

This approach comes with two subtleties. First, the server must never buffer partial messages — if it can buffer all but one byte of a message, it must still drop the full message and all subsequent ones until the apology. Second, the server must issue guarantees for all the optimistic message bytes that it did manage to buffer, before informing the client that it has started to drop messages. This is necessary so that the client can reconstruct exactly which of its messages got dropped, and which still made it through.

An extra-wide server-client diagram. The server starts with a happy expression, and a buffer containing four empty slots and one message of size three. The issuable guarantees are three, the client’s remaining guarantees are one. In this diagram, the client also maintains a buffer, initially consisting of six unused slots. In the first step, the client optimistically sends a message of two bytes, putting it at negative one remaining guarantees. The client places the message in its buffer, and tracks that only one byte of the message was optimistic. The server buffers the message, staying happy and remaining with three unissued guarantees. Its buffer now contains two free slots, a message of size two, and a message of size three. Next, the client sends a message of size three, placing a copy in its buffer, and putting it at negative four guarantees. The server has to drop this message, turning angry but otherwise keeping its state. Then, the server issues a single guarantee, putting its issuable guarantees at two. When the client receives this guarantee, it not only increases its remaining guarantees to negative three, but it also removes the two-byte message from its buffer, leaving the buffer with three unused slots and the three-byte message. Next, the server sends a notification that it dropped messages. The client empties its buffer of the three-byte message, and increases its remaining guarantees by that amount back to zero. It then replies with an apology, turning the server happy again. In this final state, the happy server has two issuable guarantees and a buffer containing two unused slots — the client’s message of size two, and the size-three message that occupied the buffer from the very beginning. The client ends the interaction with an empty buffer and zero remaining guarantees.
Before the server announces that it is dropping messages, it issues guarantees for the optimistic message bytes it did manage to buffer. The client maintains a queue of its optimistically transmitted messages, and tracks how many of their bytes it knows to have been buffered by the server. When the client receives guarantees for all bytes of a message, that message can be dequeued.
In this example, the client empties its queue, whereas a realistic implementation would probably keep the queue to (transparently) retransmit all dropped messages before sending new ones. Note further that the server has to issue one guarantee before sending the dropping notification, but it would also have been allowed to issue more guarantees — a realistic server would have issued all three issuable guarantees.
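
One possible bookkeeping for the client-side queue (a sketch; names ours): each entry records how many bytes of a queued message are not yet known to be buffered. Guarantees retire entries from the front, and a dropping notification reveals that everything still queued was dropped:

```rust
use std::collections::VecDeque;

/// Queue of optimistically transmitted messages, oldest first; each entry
/// counts the message bytes not yet covered by received guarantees.
struct OptimisticQueue {
    pending: VecDeque<u64>,
}

impl OptimisticQueue {
    /// Incoming guarantees account for queued bytes from the front;
    /// fully covered messages are dequeued.
    fn receive_guarantees(&mut self, mut amount: u64) {
        while amount > 0 && !self.pending.is_empty() {
            let rest = self.pending[0];
            if rest <= amount {
                amount -= rest;
                self.pending.pop_front(); // known to be buffered by the server
            } else {
                self.pending[0] = rest - amount; // partially covered so far
                amount = 0;
            }
        }
        // Any leftover `amount` is ordinary, non-optimistic capacity.
    }

    /// Everything still queued when the dropping notification arrives was
    /// dropped; a realistic client would retransmit these messages.
    fn receive_announce_dropping(&mut self) -> VecDeque<u64> {
        std::mem::take(&mut self.pending)
    }
}

fn main() {
    // The figure: a two-byte message with one optimistic byte, then a
    // fully optimistic three-byte message.
    let mut queue = OptimisticQueue { pending: VecDeque::from([1, 3]) };
    queue.receive_guarantees(1); // dequeues the first message
    assert_eq!(queue.pending, VecDeque::from([3]));
    assert_eq!(queue.receive_announce_dropping(), VecDeque::from([3]));
}
```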

Message Types

The following pseudo-types summarise the different kinds of control messages that this approach introduces. The parameter C is a type with one value for each logical channel. Each message pertains to exactly one logical channel, specified by its channel field.

struct Guarantee
The server makes a binding promise of amount many bytes of available buffer capacity to the client.

struct Absolve
The client allows the server to reduce its total buffer capacity by amount.

struct Plead
The server asks the client to send an Absolve message such that the client’s remaining guarantees will be target.

struct LimitSending
The client promises to the server an upper bound on the number of bytes of messages that it will send on some logical channel.

struct LimitReceiving
The server promises to the client an upper bound on the number of bytes of messages that it will still receive on some logical channel.

struct AnnounceDropping
The server notifies the client that it has started dropping messages and will continue to do so until it receives an Apologise message. The server must send any outstanding guarantees of the logical channel before sending an AnnounceDropping message.

struct Apologise
The client notifies the server that it can stop dropping messages on this logical channel.
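
Rendered as Rust structs (a sketch: u64 stands in for whatever integer encoding a concrete protocol chooses, the Channel alias for its concrete enumeration of logical channels, and the field names follow the descriptions above):

```rust
/// Placeholder for a protocol's enumeration of logical channels (the
/// parameter C above).
type Channel = u64;

struct Guarantee { amount: u64, channel: Channel }
struct Absolve { amount: u64, channel: Channel }
struct Plead { target: u64, channel: Channel }
struct LimitSending { bound: u64, channel: Channel }
struct LimitReceiving { bound: u64, channel: Channel }
struct AnnounceDropping { channel: Channel }
struct Apologise { channel: Channel }
```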

Data Handles

In this section, we discuss a common special case of resource control: that of maintaining numeric identifiers for shared context between two peers. For example, a peer might send a large piece of data, give it a numeric identifier, and then reference the data through its identifier rather than repeatedly including the data in future messages.

Depositing data like this with the other peer requires them to spend resources (memory), so immediately questions of resource control and throttling arise. We first sketch a lightweight approach to maintaining data handles without considering resource concerns, and then show how to implement that approach on top of logical channels such that resource concerns are addressed.

Requirements

We consider a client who asks a server to bind some data of some type to numeric resource handles. The same protocol might use multiple types of data, each with its own independent handle types (and independent resource control).

Since both the client and the server must keep track of all resource handles, they should both be able to remove bindings. We call this operation freeing a resource handle.

Note that we only discuss resource handles that live during a particular protocol run between two peers. We do not consider the problem of persisting peer-specific state across sessions.

Managing Resource Handles

We use consecutive natural numbers as resource handles. For each handle type, both peers keep track of the least number that has not yet been assigned as a handle of that type. In order to bind a value to a resource handle, the client simply transmits the data and its type. The least number that had not been assigned before becomes the resource handle of the transmitted data. Both peers add the new mapping into some data structure they can use to later dereference the resource handle into its value.
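
A sketch of this bookkeeping for a single handle type (names ours). Because both peers run the same deterministic assignment, they agree on every handle without negotiating it:

```rust
use std::collections::HashMap;

struct HandleMap<V> {
    next_handle: u64,          // least number not yet assigned as a handle
    bindings: HashMap<u64, V>, // dereference a handle to its value
}

impl<V> HandleMap<V> {
    /// Both peers run this when a value is transmitted for binding.
    fn bind(&mut self, value: V) -> u64 {
        let handle = self.next_handle;
        self.next_handle += 1;
        self.bindings.insert(handle, value);
        handle
    }
}

fn main() {
    let mut handles = HandleMap { next_handle: 0, bindings: HashMap::new() };
    assert_eq!(handles.bind("some large piece of data"), 0);
    assert_eq!(handles.bind("another piece of data"), 1);
}
```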

The main issue in maintaining resource handles is that of coordinating freeing over an asynchronous communication channel. Suppose one peer removes a handle-to-value binding from their local mapping, sends a message to inform the other peer, but that peer has concurrently sent some message that pertains to the resource handle in question. Everything breaks! Hence, freeing must be a three-step process:

  1. one peer sends a message that proposes to free a resource handle (and, in doing so, that peer commits to not referring to that handle in any of its future messages),
  2. the other peer, upon receiving that proposal, marks the corresponding handle-to-value binding in its local mapping for deletion (we explain below why the binding cannot be deleted immediately) and sends a confirmation message,
  3. the first peer, upon receiving the confirmation, can then also mark the handle-to-value binding in its local mapping for deletion.

There is no need to use separate message types for proposals and confirmations of freeing: peers react to a proposal to free by sending their own proposal to free the same resource handle. Receiving a proposal to free a resource handle after having sent such a proposal oneself confirms that the handle-to-value binding can be removed from the local mapping. A nice side effect of this approach is that it gracefully handles concurrent proposals to free the same resource handle by both peers.

Why do the peers merely mark mappings for deletion rather than immediately removing them? Because they might still buffer unprocessed messages for some logical channels that reference the resource handle being freed. To deal with this, a peer can keep a counter with every resource handle that gets incremented whenever a message that references the resource handle is moved to a buffer. Whenever such a message has been fully processed, the counter is decremented again. The peer locally deletes a binding if it is marked for deletion while its counter is zero, or if its counter reaches zero after it has been marked for deletion.
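
A sketch combining the symmetric proposals with the deferred deletion (all names ours; `value` stands for the bound data):

```rust
use std::collections::{HashMap, HashSet};

/// Sketch of freeing with deferred deletion, for a single handle type.
struct Binding<V> {
    value: V,
    in_flight: u64, // buffered-but-unprocessed messages referencing this handle
    marked_for_deletion: bool,
}

struct HandleMap<V> {
    bindings: HashMap<u64, Binding<V>>,
    proposed_to_free: HashSet<u64>, // our own proposals awaiting confirmation
}

impl<V> HandleMap<V> {
    /// We received a proposal to free `handle`. If we had proposed the same
    /// handle ourselves, this message doubles as the confirmation; otherwise
    /// we confirm by sending our own proposal (signalled by returning true).
    fn receive_free_proposal(&mut self, handle: u64) -> bool {
        let confirm_by_proposing = !self.proposed_to_free.remove(&handle);
        self.mark_for_deletion(handle);
        confirm_by_proposing
    }

    fn mark_for_deletion(&mut self, handle: u64) {
        let deletable = match self.bindings.get_mut(&handle) {
            Some(b) => {
                b.marked_for_deletion = true;
                b.in_flight == 0
            }
            None => false,
        };
        if deletable {
            self.bindings.remove(&handle); // no buffered references: delete now
        }
    }

    /// Called whenever a buffered message referencing `handle` has been
    /// fully processed, decrementing the handle's reference counter.
    fn message_processed(&mut self, handle: u64) {
        let deletable = match self.bindings.get_mut(&handle) {
            Some(b) => {
                b.in_flight -= 1;
                b.marked_for_deletion && b.in_flight == 0
            }
            None => false,
        };
        if deletable {
            self.bindings.remove(&handle); // last buffered reference is gone
        }
    }
}
```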

Resource Control

Adding handle-to-value bindings to local mappings requires space, so the server should have a way to throttle the client. We use a simple solution: for each handle type, the messages for binding new resource handles are sent over their own logical channel. Throttling that logical channel controls the number of resource handles that can be created.

Crucially, guarantees for the logical channel for issuing resource handles of some handle type must guarantee not only that the messages will be buffered, but that they will be processed and the resource handles will be bound. (If bindings are to be stored in a complex data structure with a fragmented memory layout, the server must issue guarantees based on conservative approximations of memory usage.) This ensures that the client knows which resource handles are safe to reference.

The client may send optimistic messages on such a logical channel. The client may even reference these optimistically bound resource handles in arbitrary other messages. When the server receives a message that references a resource handle that is greater than the greatest resource handle it has bound of that handle type, it must first check whether it has unprocessed messages for binding resource handles of this type, and process as many of them as possible.

If processing those messages binds the resource handle in question, the server can then resume processing the optimistic reference to that resource handle as if nothing had happened. If, however, it then still has not bound the resource handle in question, then the message for binding the resource handle must have been dropped, and the server has already sent an AnnounceDropping message, from which the client can infer that its optimistic reference to the resource handle could not be processed either. The server then simply drops the message without any further processing.
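
A sketch of this check (names ours; the storage of bound values is elided):

```rust
use std::collections::VecDeque;

enum Disposition {
    Process,      // the referenced handle is bound; continue normally
    DropSilently, // the corresponding BindHandle message was dropped
}

/// Sketch of the server's check when a message references handle `h` of some
/// handle type; `next_unbound` is the least handle of that type not yet bound,
/// and `pending_binds` holds buffered, unprocessed BindHandle values in order.
fn on_handle_reference(
    h: u64,
    next_unbound: &mut u64,
    pending_binds: &mut VecDeque<Vec<u8>>,
) -> Disposition {
    // Process as many buffered bind messages as possible; each one binds
    // the next consecutive handle (actual storage of the values omitted).
    while *next_unbound <= h {
        match pending_binds.pop_front() {
            Some(_value) => *next_unbound += 1,
            None => break,
        }
    }
    if h < *next_unbound {
        Disposition::Process
    } else {
        // The bind message must have been dropped, and an AnnounceDropping
        // message was already sent, so the client can infer that this
        // reference could not be processed either.
        Disposition::DropSilently
    }
}
```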

Message Types

The following pseudo-types summarise the messages for maintaining resource handles. The parameter H is a type with one value for each handle type. Each message pertains to exactly one handle type, specified by its handle_type field. BindHandle messages belong to logical channels determined by their handle_type field; FreeHandle messages are control messages that do not belong to any logical channel.

struct BindHandle
The client directs the server to bind the next unused numeric resource handle to the given value. An actual protocol probably wants to define dedicated message types for the BindHandle messages of every handle type, each with its own associated type of values.

struct FreeHandle
A peer asks the other peer to free a resource handle. A boolean field indicates whether the peer sending this message is the one who created the handle (true) or not (false); this is needed for symmetric protocols where peers act as both client and server simultaneously and bind resource handles to the same handle types.
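
As Rust structs (a sketch: the field names handle and mine are our labels for what is described above, and Vec<u8> is a placeholder for a handle type's value type):

```rust
/// Placeholder for a protocol's enumeration of handle types (the
/// parameter H above).
type HandleType = u64;

struct BindHandle {
    handle_type: HandleType,
    value: Vec<u8>, // a real protocol gives each handle type its own value type
}

struct FreeHandle {
    handle_type: HandleType,
    handle: u64, // the resource handle to free
    mine: bool,  // true iff the sender of this message created the handle
}
```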