Or at least I assume that's what the call I missed because [reasons this margin is too small to contain] was about, based on (i) the voicemail that said They'll Call Back Tomorrow, and (ii) the continued absence of the relevant test results in the NHS app.
I... think I am going to suggest that they ask my GP to issue a bloods request form, for me to pick up from the surgery and take up the hill to phlebotomy. Because! this is ridiculous! blood loss remains my job!!!
Other things today has contained include: TOKEN RIDICULOUS PUZZLE; Very Picturesque Bread; the Child assigning us all Pronouns and Genders and Sexualities more-or-less at random (from an LGBTQIA+ sticker book); PAKIDGES many and various Including another book on pain and box sets for the last two seasons of Elementary; lots of ridiculous windows in the general vicinity of Bank. I am very tired.
Not many people outside of infectious disease specialists may realize it, but the order Mononegavirales is really bad news for human health. Inside that one you can find measles (the fashionable infection of 2026, damn it all), RSV (always with us), mumps, rabies, and even Ebola, which I very much hope does not become a hot item in any year.
There are plenty of differences between all these (there are eleven families in this order), but something that the Mononegavirales species have in common is the existence of “viral factories” (VFs). These are concentrated blobs of viral proteins that form in infected cells and serve to crank out the pieces of new viral particles for further infection. They are, in fact, phase-separated condensates (which shows again how useful that physical behavior is across different systems - I wrote about these most recently here). But there’s been a mystery about them, as this paper explains well. It’s generally believed, with good reason, that such condensates can only form when the concentrations of the proteins that make them up get over a certain threshold. But when an infection is just starting out, there doesn’t seem to be any way for that to be possible. You’d need viral factory condensates to make that much protein, and you can’t condense to get such VFs unless the protein is already there - or can you?
The authors show the way out of this paradox. For RSV, viral factories are formed by the viral nucleoprotein (N) and the viral phosphoprotein (P), and also contain the “large” protein (L) and its cofactors, the viral RNA polymerase, and various RNA transcripts. But there are “pre-replication centers” (PRCs) that form before these VFs are able to completely assemble, and these are imaged here for the first time. They are the seeds of VF condensate formation, in what is basically a feed-forward process: protein production starts at a lower and less efficient level, but these viral proteins are strongly recruited to the PRCs in turn, which makes them even more productive, which makes more protein, and. . .you get the idea! Before long you have the full-fledged viral factories that have been known for some time as a hallmark of RSV-infected cells. This is how the condensates get bootstrapped from low-concentration beginnings.
An unexpected result was that when you look at individual RSV particles (virions) themselves, some of them are much more “PRC-competent” than others. Indeed some of the virions are actually pretty terrible at replication, because they don’t have pre-formed PRCs ready to go in them when they infect a cell. It looks very much like an RSV infection in a whole animal is driven by the virions that do have the PRCs assembled for delivery; the others turn more or less into bystanders (although what viral proteins they do produce probably get recruited over to those other strongly-binding PRCs from other virions that have hit the same cell).
But there’s a lot of cell-to-cell heterogeneity in an RSV infection, and these results suggest why: some of these cells have been hit by far more PRC-active virions and some of them haven’t. This raises a lot of interesting questions, for sure. What exactly are the factors that make PRCs assemble more in some virions than others? Do the PRCs themselves vary in their ability to nucleate viral factories in turn, and if so, what factors drive those differences? A larger question is evolutionary: you’d think that there would be a selection advantage in having efficient PRC formation and that over time you just wouldn’t see those less efficient virions at all. This makes you wonder if there really is an effective selection mechanism at the genetic level or if there’s some random process that’s mixing things up at a slightly later stage.
And moving beyond the Mononegavirales order, there are plenty of other viruses that have to deal with the starting-from-scratch problem when they first infect a cell. Indeed, there are many other kinds that seem to form condensates during their attacks on cells. Do they also do some kind of condensate-seeding trick to get things going? Or will that possibly turn out to be a trick that just the crazily-infectious ones have hit on? And as the authors note, there are certainly also implications for condensate formation in general, as we work out the sequences and interactions that make this feed-forward process work so well. Onward. . .
The SAFEARRAY is a unique-owner object. Once the owner calls SafeArrayDestroy, the array is gone. You can extend the lifetime of the memory with SafeArrayAddRef, but the data itself has vanished. The purpose of extending the lifetime of the memory is not to let you keep using the array, but rather to avoid accessing freed memory in case an array is destroyed out from under you.
If you don’t want the owner to call SafeArrayDestroy, you have to convince the current owner to let you take ownership. Then you become the one responsible for calling SafeArrayDestroy, and you can do that when you are finished with the array.
One way I thought of was to have the caller pass a VARIANT as an in/out parameter. On the way in, it contains a SAFEARRAY. You can then detach the parray from the variant and reset the variant’s type to VT_EMPTY. Detaching the parray lets you take ownership of the array, and setting the type to VT_EMPTY tells the caller that it no longer has an array.
Recall that the original intent was to allow the method to continue operating on the array after returning, while still avoiding the cost of having to create a separate copy of the array. Taking the parameter as an in/out variant makes it clear that the method might decide to change the array to something else, so if the caller still needs the data in the array, it had better pass a copy of the data, so that it can still use the original. So in that case, you didn’t really avoid the copy; you simply moved the copy into the caller. But in the case that the caller was going to destroy the array anyway, it could put the array in the variant, and if the method destroys the array as part of the method call, then no big deal. The array was getting destroyed anyway.
This isn’t a great solution, but at least it’s something.
Sabrina Goldfarb, an engineer on GitHub Copilot, noticed something odd. She was getting great results from AI tools at GitHub, but the senior engineers around her kept saying the tools were terrible. Same models, wildly different experiences.
“I kept hearing the more senior engineers at my company being like, this is terrible, right? I’m not getting the right outputs,” she explains in her course. “And I was just like, why? If I’m getting really good outputs, why are you not getting really good outputs?”
So she dug in. And what she found is that prompt engineering isn’t magic. It’s a learnable technique.
Here’s the thing: senior engineers actually have a massive advantage with these tools once they learn the basics. They know what questions to ask. They understand the edge cases, the architecture decisions, the hundred small things required to ship real software. A junior developer might accept the first output an AI gives them. A senior developer knows what’s missing.
The gap isn’t about AI aptitude. It’s just technique. And the techniques work whether you’ve been coding for 20 years or 20 days.
Here’s a small example of what she means. In her course, she asks an AI to build a Prompt Library app with a simple request: save and delete functionality, clean and professional, HTML/CSS/JavaScript. The result? The AI added search functionality nobody asked for, an export button that wasn’t requested, and a save button that didn’t work.
Same task, more specific prompt (spelling out exactly what each button should do, what to store, what not to add) and suddenly it works. “The quality of the question directly relates to the quality of the answer,” she says. “This shouldn’t be surprising to us, right? This is the same thing with us as humans.”
The Fundamentals Actually Matter
The research on this is striking. There’s a technique called chain-of-thought prompting that boils down to adding “let’s think step by step” to your prompts. That’s it. Five words.
In one study Sabrina references, accuracy on reasoning tasks jumped from 17.7% to 78.7% just by adding that phrase.
“I cannot think of five words in the English language that could possibly help more with your prompts,” she says.
This is the unsexy truth about AI-assisted development: the fundamentals matter more than the features. Prompting patterns, context management, knowing when to let an agent run versus when to just write the code yourself. Get these right and every tool gets better.
A Path Through All of This
We didn’t want to create courses about AI theory or speculation. We wanted to show you exactly how working engineers are using these tools right now, in production, to get real work done.
If you’ve never learned to prompt properly, start with Sabrina Goldfarb’s Practical Prompt Engineering. Zero-shot, one-shot, few-shot techniques, chain-of-thought prompting, structured outputs. Three hours and 43 minutes that will change how you interact with every AI tool you use.
If you want to understand how agents actually work, Scott Moss from Netflix walks you through building agents from scratch. Not using a framework. Building the thing yourself so you understand exactly what’s happening when you hand off work to an AI agent.
If you’re already using Cursor or Claude Code but feel like you’re fighting the tools, Steve Kinney from Temporal shows you his professional AI dev setup. When to use inline edits versus background agents. How to set up guardrails. How to get unstuck when agents go off track. As one student put it: “He gives a lot of good tips and a realistic view of the capabilities of AI tools.” That realistic view is what separates useful instruction from hype.
If you want to connect AI to your actual workflows, Brian Holt from Databricks built an MCP course because he’s actively using the Model Context Protocol in his work. One student, Daniel W., said the course “set me off on my journey to create my company’s workflow MCP server, which could be used by other devs within my work community.” That’s the goal: not theoretical knowledge, but tools you use the next day.
If you want to understand the fundamentals beneath all of this, Will Sentance’s Hard Parts of Neural Networks takes you under the hood of how AI models are trained. Hand-build neural networks. Understand how prediction actually works. One student said the course “made some of the concepts in the field of AI less intimidating while building great mental models for understanding.” This depth matters because AI keeps evolving and understanding the foundations helps you adapt to whatever comes next.
Why This Matters for Senior Engineers
The companies hiring right now want people who can prompt effectively, who understand when to use agents versus when to write code themselves, who can debug when AI tools hallucinate or go off track. They want engineers who understand the technology, not just use it blindly.
Senior engineers already have the hard part: the judgment, the taste, the understanding of what good software looks like. The prompting techniques are the easy part. A few hours of learning the fundamentals, and all that experience becomes leverage.
Check out the full AI Learning Path and start with Practical Prompt Engineering. The developers who’ve spent years learning what to build are exactly the ones who’ll benefit most from learning how to ask for it.
A customer had a method that received a SAFEARRAY, and they called SafeArrayAddRef to add a reference to it so that they could continue using the array after the method returned, but they found that if they tried to use the array later, the safearray->pData was NULL, and if they called SafeArrayAddRef again, it also gave them a null data pointer. They wanted to know if this was expected behavior. “Does COM invalidate a SAFEARRAY when a method returns, ignoring the reference count?”
SAFEARRAYs are not reference-counted objects. When somebody says to destroy them, they are destroyed.
The SafeArrayAddRef function lets you extend the memory lifetime of the array descriptor and the array data, but the useful lifetime ends when the array is destroyed. The purpose of the SafeArrayAddRef function is to prevent you from operating on freed memory if somebody destroys the array out from under you.
In this case, what happens is that the caller provides a SAFEARRAY, and you call SafeArrayAddRef to extend its memory lifetime. When you return, the caller decides that it was a temporary array, so it calls SafeArrayDestroy. This renders the array contents useless, but since you added a reference (to the descriptor and the data), the memory for them is not freed, even though they don’t contain anything useful any more.
After the array has been destroyed, your calls to SafeArrayAddRef continue to extend the lifetime of the array descriptor and the data in it, but the descriptor had already been emptied out when the array was destroyed, so there is no data in the array any more. You are extending the lifetime of no data, which is why SafeArrayAddRef produces a null pointer as the data pointer.
To access the original memory, you need to do it through the pointer returned from the original call to SafeArrayAddRef.
But even that won’t help you, because the memory for the array data is zeroed out when the array is destroyed. You have a pointer to a bunch of zeroes.
Next time, we’ll look at ways of solving the customer’s problem, now that we understand why their approach didn’t work.
The second attempt at a present for my mother has arrived Several Whole Days before I am next going to see her! Hurrah! (About ten days after I'd received a notification that the previous attempt was ready to ship, and I'd be hearing more from the courier Drekly, I... realised I had heard nothing more from the courier. Apparently the parcel evaporated, but the company sent the order back to the workshop as a priority job...)
I successfully exchanged blood for a bowel prep kit! The blood results have not yet shown up in the NHS app, but fingers crossed for them coming through... drekly.
Allotment! Post-bloods I took myself to the plot to empty the compost pail, and accidentally did a whole pile of weeding, thereby establishing that the garlic chives have overwintered successfully (thus far) even if they're looking a bit bedraggled; that I do in fact have a lot of garlic I failed to harvest last year that's coming up merrily now (which I am contemplating redistributing in aid of maybe getting bigger bulbs out of it...); and that there are going to be So Many Beetroot. (Largely self-seeded.) (I did accidentally eat some of the garlic chives, Contra Bowel Prep Instructions, because apparently I Ought Not Be Trusted At The Allotment when I'm on a low-residue diet, BUT I successfully did NOT eat ANY of the spinach or rocket or lamb's lettuce.)
I consolidated enough of my Book Piles to unearth the coffee table! AND THUS we have begun a puzzle, which I am greatly enjoying.
Tinned pears. Tinned pears are always a Treat that is a Small Luxury, and they are especially so this week. ...it is possible that I am going to go through my entire stash.
Here’s a paper that illustrates an important topic in med-chem, one that an awful lot of ink and pixels have been spilled on over the years. When we talk about affinity of a drug to a target, the binding constants that we measure have a lot of thermodynamics packed inside them. Like every other chemical reaction and interaction, the favorable ones show a decrease in overall Gibbs free energy for the system (delta-G), but one should never forget that the equation for that energy change has two terms.
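For the record, that relationship (along with the way the measured binding, or association, constant ties into it) is:

$$\Delta G = \Delta H - T\Delta S \qquad\qquad \Delta G^{\circ} = -RT\ln K_{a}$$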
You have enthalpy (delta-H), which consists of a lot of the things that we typically think of as driving binding interactions (acid-base pairs, hydrogen bonding, pi-electron interactions, and so on), but there’s also that temperature-and-entropy term (T delta-S). Entropy is a bit more of a slippery concept, but one way to start thinking about it (although not the whole story) is order and disorder. Compare the starting state and the end state of your process: in which one of them are the components more orderly (fixed in their conformations and positions, for example) or disorderly (able to move around more freely)? As the reaction proceeds, how does the total amount of that order and disorder change? “More disorderly” is by itself energetically favored, as a look around your surroundings will generally demonstrate. That tends to hold whether you’re looking at your chemical reactions, your bookshelf, your laundry, inside your refrigerator, or at the state of your nation’s political system.
But totaling up that entropy in a binding event is no small matter. You have to look at the ligand that’s binding, of course, and you’d think that much of the time it’s going to lose entropy as it binds (since it’s snuggling into position in the binding site itself, as opposed to floating around out there in solution). But that “floating around in solution” brings you to consider the water molecules that it’s surrounded by out there. If they’re forming a fairly orderly solvation shell around your ligand, that’s going to be broken up as it moves into the binding site, and you might pick up some favorable increased entropy that way. But then there’s that binding site! What’s the entropic state of the protein target before and after binding - more ordered overall, or not? Remember that distant domains might be changing position, not just the areas around the binding site, and they all have water molecules around them, too. The binding site itself may have some key water molecules involved in its structure, and the changes there can run the whole range of positive or negative entropic effects depending on the situation. There are a lot of different single-water-molecule situations with proteins! It is indeed a pain in the rear, to use a thermodynamic term of the art.
In many situations, enthalpic effects and entropic effects seem to be working at cross purposes to each other. This “entropy-enthalpy compensation” is what people have been arguing about for at least the last thirty years, because it sometimes seems like some perverse but inescapable law of nature and sometimes like just an artifact of how we’re viewing the problem. And it does have to be said that the two don’t cancel each other out all the time, or we’d have no way to optimize the binding of our drug candidates at all!
The paper linked above is looking at an old tricyclic drug, doxepin, and its (rather strong) binding to the histamine-1 receptor. Like a lot of other simple tricyclics of that general class, it binds to all sorts of other stuff as well, as do its metabolites, making it a messy proposition in vivo. You can see the list at that link. But it has had many years of use as an antihistamine, antidepressant, anxiolytic, sleeping aid, and so on, although it's largely fading into the past in most of these areas. My first thought when I saw the structure was "I'll bet that stuff can put you on the floor", and I believe that's an accurate statement.
You’ll note that because of that double bond there are two isomers, Z and E doxepin (from the good ol’ German “zusammen” and “entgegen” - if you keep digging in organic chemistry you’ll eventually hit a German layer). The Z isomer reproducibly binds better than the E (two- to five-fold better depending on your assay), but they’re both down in the low nanomolar range. What the present paper finds, on close examination by isothermal titration calorimetry, is that the Z isomer’s binding is almost entirely enthalpy-driven with only a very small change in the entropy term. The E isomer, though, is notably less enthalpically favorable, but makes up a lot of that with an improved entropy term. And there’s why we keep talking about entropy-enthalpy compensation!
Put simply, maybe too simply, the Z isomer has better interactions with the protein itself, but those remove a lot of its conformational flexibility. Meanwhile, the E isomer doesn’t have as strong an enthalpy hand to play, but since it doesn’t lose as much flexibility while binding it doesn’t take the loss-of-entropy hit along the way like the Z isomer had to. So the two of them end up much closer than you otherwise might have guessed.
Studies on mutant receptors showed that a particular tyrosine hydroxyl group in the receptor is a big player in these differences. If you mutate that one to a valine, the two isomers bind almost identically, and with almost identical values for their entropy and enthalpy terms, to boot. It’s pointed toward the tricyclic ring of the structure (but isn’t making a hydrogen bond with the oxygen up there, if that’s what you were thinking). Your first guess might also have been something to do with the basic nitrogen down at the other end of the molecule, but that would also have come up short; things don’t seem to differ much down there for the two isomers.
Subtle details all the way down! But that’s medicinal chemistry, and that’s just one of the many reasons why it ain’t easy. . .
In part 1, we talked about how single flight mutations allow you to update data, and re-fetch all the relevant updated data for the UI, all in just one roundtrip across the network.
We implemented a trivial solution for this, which is to say that we threw caution (and coupling) to the wind and just re-fetched what we needed in the server function we had for updating data. This worked fine, but it was hardly scalable, or flexible.
In this post, we’ll accomplish the same thing, but in a much more flexible way. We’ll define some refetching middleware that we can attach to any server function. The middleware will allow us to specify, via react-query keys, what data we want re-fetched, and it’ll handle everything from there.
We’ll start simple, and keep on adding features and flexibility. Things will get a bit complex by the end, but please don’t think you need to use everything we’ll talk about. In fact, for the vast majority of apps, single flight mutations probably won’t matter at all. And don’t be fooled: simply re-fetching some data in a server function might be good enough for a lot of smaller apps as well.
But in going through all of this we’ll get to see some really cool TanStack, and even TypeScript features. Even if you never use what we go over for single flight mutations, there’s a good chance this content will come in handy for something else.
Our First Middleware
TanStack Query (which we sometimes refer to as react-query, its package name) already has a wonderful system of hierarchical keys. Wouldn’t it be great if we could just have our middleware receive the query keys of what we want to refetch, and have it just… work? Having the middleware figure out how to refetch does seem tricky, at first. Sure, our queries have all been simple calls (by design) to server functions. But we can’t pass a server function reference up to the server; functions are not serializable. How could they be? You can send strings and numbers (and booleans) across the wire, serialized as JSON, but sending a function (which can have state, close over context, etc) makes no sense.
Unless they’re TanStack Start server functions, that is.
It turns out the incredible engineers behind this project customized their serialization engine to support server functions. That means you can send a server function to the server, from the client, and it will work fine. Under the covers, server functions have an internal ID. TanStack picks this up, sends the ID, and then de-serializes the ID on the other end.
To make this even easier, why don’t we just attach the server function (and the argument it takes) right into the query options we already have defined. Then our middleware can take the query keys we want re-fetched, look up the query from TanStack Query internals (which we’ll dive into) and just make everything work.
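Here’s a sketch of what that might look like for the epics list query (the exact options and key shape in the real app may differ a bit, but the meta section is the important part):

export const epicsListQueryOptions = (page: number) =>
  queryOptions({
    queryKey: ["epics", "list", page],
    // our queries are just thin wrappers around server function calls
    queryFn: () => getEpicsList({ data: page }),
    meta: {
      __revalidate: {
        serverFn: getEpicsList,
        arg: page,
      },
    },
  });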
Note the new meta section. This allows us to add any random metadata that we want to our query. Here we send over a reference to the getEpicsList server function, and the arg it takes. If this duplication makes you uneasy, stay tuned. We’ll also update the summary query (for the counts) the same way, though that’s not shown here.
Let’s build this middleware piece by piece.
// the server function and args are all `any` for now, to keep things simple;
// we'll see how to type them in a bit
type RevalidationPayload = {
  refetch: {
    key: QueryKey;
    fn: any;
    arg: any;
  }[];
};

type RefetchMiddlewareConfig = {
  refetch: QueryKey[];
};

export const refetchMiddleware = createMiddleware({ type: "function" })
  .inputValidator((config?: RefetchMiddlewareConfig) => config)
  .client(async ({ next, data }) => {
    const { refetch = [] } = data ?? {};
We define an input to the middleware. This input will automatically get merged with whatever input is defined on any server function this middleware winds up attached to.
We define our input as optional (config?) since it’s entirely possible we might want to sometimes call our server function and simply not refetch anything.
Now we start our client callback, which runs directly in our browser. We’ll first grab the array of query keys we want refetched.
const { refetch = [] } = data ?? {};
Then we’ll get our queryClient and the cache attached to it, and define the payload we’ll send to the server callback of our middleware, which will do the actual refetching.
If you’ve never touched TanStack’s middleware before and are feeling overwhelmed, my middleware post might be a good introduction.
Our queryClient is already attached to the main TanStack router context, so we can get the router, and just grab it.
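A sketch of that setup (getRouterInstance is the same helper that shows up in the full middleware listing later in this post):

const router = await getRouterInstance();
const queryClient: QueryClient = router.options.context.queryClient;
const cache = queryClient.getQueryCache();

// the payload we'll hand off to the server callback
const revalidate: RevalidationPayload = { refetch: [] };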
Remember before when we added that __revalidate payload to our query options, with the server function, and arg? Let’s look in our query cache for each key, and retrieve the query options for them.
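Roughly, the lookup looks like this (a sketch; the meta handling inside the loop is shown just below):

refetch.forEach((key) => {
  // find the cached query for this (exact) key
  const entry = cache.find({ queryKey: key, exact: true });
  if (!entry) return;
  // grab the __revalidate meta and push it onto our payload — see below
});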
The check if (!entry) return; protects us from refetches being requested for queries that don’t exist in cache—ie, if they’ve never been fetched in the UI. If that happens, just skip to the next one. We have no way to refetch it if we don’t have the serverFn.
You could expand the input to this middleware and send up a different payload of query keys, along with the actual refetching payload (including server function and arg) for queries you absolutely want run, even if they haven’t yet been requested. Perhaps you’re planning on redirecting after the mutation, and you want that new page’s data prefetched. We won’t implement that here, but it’s just a variation on this same theme. These pieces are all very composable, so build whatever you happen to need!
This code then grabs the meta object, and puts the properties onto the payload we’ll send to the server.
const revalidatePayload: any = entry?.meta?.__revalidate ?? null;
if (revalidatePayload) {
  revalidate.refetch.push({
    key,
    fn: revalidatePayload.serverFn,
    arg: revalidatePayload.arg,
  });
}
Try not to let the various any types bother you; I’m omitting some type definitions that would have been straightforward to define, in order to help prevent this long post from getting even longer.
Calling next triggers the actual invocation of the server function (and any other middleware in the chain). The sendContext arg allows us to send data from the client, up to the server. The server is allowed to call next with a sendContext payload that sends data back to the client.
const result = await next({
sendContext: {
revalidate,
},
});
The result payload is what comes back from the server function invocation. The context object on it will have a payloads array, returned from the .server callback just below, with entries containing a key (the query key), and result (the actual data). We’ll loop it, and update the query data accordingly.
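That loop is the same few lines that reappear in the final listing below:

// @ts-expect-error
for (const entry of result.context?.payloads ?? []) {
  queryClient.setQueryData(entry.key, entry.result);
}

return result;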
We’ll fix the TS error covered up with // @ts-expect-error momentarily.
We immediately call next(), which runs the actual server function this middleware is attached to. We pass a payloads array in sendContext. This governs what gets sent back to the client callback (that’s how .client got the payloads array we just saw it looping through).
Then we run through the revalidate payloads sent up from the client. The client sent them via sendContext, and we read them from the context object (send context, get it?). We then call all the server functions, and add to that payloads array.
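Putting that together, the server callback might look something like this (a sketch — in particular, exactly how the original code wires the payloads array back up after next() may differ slightly):

.server(async ({ next, context }) => {
  // run the actual server function (and the rest of the chain) right away;
  // this payloads array is what gets sent back down to the client callback
  const payloads = [] as any[];
  const result = await next({
    sendContext: {
      payloads,
    },
  });

  // now run each requested refetch and stash the fresh data for the client
  for (const entry of context.revalidate?.refetch ?? []) {
    const data = await entry.fn({ data: entry.arg });
    payloads.push({ key: entry.key, result: data });
  }

  return result;
});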
// @ts-expect-error
for (const entry of result.context?.payloads ?? []) {
This line runs in the .client callback, after we call next(). Essentially, we’re trying to read properties sent back to the client, from the server (via the sendContext payload). This runs, and works properly. But why don’t the types line up?
I covered this in my Middleware post linked above, but our server callback can see what gets sent to it from the client, but the reverse is not true. This knowledge just inherently does not go in both directions; the type inference cannot run backwards.
The solution is simple: just break the middleware into two pieces, and make one of them a middleware dependency on the other.
const prelimRefetchMiddleware = createMiddleware({ type: "function" })
  .inputValidator((config?: RefetchMiddlewareConfig) => config)
  .client(async ({ next, data }) => {
    const { refetch = [] } = data ?? {};
    const router = await getRouterInstance();
    const queryClient: QueryClient = router.options.context.queryClient;

    // same as before

    return await next({
      sendContext: {
        revalidate,
      },
    });
    // those last few lines are removed
  })
  .server(async ({ next, context }) => {
    const result = await next({
      sendContext: {
        payloads: [] as any[],
      },
    });
    // exactly the same as before
    return result;
  });

export const refetchMiddleware = createMiddleware({ type: "function" })
  .middleware([prelimRefetchMiddleware]) // <-------- connect them!
  .client(async ({ next }) => {
    const result = await next();
    const router = await getRouterInstance();
    const queryClient: QueryClient = router.options.context.queryClient;

    // and here's those last few lines we removed from above
    for (const entry of result.context?.payloads ?? []) {
      queryClient.setQueryData(entry.key, entry.result);
    }

    return result;
  });
It’s the same as before, except everything in the .client callback after the call to next() is now in its own middleware. The rest is in a different middleware, which is an input to this one. Now when we call next in refetchMiddleware, TypeScript is able to see the data that’s been sent down from the server, since that was done in prelimRefetchMiddleware, which is an input to this middleware, which allows TypeScript to fully see the flow of types.
Wiring it Up
Now we can take our server function for updating an epic, remove the refetches, and add our refetch middleware.
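Something along these lines (saveEpic and the input shape are stand-ins here; the point is the .middleware([refetchMiddleware]) call and the absence of any hand-rolled refetching):

export const updateEpic = createServerFn({ method: "POST" })
  .middleware([refetchMiddleware])
  .handler(async ({ data }) => {
    // persist the change — no manual refetching in here anymore;
    // the middleware takes care of that
    await saveEpic(data);
  });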
We set it up to call from our React component with the useServerFn hook, which handles things like errors and redirects automatically.
const runSave = useServerFn(updateEpic);
Remember when I said that inputs to middleware are automatically merged with inputs to the underlying server function? We can see that first hand when we call the server function.
(unknown[] is the correct type for react-query query keys)
Now we can call it, and specify the queries we want refetched.
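One plausible call shape (the epic fields are illustrative, and the summary key is a guess at how that query is keyed; the refetch array is the middleware’s merged-in input):

await runSave({
  data: {
    id: epic.id,
    name: newName,
    refetch: [
      ["epics", "list"],
      ["epics", "summary"],
    ],
  },
});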
When we run it, it works. Both the list of epics and the summary list correctly update with our changes, without any new requests in the network tab. When testing single flight mutations, we’re not really looking for something to indicate that it worked, but rather the absence of new network requests for updated data.
Improving Things
Query keys are hierarchical in react-query. You might already be familiar with this. Normally, when updating data, it would be common to do something like:
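That is, the usual invalidate-by-prefix call:

queryClient.invalidateQueries({ queryKey: ["epics", "list"] });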
Which refetches any queries whose key starts with ["epics", "list"]. Can we do something similar in our middleware? Like, just pass in that key prefix and have it find and refetch whatever’s there?
Let’s do it!
Getting the matching keys will be slightly more complicated. Each key we pass up will potentially be a key prefix, matching multiple entries, so we’ll use flatMap to find all matches, with the nifty cache.findAll method.
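Roughly, a sketch of the lookup inside the middleware’s client callback:

const entries = refetch.flatMap((key) => cache.findAll({ queryKey: key }));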
Our solution still isn’t ideal. What if we page around in our epics page (up to page 2, up to page 3, then back down)? Our solution will find page 1, and our summary query, but also pages 2 and 3, since they’re now in cache. But pages 2 and 3 aren’t really active, and we shouldn’t refetch them, since they’re not even being displayed.
Let’s change our code to only refetch active queries. Detecting whether a query entry is actually active is as simple as adding the type argument to findAll.
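Same lookup as before, now limited to queries with active observers (a sketch):

const entries = refetch.flatMap((key) =>
  cache.findAll({ queryKey: key, type: "active" })
);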
This works. But when you think about it, those other, inactive queries should probably be invalidated. We don’t want to waste resources refetching them immediately, since they’re not being used; but if the user were to browse back to those pages, we probably want the data refetched. TanStack Query makes that easy, via the invalidateQueries method.
We’ll add this to the client callback of the middleware we feed into.
Loop the query keys we passed in, and invalidate any of the inactive queries (the active ones have already been refetched), but without refetching them.
We tell TanStack Query to invalidate (but not refetch) any inactive queries matching our key.
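In code, that’s roughly one invalidateQueries call per key we were handed, filtered to inactive matches and with refetching switched off:

for (const key of refetch) {
  queryClient.invalidateQueries({
    queryKey: key,
    type: "inactive",
    refetchType: "none",
  });
}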
This works perfectly. If we browse up to pages 2 and 3, and then back to page 1, then edit an epic, we do in fact see our list and summary list update. If we then page back to pages 2 and 3, we’ll see network requests fire to get fresh data.
Icing on the Cake
Remember when we added the server function, and the arg it takes to our query options?
It’s just a simple helper that takes in your query key, server function and arg, and returns back some of our query options: our queryKey (to which we add whatever argument we need for the server function), the queryFn which calls the server function, and our meta object.
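A sketch of that helper, with the loose typing we currently have (the name refetchedQueryOptions matches how it’s used later in this post):

const refetchedQueryOptions = (queryKey: unknown[], serverFn: any, arg?: any) => ({
  // tack the server function's argument onto the key
  queryKey: [...queryKey, arg],
  // the query just calls the server function
  queryFn: () => serverFn({ data: arg }),
  // and this is what the refetch middleware digs back out of the cache
  meta: {
    __revalidate: { serverFn, arg },
  },
});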
This works, but it’s not great. We have any types everywhere, which means the argument we pass to our server function is never type checked. Even worse, the return value of our queryFn is not type checked, which means our queries (like this very epics list query) now return any.
Let’s add some typings. Server functions are just functions. They take a single object argument, and if the server function has defined an input, then that argument will have a data property for that input. That’s a lot of words to say what we already know. When we call a server function, we pass our argument like this:
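getEpicsList({ data: page });

(Here page is just whatever argument the function’s input expects; the shape of the call is the point.)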
We’ve constrained our server function to an async function which takes a data prop on its object arg, and we’ve used that to statically type the argument. This is good, but we get an error when we use this on server functions which have no arguments.
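For reference, a sketch of that constraint (the real signature differs once the overloads show up later, but this is the shape of the idea); note that arg is required here, which is exactly why zero-argument server functions complain:

const refetchedQueryOptions = <TArg, TResult>(
  queryKey: unknown[],
  serverFn: (opts: { data: TArg }) => Promise<TResult>,
  arg: TArg
) => ({
  queryKey: [...queryKey, arg],
  // the query result is now properly typed as TResult
  queryFn: (): Promise<TResult> => serverFn({ data: arg }),
  meta: {
    __revalidate: { serverFn, arg },
  },
});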
If you’re normal, you’re probably happy with that. And you should be. But if you’re weird like me, you might wonder if you can’t make it perfect. Ideally it would be cool if we could pass a statically typed argument when using a server function that takes an input, and when using a server function with no input, pass nothing.
TypeScript has a feature exactly for this: overloaded functions.
This post is already far too long, so I’ll post the code, and leave deciphering it as an exercise for the reader (and likely a future blog post).
The parameter is now checked. It errors with the wrong type.
...refetchedQueryOptions(["epics", "list"], getEpicsList, "")
// Argument of type 'string' is not assignable to parameter of type 'number'.
It errors if you pass no argument as well.
...refetchedQueryOptions(["epics", "list"], getEpicsList)
// Argument of type 'RequiredFetcher<undefined, (page: number) => number, Promise<{ id: number; name: string; }[]>>' is not assignable to parameter of type '"This server function requires an argument!"'.
That last error isn’t the clearest, but if you read to the end you get a pretty solid hint as to what’s wrong, thanks to this dandy little helper.
type ValidateServerFunction<Provided, Expected> = Provided extends Expected
  ? Provided
  : "This server function requires an argument!";
And it works with a server function that takes no arguments. Again, a full explanation of this TypeScript will have to wait for a future post.
Concluding Thoughts
Single flight mutations are a great tool for speeding up updates within your web app, particularly when it’s a performance boost to avoid follow-up network requests for data after an initial mutation. Hopefully this post has shown you the tools needed to put it all together.
I feel like “streaming” will be more and more of a concept in 2026. We can stream HTML, but rarely do. With AI APIs these days, we’re seeing streaming much more commonly. We want to see that LLM kick out an answer word-by-word, as the perceived performance is better that way. What about a JSON response, can we stream that? Seems harder since JSON needs to be fully valid to be decoded. Krasimir Tsonev shows how it can be done.
Last time, we learned about the difference between SafeArrayAccessData and SafeArrayAddRef. I noted that SafeArrayAddRef was a latecomer that was added to an existing API and that one of the design concerns was to minimize the impact upon existing code. When extending an existing API, a major concern is what the new feature means for people who were using the earlier version of the API.
One design principle for extending an API is “pay for play”: Programs can call the new API to get access to new features, but programs that choose not to do so are unaffected, and the old code continues to work as it did before. It is acceptable to add additional requirements for people who want to use the new feature, such as, “If you intend to reverse the polarity of a widget, you must pass the AllowPolarityReversal flag when creating the widget.” Pre-existing code won’t pass that flag, but they also won’t be trying to reverse the polarity.
For SAFEARRAY, the story is a little trickier because the code that created the SAFEARRAY is not the code that is calling SafeArrayAddRef. Therefore, you cannot impose new requirements on the caller of SafeArrayCreate because you don’t control that code. The whole point of SafeArrayAddRef is to allow a function to defend itself from malicious behavior in the code that created the SAFEARRAY.
Another design issue is that when you add a new feature to an API, you want to make it easy for people who are already using that API to use the feature. If somebody asks, “How do I solve this problem?”, “Make these major changes to the underlying architecture of your program” will not be received well.
In the case of SAFEARRAY, the problem is compounded by the fact that the SAFEARRAY structure is itself public, so we have to assume that people are accessing the members in it without going through the wrapper functions like SafeArrayGetDim or SafeArrayGetElemsize. There is nowhere to put the reference count without breaking those people.
So how do you record a reference count when there is nowhere to record a reference count?
You have to maintain the reference counts externally.
The system maintains two process-wide tables to track reference counts, one for data block reference counts and another for array descriptor reference counts. Each table is indexed by the pointer to the data block or array descriptor, and the value is the reference count, if not zero. If a reference count drops to zero, then the entry is erased from the table. That way, the tables contain reference counts only for actively-referenced items.
Okay, that ends our digression. Next time, we’ll try to answer a customer’s question about SafeArrayAddRef.
Once upon a time, there was SAFEARRAY, the representation of an array used by IDispatch and intended for use as a common mechanism for interchange of data between native code and scripting languages such as Visual Basic. You used the SafeArrayCreate function to create one, and a variety of other functions to get and set members of the array.
On the native side, it was cumbersome having to use functions to access the members of an array, so there is also the SafeArrayAccessData function that gives you a raw pointer to the array data. This also locks the array so that the array cannot be resized while you still have the pointer, because resizing the array could result in the memory moving. The idea here is that you lock the data for access, do your bulk access, and then unlock it. As an additional safety mechanism, an array cannot be destroyed while it is locked.
This was the state of affairs for a while, until the addition of the SafeArrayAddRef function in the Windows XP timeframe. I don’t know the exact story, but from the remarks in the documentation, it appears to have been introduced to protect against malicious scripts.
Suppose you’re writing a scripting engine, and a script performs an operation on an array. Your scripting engine represents this as a SAFEARRAY, and your engine starts operating with the array. You then issue a callback back into the script (for example, maybe you are the native side of a for_each-type function), and inside the callback, the script tries to destroy the array. After the callback returns, you have a use-after-free vulnerability in the scripting engine because it’s operating on an array that has been destroyed.
You could update the scripting engine to perform a SafeArrayAccessData on the array, thereby locking it and preventing the array from being resized or destroyed while the native code is using it. But that also means that the callback won’t be able to, say, append an element to the array. The script that the callback is running would encounter a DISP_E_ARRAYISLOCKED error when trying to append. If your scripting engine ignores errors from SafeArrayReDim, then the script’s attempt to extend the array silently fails, and that will probably break the internal script logic. As for destruction, if your script engine ignores errors from SafeArrayDestroy, then the array will be leaked. But if your scripting engine meticulously checks for those errors, then the script will get an unexpected exception.
For a failure to destroy a locked array, I guess the scripting engine could put the array in a queue of arrays whose destruction has been deferred, and then, I guess, check every few seconds to see if the array is safe to destroy? But for a failure to extend a locked array, the scripting engine is kind of stuck. It can’t “try again later” because the script expects the appended element to be present.
To solve the problem while creating minimal impact upon existing code, the scripting team invented SafeArrayAddRef. This is similar to SafeArrayAccessData in that it returns you a raw pointer to the array data, but it does not lock the array object. The array object can still be resized or destroyed successfully, thereby preserving existing semantics. What it does is add a reference to the array data (the same data that you received a pointer to). Only when the last reference is released is the data freed.
For a resize, that means that new memory is allocated, and the values are copied across, but the old memory is not freed until a corresponding number of SafeArrayReleaseData and SafeArrayReleaseDescriptor calls have been made. (The AddRef adds a reference to both the data and descriptor, and you have to release both of them in separate calls.)
Note that even though the memory is not freed, it is nevertheless zeroed out. This avoids problems with objects that have unique ownership like BSTR. If the memory hadn’t been zeroed out, then when the array is resized, there would be two copies of the BSTR, one in the new array data, and an abandoned one in the old array data. The code that called SafeArrayAddRef still has a pointer to the old array data. The new resized data might change the BSTR, which frees the old string, but the old data still has the BSTR and will result in a use-after-free.
Next time, a brief digression, before we use this information to answer a customer question about SafeArrayAddRef.
Bonus chatter: Note however that if the code that called SafeArrayAddRef writes to the old data, then any data it writes into that memory block is not cleaned up. So don’t write a BSTR or an IUnknown pointer or anything else that requires cleanup, because nobody will clean it up. (This is arguably a design flaw in SafeArrayAddRef, but what’s done is done, and you have to deal with it.)
// If there is an active primary Gadget, then do a bunch of stuff,
// but no matter what, make sure it is no longer active when finished.
var gadget = widget.GetActiveGadget(Connection.Primary);
if (gadget != null) {
    try {
        ⟦ lots of code ⟧
    } finally {
        widget.SetActiveGadget(Connection.Primary, null);
    }
}
One thing that is cumbersome about this pattern is that the cleanup code is far away from the place where the cleanup obligation was created, making it harder to confirm during code review that the proper cleanup did occur. Furthermore, you could imagine that somebody makes a change to the GetActiveGadget() call that requires a matching change to the SetActiveGadget(), but since the SetActiveGadget() is so far away, you may not realize that you need to make a matching change 200 lines later.
var gadget = widget.GetActiveGadget(Connection.Secondary);
if (gadget != null) {
    try {
        ⟦ lots of code ⟧
    } finally {
        widget.SetActiveGadget(Connection.Secondary, null);
    }
}
Another thing that is cumbersome about this pattern is that you may create multiple obligations at different points in the code execution, resulting in deep nesting.
var gadget = widget.GetActiveGadget(Connection.Secondary);
if (gadget != null) {
    try {
        ⟦ lots of code ⟧
        if (gadget.IsEnabled()) {
            try {
                ⟦ lots more code ⟧
            } finally {
                gadget.Disable();
            }
        }
    } finally {
        widget.SetActiveGadget(Connection.Secondary, null);
    }
}
Can we get scope_exit ergonomics in C#?
You can do it with the using declaration introduced in C# 8.0 and a custom class that we may as well call ScopeExit.
public ref struct ScopeExit
{
    public ScopeExit(Action action)
    {
        this.action = action;
    }

    public void Dispose()
    {
        action.Invoke();
    }

    Action action;
}
Now you can write
var gadget = widget.GetActiveGadget();
if (gadget != null) {
    using var clearActiveGadget = new ScopeExit(() => widget.SetActiveGadget(null));
    ⟦ lots of code ⟧
    if (gadget.IsEnabled()) {
        using var disableGadget = new ScopeExit(() => gadget.Disable());
        ⟦ lots more code ⟧
    }
}
Although many objects implement IDisposable so that you can clean them up with a using statement, it’s not practical to have a separate method for every possible ad-hoc cleanup that could be needed, and the ScopeExit lets us create bespoke cleanup on demand.¹
¹ There might be common patterns like “If you Open(), then you probably want to Close()” which could benefit from a disposable-returning method, but you still have to wrap the result.
public ref struct OpenResult
{
    public bool Success { get; init; }

    public OpenResult(Widget widget)
    {
        this.widget = widget;
        Success = widget.Open();
    }

    public void Dispose()
    {
        if (Success) {
            widget.Close();
        }
    }

    private Widget widget;
}
The Windows Runtime has interfaces IAsyncAction and IAsyncOperation<T> which represent asynchronous activity: The function starts the work and returns immediately, and then it calls you back when the work completes. Most language projections allow you to treat these as coroutines, so you can await or co_await them in order to suspend execution until the completion occurs.
There are also progress versions of these interfaces: IAsyncActionWithProgress<P> and IAsyncOperationWithProgress<T, P>. In addition to having a completion callback, you can also register a callback which will be called to inform you of the progress of the operation.
The usual usage pattern is
// C++/WinRT
auto operation = DoSomethingAsync();
operation.Progress([](auto&& op, auto&& p) { ⟦ ... ⟧ });
auto result = co_await operation;
// C++/CX
IAsyncOperationWithProgress<R^, P>^ operation = DoSomethingAsync();
operation->Progress = ref new AsyncOperationProgressHandler<R^, P>(
[](auto op, P p) { ⟦ ... ⟧ });
R^ result = co_await operation;
// C#
var operation = DoSomethingAsync();
operation.Progress += (op, p) => { ⟦ ... ⟧ };
var result = await operation;
// JavaScript
var result = await DoSomethingAsync()
.then(null, null, p => { ⟦ ... ⟧ });
The JavaScript version is not too bad: You can attach the progress to the Promise and then await the whole thing. However, the other languages are fairly cumbersome because you have to declare an extra variable to hold the operation, so that you can attach the progress handler to it, and then await it. And in the C++ cases, having an explicitly named variable means that it is no longer a temporary, so instead of destructing at the end of the statement, it destructs when the variable destructs, which could be much later.
Here’s my attempt to bring the ergonomics of JavaScript to C++ and C#.
Some time ago, I discussed custom dialog classes. You can specify that a dialog template use your custom dialog class by putting the custom class’s name in the CLASS statement of the dialog template. A customer tried doing that but it crashes with a stack overflow.
// Dialog template
IDD_AWESOME DIALOGEX 0, 0, 170, 62
STYLE DS_SHELLFONT | DS_MODALFRAME | WS_POPUP | WS_CAPTION
CAPTION "I'm so awesome"
CLASS "MyAwesomeDialog"
FONT 8, "MS Shell Dlg", 0, 0, 0x1
BEGIN
ICON IDI_AWESOME,IDC_STATIC,14,14,20,20
LTEXT "Whee!",IDC_STATIC,42,14,114,8
DEFPUSHBUTTON "OK",IDOK,113,41,50,14,WS_GROUP
END
// Custom dialog class procedure
// Note: This looks ugly but that's not the point.
LRESULT CALLBACK CustomDlgProc(HWND hwnd, UINT message,
                               WPARAM wParam, LPARAM lParam)
{
    if (message == WM_CTLCOLORDLG) {
        return (LRESULT)GetSysColorBrush(COLOR_INFOBK);
    }
    return DefDlgProc(hwnd, message, wParam, lParam);
}
void Test()
{
    // Register the custom dialog class
    WNDCLASS wc{};
    GetClassInfo(nullptr, WC_DIALOG, &wc);
    wc.lpfnWndProc = CustomDlgProc;
    wc.lpszClassName = TEXT("MyAwesomeDialog");
    RegisterClass(&wc);

    // Use that custom dialog class for a dialog
    DialogBox(hInstance, MAKEINTRESOURCE(IDD_AWESOME), hwndParent,
              CustomDlgProc);
}
Do you see the problem?
The problem is that the code uses the CustomDlgProc function both as a window procedure and as a dialog procedure.
When a message arrives, it goes to the window procedure. This rule applies regardless of whether you have a traditional window or a dialog. If you have a standard dialog, then the window procedure is DefDlgProc, and that function calls the dialog procedure to let it respond to the message. If the dialog procedure declines to handle the message, then the DefDlgProc function does some default dialog stuff.
Creating a custom dialog class means that you want a different window procedure for the dialog, as if you had subclassed the dialog. The custom window procedure typically does some special work, and then it passes messages to DefDlgProc when it wants normal dialog behavior.
If you use the same function as both the window procedure and the dialog procedure, then when the function (acting as a window procedure) passes a message to DefDlgProc, the DefDlgProc function will call the dialog procedure, which is also CustomDlgProc. That function doesn’t realize that it’s now being used as a dialog procedure (where it is expected to return TRUE or FALSE, depending on whether it decided to handle the message). It thinks it is still a window procedure, so it passes the message to DefDlgProc, and the loop continues until you overflow the stack.
The idea behind custom dialog classes is that you have some general behavior you want to apply to all the dialogs that use that class. For example, maybe you want them all to use different default colors, or you want them all to respond to DPI changes the same way. Instead of replicating the code in each dialog procedure, you can put it in the dialog class window procedure.
But even if you are using a custom dialog class, your dialog procedure should still be a normal dialog procedure. That dialog procedure is the code-behind for the dialog template, initializing the controls in the template, responding to clicks on the controls in the template, and so on.
Remember, Microspeak is not necessarily jargon exclusive to Microsoft, but it’s jargon that you need to know if you work at Microsoft.
When something has gone horribly wrong and requires immediate attention, one way to describe it is to say that it is on fire. The obvious metaphor here is that the situation is so severe that it is as if the office building or computer system was literally on fire.
Here are some citations I found.
I’ll be back in Redmond on Monday. Is anything on fire?
This person is just checking in to see if there are any emergencies.
I think the Nosebleed branch is still on fire.
This person is saying that they think that the Nosebleed branch is still in very bad shape. My sense is that being on fire is worse than being on the floor. If a branch is on the floor, then that probably means that there’s a problem with the build or release process. But if the branch is on fire, it suggests that they have identified some critical issue in the branch, and everybody is scrambling to figure it out and fix it.
While looking for citations, I found the minutes for a meeting titled “What’s on Fire Meetings”, which I guess is a regular meeting to report on whatever disaster is currently unfolding this time.
I even found some citations from my own inbox.
That’s my top item once I can wrap up the work I’m doing for the Nosebleed feature, but Nosebleed is always on fire.
Even the fires are on fire.
There is a channel on our team called “Fires” which is where people report on anything on fire and collaborate on putting out that fire. Putting out fires is the preferred way to say that someone is trying to fix whatever is on fire.
Bonus chatter: Note that this is not the same as saying that a person is “on fire”, which is slang for saying that they are doing exceptionally well.
What happens is that the 16-bit Windows kernel shuts down, and then the 32-bit virtual memory manager shuts down, and the CPU is put back into real mode, and control returns to win.com with a special signal that means “Can you start protected mode Windows again for me?”
The code in win.com prints the “Please wait while Windows restarts…” message, and then tries to get the system back into the same state that it was in back when win.com had been freshly-launched.
One of the things it has to do is to reset any command line options that had been passed to win.com. This is largely clerical work, but it is rather cumbersome because win.com was written in assembly language. And some global variables need to be reset back to the original values.
You might recall that .com files are implicitly given all of the remaining available conventional memory when they launch. Programs can release that memory back to the system if they want to make it available to other programs. In win.com’s case, it releases all the memory beyond its own image back to the system so that there is a single large contiguous block of memory for loading protected-mode Windows.
If somebody had allocated memory in the space that win.com had given up for protected-mode Windows, then conventional memory will be fragmented, and the “try to get the system back into the same state that it was in back when win.com had been freshly-launched” step is not successful, because the expected memory layout was “one giant contiguous block of memory”. In that case, win.com says, “Sorry, I can’t do what you asked” and falls back to a full reboot.
Otherwise, everything looks good, and win.com jumps back to the code that starts protected-mode Windows, and that re-creates the virtual machine manager, and then the graphical user interface launches, and the user sees that Windows has restarted.
Bonus chatter: A common trick in assembly language back in this era when you counted every byte was to take the memory that holds functions that will no longer be called and reuse them as uninitialized data. It’s free memory!
In the case of win.com, the original code reused the first bytes of the entry point as a global variable since the entry point executes only once. Once you get past the entry point, it’s dead code, so you can put a global variable there! Fortunately, the “fast-restart” case doesn’t jump all the way back to the entry point, so the fact that those instructions were corrupted is not significant.
Bonus bonus chatter: Otul Osan also noted that the fast-restart wasn’t perfect: If you try two fast-restarts in a row, the second one crashes. I wasn’t able to reproduce this. I was able to fast-restart four times in a row without incident. My guess is that some device driver did not reset itself properly, so when the system restarted, the second instance of the driver saw a slightly weird device, and the weirdness finally caught up to it at shutdown. (Maybe it corrupted some memory that didn’t cause problems until shutdown.)