[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

We published an edition of What You Need To Know about Modern CSS last year (2024), and for a while I really wasn’t sure if only a year later we’d have enough stuff to warrant a new yearly version. But time, and CSS, have rolled forward, and guess what? There is more this year than there was last. At least in this somewhat arbitrary list of “things Chris thinks are valuable to know that are either pretty fresh or have enjoyed a boost in browser support.”

Animate to Auto

What is this?

We don’t often set the height of elements that contain arbitrary content. We usually let elements like that be as tall as they need to be for the content. The trouble with that is we haven’t been able to animate from a fixed number (like zero) to whatever that intrinsic height is (or vice versa). In other words, animate to auto (or other sizing keywords like min-content and the like).

Now, we can opt-in to being able to animate to these keywords, like:

html {
  interpolate-size: allow-keywords;
  /* Now if we transition 
     "height: 0;" to "height: auto;" 
     anywhere, it will work */
}

If we don’t want to use an opt-in like that, alternatively, we can use the calc-size() function to make the transition work without needing interpolate-size.

.content {
  height: 3lh;
  overflow: hidden;
  transition: height 0.2s;
  
  &.expanded {
    height: calc-size(auto, size);
  }
}

Why should I care?

This is the first time we’ve ever been able to do this in CSS. It’s a relatively common need and it’s wonderful to be able to do it so naturally, without breaking behavior.

And it’s not just height (it could be any property that takes a size) and it’s not just auto (it could be any sizing keyword).

Support

Browser Support: Just Chrome.
Progressive Enhancement: Yes! Typically, this kind of animation isn’t a hard requirement, just a nice-to-have.
Polyfill: Not really. The old fallbacks include things like animating max-height to a beyond-what-is-needed value, or using JavaScript to measure the size off-screen and then doing the real animation to that number. Both suck.

Usage Example
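The embedded demo didn’t survive this text version, so here’s a minimal sketch of the opt-in approach described above; the .disclosure and .open class names are made up for illustration.

html {
  interpolate-size: allow-keywords;
}

.disclosure {
  height: 0;
  overflow: hidden;
  transition: height 0.3s ease;

  &.open {
    /* animates from 0 up to the content's intrinsic height */
    height: auto;
  }
}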

Popovers & Invokers

These are separate and independently useful things, and really rather HTML-focused, but it’s nice to show them off together as they complement each other nicely.

What is this?

A popover is an attribute you can put on any HTML element that essentially gives it open/close functionality. It will then have JavaScript APIs for opening and closing it. It’s similar-but-different to modals. Think of them more in the tooltip category, or something that you might want more than one of open sometimes.

Invokers are also HTML attributes that give us access to those JavaScript APIs in a declarative markup way.

Why should I care?

Implementing functionality at the HTML level is very powerful. It will work without JavaScript, be done in an accessible way, and likely get important UX features right that you might miss when implementing yourself.

Support

Browser Support: Popovers are everywhere, but invokers are Chrome only at the time of publication.

There are sub-features here though, like popover="hint", which has slightly less support so far.
Progressive Enhancement: Not so much. This type of functionality typically needs to work, so ensuring it does with a polyfill instead of handling multiple behaviors is best.
Polyfill: Yep! For both:

Popovers Polyfill
Invokers Polyfill

Usage Example

Remember there are JavaScript APIs for popovers also, like myPopover.showPopover() and secondPopover.hidePopover(), but what I’m showing off here is specifically the HTML invoker controls for them. There are also some alternative HTML controls (e.g. popovertarget="mypopover" popovertargetaction="show") which I suppose are fine to use as well? But something feels better to me about the more generic command invokers approach.
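Since the live demo isn’t included in this text version, here is a rough sketch of the invoker-controlled popover pattern described above; the id and button text are placeholders.

<button commandfor="my-popover" command="show-popover">Open</button>

<div id="my-popover" popover>
  <p>Hello from a popover.</p>
  <button commandfor="my-popover" command="hide-popover">Close</button>
</div>

Both buttons work with no JavaScript at all, which is the whole appeal.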

Also — remember popovers pair particularly well with anchor positioning which is another CSS modern miracle.

@function

What is this?

CSS has lots of functions already. Think of calc(), attr(), clamp(), perhaps hundreds more. They are actually technically called CSS value functions as they always return a single value.

The magic with @function is that now you can write your own.

@function --titleBuilder(--name) {
  result: var(--name) " is cool.";
}

Why should I care?

Abstracting logic into functions is a computer programming concept as old as computers themselves. It can just feel right, not to mention be DRY, to put code and logic into a single shared place rather than repeat yourself or complicate the more declarative areas of your CSS with complex statements.

Support

Browser Support: Chrome only.
Progressive Enhancement: It depends on what you’re trying to use the value for. If it’s reasonable, it may be as simple as:

property: fallback;
property: --function();
Polyfill: Not really. Sass has functions, but they are not based on the same spec and will not work the same.

Usage Example
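The live example didn’t make it into this text version, so here’s a minimal sketch of calling the --titleBuilder function defined above; the selector and argument are just for illustration.

.card::after {
  /* should resolve to: content: "Chris" " is cool."; */
  content: --titleBuilder("Chris");
}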

Other Resources

if()

What is this?

Conceptually, CSS is already full of conditional logic. Selectors themselves will match and apply styles if they match an HTML element. Or media queries will apply if their conditions are met.

But the if() function, surprisingly, is the first construct that exists solely for the purpose of applying logical branches.

Why should I care?

Like all functions, including custom @functions like above, if() returns a single value. It just has a syntax that might help make for more readable code and potentially prevent certain types of code repetition.

Support

Browser Support: Chrome only.
Progressive Enhancement: It depends on the property/value you are using it with. If you’re OK with a fallback value, it might be fine to use.

property: fallback;
property: if(
  style(--x: true): value;
  else: fallback;
);
Polyfill: Not really. CSS preprocessors tend to have logical constructs like this, but they will not re-evaluate based on dynamic values and DOM placement and such.

Usage Example

Baking logic into a single value like this is pretty neat!

.grid {
  display: grid;
  grid-template-columns:
    if(
      media(width > 900px): repeat(auto-fit, minmax(250px, 1fr));
      media(width > 600px): repeat(3, 1fr);
      media(width > 300px): repeat(2, 1fr);
      else: 1fr;
    ); 
}

The syntax is a lot like a switch statement with as many conditions as you need. The first match wins.

if(
  condition: value;
  condition: value;
  else: value;
)

Conditions can be:

  • media()
  • supports()
  • style()

field-sizing

What is this?

The new field-sizing property in CSS is for creating form fields (or any editable element) that automatically grow to the size of their contents.

Why should I care?

This is a need that developers have been solving with JavaScript since forever. The most classic example is the <textarea>, which makes a lot of sense to grow as large as the user entering information into it needs, without them having to explicitly resize it (which is difficult at best on a small mobile screen). But resizing inline inputs can be nice too.

Support

Browser Support: Chrome, and it looks to be coming soon to Safari.
Progressive Enhancement: Yes! This isn’t a hard requirement usually, but more of a UX nicety.
Polyfill: There is some very lightweight JavaScript to replicate this if you want to.

Usage Example
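The demo embed is missing here, but typical usage is nearly a one-liner; the min/max clamps are an optional addition of my own.

textarea {
  field-sizing: content;
  min-height: 2lh;  /* don't collapse when empty */
  max-height: 12lh; /* stop growing eventually */
}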

Custom Selects

What is this?

Styling the outside of a <select> has been decently possible for a while, but when you open it up, what the browser renders is an operating-system specific default. Now you can opt-in to entirely styleable select menus.

Why should I care?

Support

Browser Support: Chrome only.
Progressive Enhancement: 100%. It just falls back to an unstyled <select>, which is fine.
Polyfill: Back when this endeavor was using <selectlist> there was one, but in my opinion the progressive enhancement story is so good you don’t need it.

Usage Example

First you opt in, then you go nuts.

select,
::picker(select) {
  appearance: base-select;
}
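From there, as a rough illustration (these particular declarations are mine, not from the article), you can style the pieces of the open picker like any other box:

::picker(select) {
  border: 1px solid #ccc;
  border-radius: 0.5rem;
  padding: 0.25rem;
}

select option:checked {
  font-weight: bold;
}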

text-wrap

What is this?

The text-wrap property in CSS allows you to instruct the browser that it can and should wrap text a bit differently. For example, text-wrap: balance; will attempt to have each line of text as close to the same length as possible.

Why should I care?

This can be a much nicer default for large font-size elements like headers. It can also help with single-word-on-the-next-line orphans, but there is also text-wrap: pretty; which handles that too and is designed for smaller, longer text as well, creating better-reading text. Essentially: better typography for free.

Support

Browser Support: balance is supported across the board, but pretty is only in Chrome and Safari so far.
Progressive Enhancement: Absolutely. As important as we might agree typography is, without these enhancements the text is still readable and accessible.
Polyfill: There is one for balance.

Usage Example
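The example embed is missing from this text version; a typical setup (my own sketch) looks like this:

h1, h2, h3, blockquote {
  text-wrap: balance;
}

p, li, figcaption {
  text-wrap: pretty;
}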

Resources

linear() easing

What is this?

I think this one is a little confusing because linear as a keyword for transition-timing-function or animation-timing-function kinda means “flat and boring” (which is sometimes what you want, like when changing opacity, for instance). But this linear() function actually means you’re about to do an easing approach that is probably extra fancy, like having a “bouncing” effect.

Why should I care?

Even the fancy cubic-bezier() function can only do a really limited bouncing effect with an animation timing, but the sky is the limit with linear() because it takes an unlimited number of points.

Support

Browser Support: Across the board.
Progressive Enhancement: Sure! You could fall back to a named easing value or a cubic-bezier().
Polyfill: Not that I know of, but if fancy easing is very important to you, JavaScript libraries like GSAP have this covered in a way that will work in all browsers.

Usage Example

.bounce {
  animation-timing-function: linear(
    0, 0.004, 0.016, 0.035, 0.063, 0.098, 0.141 13.6%, 0.25, 0.391, 0.563, 0.765,
    1, 0.891 40.9%, 0.848, 0.813, 0.785, 0.766, 0.754, 0.75, 0.754, 0.766, 0.785,
    0.813, 0.848, 0.891 68.2%, 1 72.7%, 0.973, 0.953, 0.941, 0.938, 0.941, 0.953,
    0.973, 1, 0.988, 0.984, 0.988, 1
  );
}

Resources

shape()

What is this?

While CSS has had a path() function for a while, it only took a 1-for-1 copy of the d attribute from SVG’s <path> element, which was forced to work only in pixels and has a somewhat obtuse syntax. The shape() function is basically that, but fixed up properly for CSS.

Why should I care?

The shape() function can essentially draw anything. You can apply it as a value to clip-path, cutting elements into any shape, and do so responsively and with all the power of CSS (meaning all the units, custom properties, media queries, etc). You can also apply it to offset-path, meaning placement and animation along any drawable path. And presumably soon shape-outside as well.

Support

Browser Support: It’s in Chrome and Safari and flagged in Firefox, so everywhere fairly soon.
Progressive Enhancement: Probably! Cutting stuff out and moving stuff along paths is usually the stuff of aesthetics and fun, and falling back to less fancy options is acceptable.
Polyfill: Not really. You’re better off working on a good fallback.

Usage Example

Literally any SVG path can be converted to shape().

.arrow {
  clip-path: shape(
    evenodd from 97.788201% 41.50201%, 
    line by -30.839077% -41.50201%, 
    curve by -10.419412% 0% with -2.841275% -3.823154% / -7.578137% -3.823154%, 
    smooth by 0% 14.020119% with -2.841275% 10.196965%, 
    line by 18.207445% 24.648236%, hline by -67.368705%, 
    curve by -7.368452% 9.914818% with -4.103596% 0% / -7.368452% 4.393114%, 
    smooth by 7.368452% 9.914818% with 3.264856% 9.914818%, 
    hline by 67.368705%, line by -18.211656% 24.50518%, 
    curve by 0% 14.020119% with -2.841275% 3.823154% / -2.841275% 10.196965%, 
    curve by 5.26318% 2.976712% with 1.472006% 1.980697% / 3.367593% 2.976712%, 
    smooth by 5.26318% -2.976712% with 3.791174% -0.990377%, line by 30.735919% -41.357537%, 
    curve by 2.21222% -7.082013% with 1.369269% -1.842456% / 2.21222% -4.393114%, 
    smooth by -2.21222% -7.082013% with -0.736024% -5.239556%, 
    close
  );
}

The natural resizability and more readable syntax are a big advantage over path().

More Powerful attr()

What is this?

The attr() function in CSS can pull the string value of an attribute off the matching HTML element. So with <div data-name="Chris"> I can do div::before { content: attr(data-name); } to pull off and use “Chris” as a string. But now, you can apply types to the values you pull, making it a lot more useful.

Why should I care?

Things like numbers and colors are a lot more useful to pluck off and use from HTML attributes than strings are.

attr(data-count type(<number>))

Support

Browser Support: Chrome only.
Progressive Enhancement: It depends on what you’re doing with the values. If you’re passing through a color for a little aesthetic flourish, sure, it can be an enhancement that falls back to something else or nothing. If it’s crucial layout information, probably not.
Polyfill: Not that I know of.

Usage Example
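The demo isn’t included in this text version, so here’s a hedged sketch of typed attr(); the data-* attributes and markup are invented for illustration.

/* assumes markup like: <div class="swatch" data-hue="220" data-size="3"> */
.swatch {
  background: oklch(70% 0.15 attr(data-hue type(<number>), 0));
  width: calc(attr(data-size type(<number>), 1) * 2rem);
}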

Reading Flow

What is this?

There are various ways to change the layout such that the visual order no longer matches the source order. The new reading-flow property allows us to continue to do that while updating the behavior such that tabbing through the elements happens in a predictable manner.

Why should I care?

For a long time we’ve been told: don’t re-order layout! The source order should match the visual order as closely as possible, so that tabbing focus through a page happens in a sensible order. When you mess with the visual order and not source order, tabbing can become zig-zaggy and unpredictable, even causing scrolling, which is a bad experience and a hit to accessibility. Now we can inform the browser that we’ve made changes and to follow a tabbing order that makes sense for the layout style we’re using.

Support

Browser Support: Chrome only.
Progressive Enhancement: Not particularly. We should probably not be re-ordering layout wildly until this feature is more safely supported across all browsers.
Polyfill: No, but if you were so inclined you could (hopefully very intelligently) update the tabindex attribute of the elements to a sensible order.

Usage Example

.grid {
  reading-flow: grid-rows;
}

A grid layout is perhaps one of the most common things to re-order, and having the tabbing order follow the rows after re-arranging is sensible, which is what the above line of code does. But you’ll need to set the value to match what you are doing. For instance, if you are using flexbox layout, you’d likely set the value to flex-flow. See MDN for the list of values.

Resources

Stuff to Keep an Eye On

  • “Masonry” layout, despite having different preliminary implementations, is not yet finalized, but there is enough movement on it that it feels like we’ll see it get sorted out next year. The most interesting development at the moment is the proposal of item-flow and how that could not only help with Masonry but bring other layout possibilities to layout mechanisms beyond grid.
  • The CSS function random() is in Safari and it’s amazing.
  • The CSS property margin-trim is super useful and we’re waiting patiently to be able to use it in more than just Safari.
  • The sibling-index() and sibling-count() functions are in Chrome and, for one thing, are really useful for staggered animations (see the sketch after this list).
  • For View Transitions, view-transition-name: match-element; is awfully handy as it prevents us from needing to generate unique names on absolutely everything. Also — Firefox has View Transitions in development, so that’s huge.
  • We should soon be able to use calc() to multiply and divide values with units (instead of requiring the second operand to be unitless), without needing a hack.
  • We never did get “CSS4” (Zoran explains nicely) but I for one still think some kind of named versioning system would be of benefit to everyone.
  • If you’re interested in a more straightforward list of “new CSS things” for say the last ~5 years, Adam Argyle has a great list.
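To picture the sibling-index() staggering mentioned above, here is a minimal sketch of my own (the class name, keyframes, and timings are invented):

@keyframes fade-in {
  from { opacity: 0; }
}

.list-item {
  animation: fade-in 0.5s both;
  /* each sibling starts a little later than the previous one */
  animation-delay: calc(sibling-index() * 100ms);
}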

Great Stuff to Remember

[syndicated profile] acoup_feed

Posted by Bret Devereaux

Hey folks, Fireside this week! Next week we should be back to start looking at the other half of labor in the peasant household, everything that isn’t agriculture. Also, here are some cats:

Catching that perfectly timed Percy-yawn, while Ollie (below) is doing his best Percy impression with those narrowed eyes.

For this week’s musing, I want to address something that comes up frequently in the comments, particularly any time we discuss agriculture: the ‘Malthusian trap.’ Now of course to a degree the irony of addressing it here is that it will still come up in the comments because future folks raising the point won’t see this first, but at least it’ll be written somewhere that I can refer to.

To begin, in brief, the idea of a Malthusian trap derives from the work of Thomas Robert Malthus (1766-1834) and his work, An Essay on the Principle of Population (1798). In essence the argument goes as follows (in a greatly simplified form): if it is the case that the primary resources to sustain a population grow only linearly, but population grows exponentially, then it must be the case that population will, relatively swiftly, approach the limits of resources, leading to general poverty and immiseration, which in turn provide the check that limits population growth.

As an exercise in logic Malthus’ point is inescapable: if you accept his premises and run the experiment long enough you must reach his conclusion. In short, given an exponentially growing population and given resources that only grow linearly and given an infinite amount of time, you have to reach the Malthusian ‘trap’ of general poverty and population checked only by misery. So far as that goes, fine.
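Put in symbols (my own gloss, not Malthus’s notation), the logic is just that a geometric sequence eventually overtakes any arithmetic one:

$$P(t) = P_0 (1+r)^t, \qquad R(t) = R_0 + ct, \qquad \frac{R(t)}{P(t)} \to 0 \ \text{as} \ t \to \infty,$$

so resources per person must eventually fall to the subsistence floor, given enough time and unchanged premises.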

The problem is assuming any of those premises were generally correct in any given point in history.

I find this comes up whenever I point out that certain social and political structures – the Roman Empire most notably – seem to have produced better economic conditions for the broad population or that other structures – Sparta, say – produced worse ones: someone rolls in to insist that because the Malthusian trap is inevitable the set of structures doesn’t matter, as a better society will just produce an equally miserable outcome shortly thereafter with a larger population. And then I respond that Malthus is not actually always very useful for understanding these interactions, which prompts disbelief because – look just above – his logic is airtight given his premises and his premises are at least intuitive.

Because here’s the thing: Malthus was very definitely and obviously wrong. Malthus was writing as Britain (where he wrote) was beginning to experience the initial phases of the demographic transition, which begins with a period of very rapid population growth as mortality declines but birth rates remain mostly constant. Malthus generalizes those trends, but of course those trends do not generalize; to date they have happened exactly once in every society where they have occurred. Instead of running out of primary resources, world population is expected to peak later this century around 10.5 billion and we already can grow enough food for 10.5 billion people. The next key primary resource is energy and progress on renewable energy sources is remarkable; at this point it seems very likely that we will have more power-per-person available at that 10.5 billion person peak than we do today. Living standards won’t fall, they’ll continue to rise, assuming we avoid doing something remarkably foolish like a nuclear war. Even climate change – which is a very real problem – will only slow the rate of improvement under most projections, rather than result in an actual decline.

So while Malthus’ logic is ironclad and his premises are intuitive, as a matter of fact and reality he was wrong. Usefully wrong, but wrong. The question becomes why he was wrong. And the answer is that basically all of his premises are at least a little wrong.

The first, as we’ve noted, is that Malthus is extrapolating out a rate of population growth based on an unusual period: the beginning of rapid growth in the second stage of the demographic transition – and then he is extrapolating that pattern out infinitely in time in every direction. And that is a mistake, albeit an easy one to make: to assume that the question of population under agrarian production is an effectively infinite running simulation which has already reached (or very soon will reach) stability.

Here’s the thing (this is a very rough chronology): human beings (Homo sapiens) appeared about 300,000 years ago. We started leaving the cradle of Africa around 130,000 years ago, more or less and only filled out all of the major continents about 15,000 years ago. The earliest beginnings of agriculture are perhaps 20,000 years old or so, but agriculture reached most places in the form Malthus would recognize it much later. Farming got to Britain about 6,500 years ago. Complex states with large urban populations are 5,000 or so years old. Large sections of the American Great Plains and the Eurasian Steppe were grazing land until the last 150 years.

In short, it is easy to assume, because human lives are so short, that the way we have been living – agrarian societies – are already effectively ‘infinitely’ old. But we’re not! Assuming we do not nuke ourselves or cook the planet, in the long view pre-industrial agriculture will look like a very brief period of comparatively rapid development between hundreds of thousands of years of living as hunter-gatherers and whatever comes after now. To Malthus, whose history could stretch no further back than the Romans and no further forward than the year in which he wrote, his kind of society seemed to have existed forever. It seemed that way to the Romans too. But we’re in a position to see both before agrarian economies and also after them; we’re not smarter, we just have the luck of a modestly better vantage.1

In short, while we might assume that, given infinite time, exponential population growth will outpace any gains made to production, you shouldn’t assume infinite time, because we are actually dealing with a very finite amount of time. Farmers, whose demographics concern us here, appear around 20,000 years ago and begin filling up the Earth, spreading out to bring new farmland under the plow (displacing, often violently, lower population density societies as they did so) and that process was arguably nearing completion but not yet complete when the second agricultural and first industrial revolutions fundamentally changed the basis of production. As we’ve discussed, estimates of global population in the deep past are deeply fraught, but there is general agreement that population globally has increased more or less continuously since the advent of farming; it never stalled out at any point. In short, the Malthusian long run is so long that it almost doesn’t matter.

But if we limit our view to a specific region or society, that changes things. We certainly do see, if not Malthusian traps, what we might term ‘Malthusian interactions’ apparent in history. Rising population density and trade connectivity help spread disease, which leads to major downward corrections in population like the Antonine Plague, the Plague of Justinian, the Black Death and the diseases of the Columbian Exchange. Notably though, these sudden downward corrections are at best only somewhat connected to population growth and resource scarcity: lower nutrition may play a role, but travel, trade lanes, high density cities and exposure to novel pathogens seems to play a larger role. It’s not clear that something like the Black Death would have been dramatically less lethal if the European population were 10 or 15% smaller; it seems quite clear the diseases of the Columbian exchange cared very little for how well fed the populations they devastated were. Still, we see the outline of what Malthus might expect: downward pressure on wages before the population discontinuity and often upward pressure afterwards (most clearly visible with the Black Death in Europe).

So does Malthus rule the ‘small print’ as it were? Perhaps, but not always. For one, it is possible, even in the pre-modern world, to realize meaningful per capita gains in productivity due to new production methods like new farming techniques. It is also possible for greater connectivity through trade to enable greater production by comparative advantage. It is also possible for capital accumulation in things like mills or draft animals to generate meaningful increases in production. And of course some political and economic regimes may be more or less onerous for the peasantry. Any of these things moving in the right direction can effectively create some ‘headroom’ in production and resources. Some of that ‘headroom’ is going to get extracted by the tiny number of elites at the top of these societies, but potentially not all of it.

This is what I often refer to as a society moving between equilibria (a phrasing not original to me), from a stable condition of lower production (a low equilibrium) to a stable condition of higher production (a high equilibrium).

Now, when just thinking about food production, the Malthusian interaction ought to catch up with us in the long run. The population increases, but the available land supply cannot keep pace – new lands brought under the plow are more marginal than old lands and so on – and so the surplus food per person steadily declines as the population grows until we’re back where we started. Except there are two problems here.

The first is that this can take a long time even in a single society, region or state because even under ideal nutrition standards, these societies increase in population slowly compared to the rapid sort of exponential growth Malthus was beginning to see in the 1700s. It can take so long that exogenous shocks – invasion, plague, or new technology enabling a new burst of ‘headroom’ – arrive before the ceiling is reached and growth stops. Indeed, given the trajectory of pre-modern global population, that last factor must have happened quite a lot, since even the population of long-settled areas never quite stabilizes in the long term.

All of which is to say, in the time frame that matters – the time scale of states, regimes, economic systems and so on, measured in centuries not millennia – some amount of new ‘headroom’ might be durable and indeed we know it ended up being so, lasting long enough for us to get deep enough into the demographic transition that we could put Malthus away almost entirely.

The second thing to note is that not all material comforts are immediately related to survival and birth rates. To take our same society where some innovation has enabled increased production: the population rises, but no new land enters cultivation. That creates a segment of the population who can be fed, but who need not be farmers: they can do other things. Of course in actual pre-modern societies, it is mostly the elite who decide what other things these fellows do and many of those things (warfare, monumental construction, providing elite extravagance) do very little for the common folks.

But not always. Sometimes that new urban population is going to make stuff, stuff which might flow to consumers outside of the elite. We certainly seem to see this with sites of large-scale production of things like Roman coarseware pottery. Or, to take something from my own areas, it is hard not to notice that the amount of worked metal we imagine to be available for regular people for things like tools seems to rise as a function of time. Late medieval peasants do seem to have more stuff than early medieval or Roman peasants in a lot of cases. Wages – either measured in silver or as a ‘grain wage’ – may not be going up, but it sure seems like some things end up getting more affordable because there are more people making them.

And of course some of that elite investment might also be generally useful. Of course as a Roman historian, the examples of things like public baths and aqueducts, which provided services available not merely to the wealthy but also the urban poor, spring immediately to mind. And so even if the amount of grain available per person has stayed the same, the number of non-farmers as a percentage of the society has increased, making non-grain amenities easier for a society to supply. And naturally, social organization is going to play a huge role in the degree to which that added production does or does not get converted into amenities for non-elites.

In short it is possible for improvements to provide quality of life improvements even if a new Malthusian ceiling is reached. It is the difference between getting 3,000 calories in a wood-and-plaster building with a terracotta roof, a good collection of coarseware pottery and clean water from an aqueduct versus getting 3,000 calories in a wood-and-mud hut with a thatched roof, no pottery at all and having to pump water at the local well. In a basic Malthusian analysis, these societies are the same, but the lived experience is going to be meaningfully different.

Notionally, of course, you might argue that if population continued to rise we’d eventually reach the end of those fixed resources too: we’d run out of clay and metal ores and fresh water sources and so on, except that of course there are 8.2 billion of us and we haven’t yet managed to run out – or even be seriously constrained – by any of those things. We haven’t even managed to run out of oil or coal and again, at the rate at which renewable energy technology is advancing, it looks like we may never run out of oil, so much as it just won’t be worth anyone’s time pulling the stuff out of the ground.2

None of which is to say that Malthus is useless. Malthusian interactions do occur historically. But they do not always occur because the sweep of history is not infinitely long and developments which produce significant carrying capacity ‘headroom’ actually happen, on balance, somewhat faster than societies manage to reach the limit of that capacity.

Ollie gazing gloriously into the sun of a new day, while Percy, in shadow, plots his downfall.

On to Recommendations:

First off, the public classics project Peopling the Past has turned five! Congratulations to them. Peopling the Past runs both a blog and a podcast, both highlighting the ways that scholars, especially early career scholars, study people in the (relatively deep) past, with an emphasis on highlighting interesting work and the methods it uses. It’s a great project to follow if you want a sense of how we know things about the past and the sort of work we continue to do to understand more, with an especially strong focus on archaeology.

Meanwhile over on YouTube and coinciding a bit with our discussion of Malthus, Angela Collier has a video on why “dyson spheres are a joke,”3 in the sense that they were quite literally proposed by Freeman J. Dyson as a joke, a deliberate ‘send up’ of the work of some of his colleagues he found silly, rather than ever being a serious suggestion for science fiction super-structures.

Where this cuts across our topic is that Dyson, writing in 1960, explicitly cites “Malthusian pressures” as what would force the construction of such a structure and it serves as a useful reminder that until well into the 1980s and 1990s, there were quite a lot of ‘overpopulation’ concerns and it was common to imagine the future as involving extreme overpopulation and resource scarcity. I wouldn’t accuse Dyson of this view (he is, as noted, writing a paper as satire), but I think it is notable that these panics continued substantially on the basis of assumptions that the demographic transition – which was already pretty clearly causing population growth in Europe to begin to slow significantly by the 1950s and 1960s – was, in effect, a ‘white people only’ phenomenon, fueling often very racially inflected fears about non-white overpopulation. You can see this sort of racist-alarmist-panic pretty clearly in Paul Ehrlich’s The Population Bomb (1968), appropriately skewered in the If Books Could Kill episode on it.

Of course, as noted, what actually happened is that the demographic transition does not care about race or racists and happens to basically all societies as they grow wealthier and more educated – indeed, it has often happened faster in countries arriving to affluence late – with the result that it now appears that the ‘population bomb’ will never happen.

For this week’s book recommendation, I am going to recommend Rebecca F. Kennedy, C. Sydnor Roy and Max L. Goldman, Race and Ethnicity in the Classical World: An Anthology of Primary Sources in Translation (2013). Students often ask questions like ‘what did the Greeks and Romans think about race?’ and the complicated answer is they thought a lot of things. That can come as a surprise to moderns, as we’re really used to the cultural hegemony of ‘scientific racism’ and the reactions against it. But it is in fact somewhat unusual that a single theory of race – as unfounded in actual reality as all of the others – is so dominant globally as an ideology that people either hold or push against. Until the modern period, you were far more likely to find a confusing melange of conflicting theories (advanced with varying degrees of knowledge or ignorance of distant peoples) all presented more or less equally. Consequently, the Greeks and Romans didn’t think one thing about race, but had many conflicting ideas about where different peoples fit and why.

That makes an anthology of sources in translation an ideal way to present the topic and that is what Kennedy, Roy and Goldman have done here. This is very much what it says ‘on the tin’ – a collection of translated primary sources; the editorial commentary is kept quite minimal and the sources do largely speak for themselves. The authors set out roughly 200 different passages – some quite short, some fairly long – from ancient Greek and Roman writers that touch on the topic of race or ethnicity. Those passages are split in two ways: the book is divided into two sections, the first covering theories and the second covering regions. In the first section, the reader is given examples of some of the dominant strains of how Greeks and Romans thought about different peoples and what made them different – genealogical theories, environmental theories (people become different because they are molded by different places), cultural models and so on. The approach is a brilliant way to hammer home to the reader the lack of any single hegemonic model of ‘otherness’ in this period, while also exposing them to the most frequent motifs with which the ancients thought about different peoples.

Then the back two-thirds of the book proceed in a series of chapters covering specific regions. Presenting, say, almost 20 passages on the peoples of ‘barbarian’ Europe (Gaul, Germany, Britain) together also helps the reader get a real sense of both the range of ways specific regions were imagined but also common tropes, motifs and stereotypes that were common among ancient authors.

The translations in the volume are invariably top-rate, easy to read while being faithful to the original text. The editorial notes are brief but can help put passages in the context of the larger works they come from. The book also features reprints of a series of maps showing the world as described by the Greeks and Romans, a useful way to remember how approximate their understanding of distant places and their geographic relations could be. Overall, the volume is useful as a reference text – when you really need to find the right passage to demonstrate a particular motif, stereotype or theory of difference – but is going to be most valuable to the student of antiquity who wants to begin to really get a handle on the varied ways the Greeks and Romans understood ethnic and cultural difference.

Reviewers Behaving Badly

Sep. 19th, 2025 04:53 pm
[syndicated profile] in_the_pipeline_feed

I’ve never been on the receiving end of the sorts of manuscript peer reviews detailed in this article, but I know for sure that they’re out there. Examples shown include things like “This manuscript was not worth my time so I did not read it and recommend rejection”, “What the authors have done is an insult to science” and “This young lady is lucky to have been mentored by the leading men in the field”. Completely unacceptable.

The point of reviewing an article for publication is to offer constructive criticism, not ad hominem zingers. I mean, even if a manuscript is an insult to science, you can tell the authors what you think is wrong with it and why you don’t think it should be published. I realize that takes longer than insulting them, but there you have it. There really are worthless manuscripts out there, God knows, but just saying “This is worthless” doesn’t do anything to help solve the problem. Tell the authors, tell the editors what the problems are. And if the paper isn’t down in that category but (in your view) has significant problems, well, tell the authors what those problems are without mocking them.

As the article mentions, cultural factors can blur the line between plainspoken criticism and insults, but the examples above (and many others quoted) definitely cross the line in anybody’s culture. I have (for example) told authors that their paper is (in my view) not ready for publication until they cite some extremely relevant literature, but I didn’t go on to add my suspicions that they were avoiding doing so to try to make their own work look more novel, or perhaps that they were just too slapdash to have realized that there was any such precedent at all. At most, I might say something like this in the “Notes to the Editors” section that the authors don’t see. Another common problem is poor English on the part of the authors, but that doesn’t call for insults, either: just note that the paper needs polishing up, perhaps giving a few examples of what you mean. All of us who have had to get by in second (or third!) languages are familiar with the problem of sounding unintelligent in them, but just as we don’t want others to make that assumption about us, we shouldn’t turn around and do the same.

I’ve also given “Do not publish” reviews that are more “Do not publish here” when I think that a paper is not a good fit for the journal that it’s been sent to. Given today’s landscape, I think that the old-fashioned category of “Not fit to be published at all” is long dead - there are so many journals out there, many of them hungry for manuscripts and/or author fees, that anything at all can be published somewhere. But most of the time I end up recommending publication after some fixes (and I try not to be one of those reviewers who suggest something that means nine more months of experimental work).

It’s the anonymity that breeds the nastiness, for sure. I have said unkind things about published work here on the blog, of course, but by gosh I say it under my own name with my email address attached. You shouldn’t use reviewer anonymization, in my view, just to say things to authors that you wouldn’t tell them to their faces. As the article says, a key test is for authors in turn to ask themselves, when they get unfavorable comments, whether these things will help them revise their paper or strengthen their results, or whether all they do is shake their confidence (or piss them off, I will add myself). There may be some of each, naturally. But you shouldn’t be afraid to call out unprofessional comments with the editors themselves.

A lot of people who make it a point to talk about how they tell it like it is and how they aren’t afraid to hurt anyone’s feelings are actually trying to give themselves licenses to behave like assholes, because that’s the part that they really enjoy. We have our share of those in the research world, perhaps an outright statistical surplus. But that doesn’t mean we have to give them what they want.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

In our study of the case of the invalid handle error when a handle is closed while a thread is waiting on it, Frederic Benoit described a scenario in which a handle was duplicated and given to another thread. That other thread would operate on the handle and then close it. All of this while the main thread was waiting on the original handle. Is it legal to close a duplicate of a handle that another thread is waiting on?

Yes, this is legal.

The prohibition is against closing a handle while another thread is waiting on that handle. It’s a case of destroying something while it is in use. More generally, you can’t close a handle while another thread is reading from that handle, writing to that handle, using that handle to modify a process or thread’s priority, signaling that handle, whatever.

But if one thread is waiting on the original handle and another thread closes a duplicate, that is not the same handle, so you didn’t break the rule. In fact, closing a duplicate while another thread is waiting on the original is not an uncommon scenario.

Consider this: Suppose there is a helper object whose job it is to set an event handle when something has completed. For example, maybe it’s something similar to ID3D12Fence::SetEventOnCompletion. When you give it the event handle, the object has to duplicate the handle to ensure that it can still get the job done even if the caller later closes the handle. Eventually, the thing completes, and the object calls SetEvent() with the duplicated handle and then closes the duplicate.

Meanwhile, your main thread has done a WaitForMultipleObjects to wait on a block of signals.

There is nothing wrong with the helper object closing its private copy of the handle. The point is it didn’t close your copy of the handle, which means that the handle being waited on is not closed while the wait is in progress.
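To make the scenario concrete, here is a minimal sketch in C++ (not Raymond’s code, just an illustration of the pattern described above): the main thread waits on its own event handle while a helper thread signals and then closes a duplicate of it.

#include <windows.h>

DWORD WINAPI HelperThread(LPVOID param)
{
    HANDLE duplicate = static_cast<HANDLE>(param);
    // ... do whatever work the helper was asked to do ...
    SetEvent(duplicate);    // signal completion via the duplicate
    CloseHandle(duplicate); // closing the duplicate is fine; the original
                            // handle is still valid and still being waited on
    return 0;
}

int main()
{
    HANDLE original = CreateEventW(nullptr, TRUE, FALSE, nullptr);

    HANDLE duplicate = nullptr;
    DuplicateHandle(GetCurrentProcess(), original,
                    GetCurrentProcess(), &duplicate,
                    0, FALSE, DUPLICATE_SAME_ACCESS);

    HANDLE thread = CreateThread(nullptr, 0, HelperThread, duplicate, 0, nullptr);

    // Waiting on the original handle; the helper closing its duplicate
    // does not invalidate this wait.
    WaitForSingleObject(original, INFINITE);

    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    CloseHandle(original);
    return 0;
}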

The post Can I close a duplicate handle while I’m waiting on the original? appeared first on The Old New Thing.

[syndicated profile] tocutistocure_feed

Posted by docpark

A day in my life

Today, I got up at 630, made some coffee, and Zoomed in on our morning report at our main campus hospital. I have a patient there who I will be operating on tomorrow and wanted to know the current status of the patient. Once the report was over, I brushed my teeth and drove into my hospital which is a regional community hospital. I had an angiogram for a patient who was having a problem with blood flow to the leg. The cath lab was ready to go at 0800, and I was done by 0900, when I quickly ran over to my office for clinic. My fellow who was doing her community rotation helped me with the angiogram, and then came over to clinic where I saw 27 patients from 0900 to 1600hrs, two of them virtually. At 1600hrs, I had a hospital committee meeting where I am the chief of surgery for my community hospital, and at 1700, I was done. I ate a snack as I finished up some paperwork, and got on another Zoom meeting of my institute with over a hundred people to have an update meeting. I then drove to my golf club, got on the range and hit golf balls for 30 minutes, then got on the putting green and practiced for another 30 minutes. Then I drove home and had dinner with my family. I watched highlights of a football (American) game I recorded over the weekend, while reading email, then sat down to write this before I showered and went to bed.

This week I have 9 cases scheduled -several angiograms and interventions, a leg bypass, a few fistula creations, and a laparoscopic procedure (I’m one of the few vascular surgeons who do laparoscopic surgery). As I sit in bed, I listen to a journal article read to me by the voice of Gwyneth Paltrow (it’s AI) -I find it easier than actually reading the thing, and then I watch a few TikToks, read Reddit, and then go to sleep around 2300h. Cycle starts again in the morning, but will wake at 0530 to get to the main campus hospital to perform an operation. Arriving at main campus on a Wednesday, we have a combined grand rounds with the whole Surgery Department prior to operating.

Lifestyle
On weekends, when I am not on call, I still catch up on my patients from a report from my trainees or my nurse practitioner who makes rounds. I even do this sometimes when I’m out of town. Usually, I play competitive golf with members at my club -the more pressure the better. I find competition to be relaxing. Afterwards, I come home and write, read a little, and watch sports depending on the season or golf. My writing is sometimes work-related, sometimes in my journal. I kept a personal blog for over ten years on golfism.org. I am working on a novel -have been for a decade but not making much progress. I read mostly nonfiction but will listen to audiobooks of science fiction -currently marching through all the Dune prequels written by Brian Herbert, the son of Frank Herbert, the author of Dune and its original sequels. I am working on a grand unifying theory of circulation.

Procedures
As a vascular surgeon, I perform operations in the traditional open fashion, and endovascular procedures, which are done with imaging from x-ray. Occasionally, I do laparoscopic surgery. The open surgical procedures include operations on the aorta and its branches, and on arteries in the legs, arms, and neck. I also work on veins throughout the body. The patient arrives with a set of conditions, a prior history, and an examination, and given a problem, you evaluate it with various tests which can be blood tests, vascular tests, imaging studies like X-ray, Ultrasound, Vascular Lab Studies, CT scans and MRIs. This is called the workup -getting data to plan a procedure. Knowledge of anatomy and physiology and the biomechanics of flow is crucial to put together a plan that will be successful in treating the disease with a low complication rate and good durability. The procedures require a great deal of planning and often I include my colleagues within my department and those in other specialties to get their insights for making a plan that accounts for the reason for operation, plan for operation, contingency plans, recovery in the hospital, and healing outside the hospital. You can see some of these cases on my blog, vascsurg.me.

The image above shows a common femoral artery aneurysm presenting as a pulsatile mass in the right groin. The first image on the left is an arteriogram (a sketch of one) that I would get prior to surgery. The patient is also suffering from pain in the right leg due to a lack of blood flow because his superficial femoral artery (SFA) is occluded and his profunda femoral artery (PFA) is open but has a blockage at its origin where the aneurysm ends. I plan the surgery and execute it. During surgery, things may pop up -good things like finding an otherwise pristine SFA filled with plaque. Removing the plaque, it becomes a great conduit for replacing the aneurysm and avoids using an expensive graft which can become infected -your own tissues fight off infection better than a graft does.

In the F1 Movie, Brad Pitt’s character describes a sense of pure driving, being in the flow, being completely at peace on the road. The best moments in surgery, I reach a flow state where actions follow one after the other. It’s a form of spiritual ecstasy, to be completely focused and present. Even better is having the patient do well -to be able to walk without pain and the fear of possibly losing a leg or dying.

Who should not do vascular surgery. By definition, anyone not trained in vascular surgery. Successful vascular surgeons come in all shapes and sizes, but they share common traits -grit, focus, some intelligence, and hand-eye coordination. That would mean those who give up easily, have trouble with focus, are unintelligent, and have poor dexterity should not go into vascular surgery. The saddest cases are when the desire to be something does not match up with the reality. It is possible for non-vascular surgeons to make a living doing a focused practice around varicose veins, for example, but a good vascular surgeon is hard to create. Also, you should not do this for money or prestige; there are easier ways to get money or prestige.

Who should go into vascular surgery. Anyone who thinks they might like it should certainly look into it. The best way is to directly observe a vascular surgeon at work. That is the whole purpose of the rotations in medical school. Sadly, many medical schools do not offer much time in a surgery rotation and vascular surgery exposure is inconsistent. Our society has been working hard for over a decade to improve this and we are seeing it in the excellent applicants to our training programs. The best candidates are driven people with a track record of academic excellence, but the qualities that make a good surgeon are harder to define. Desire alone is insufficient and sadly academic excellence, while it will get you in the door, doesn’t predict who will be a great surgeon. There has to be grit -an ability to persist despite hardship. There has to be a nimble mind that can solve problems quickly. And there have to be the physical hand skills that define surgery but somehow have been dropped from the initial evaluation of candidates for surgery.

Who should not go into surgery. Based on my answers above, those quick to give up, are unintelligent, and poorly coordinated should not go into surgery. I would add to this lazy, dishonest, and sociopathic. No criminals please.

There is no perfect answer to this. I knew a fellow who did not score well on tests and was rejected from medical school five years in a row, but eventually got in and completed a residency in a surgical subspecialty and has a very successful practice. While he was being rejected from medical school, he spent five years in the lab, and he could do open heart surgery on dogs very well, was coauthor on numerous papers, and his surgical skill was excellent -like if you were stuck taking tennis lessons from a professional for five years but never playing an actual game. There are also many examples of people who were told too late that they were no good for surgery.

What you should not do is listen to just a single person who has a poor opinion of you. You should examine the situation and decide if there is some truth to the issue, but you need at least three opinions. For example, I would like to be a professional golfer. I can get at least three people to tell me honestly that this is a bad idea. I would like to be a writer. I can get at least three people to tell me honestly this is a good idea. You get the picture. In medical school you will rotate and work with many people and you will have grades and feedback. You need to get honest opinions as you move forward. You need to study hard and get great grades because no matter what you do, your patients will be depending on you.

[syndicated profile] tocutistocure_feed

Posted by docpark

A day in my life

Today I got up at six thirty, made myself a little coffee, and joined the morning report by Zoom at the main campus hospital. I have a patient there whom I'm operating on tomorrow and I wanted to know how he was doing. As soon as the report ended, I brushed my teeth and headed to my hospital, which is a regional community hospital.

I had an angiogram for a patient with a circulation problem in the leg. The lab was ready at eight, and by nine I was done. From there I ran over to the office for clinic. My fellow, who was on her community rotation, helped me with the angio and then came over to clinic. Between nine in the morning and four in the afternoon we saw 27 patients, two of them virtually.

At four I had a hospital committee meeting, where I am chief of surgery, and at five I was done. I ate something quick while doing paperwork, then joined another Zoom meeting with more than a hundred people from my institute for an update. Part of it I listened to while driving to the golf club. I arrived, hit balls on the range for half an hour, then practiced putting for another half hour. Afterwards I went home, had dinner with my family, watched the highlights of an American football game I had recorded over the weekend, checked email, and sat down to write this before showering and going to bed.

This week I have nine cases scheduled: several angiograms and interventions, a leg bypass, a few fistulas, and a laparoscopic procedure (I'm one of the few vascular surgeons who do laparoscopy). Once in bed, I put on a journal article read to me by Gwyneth Paltrow's voice (it's AI); I find it easier than reading it myself. Then I watch a few TikToks, read a bit of Reddit, and fall asleep around eleven at night. The next day it starts again, but that day I have to get up early, at five thirty, to get to the main campus hospital and operate. On Wednesdays there we hold grand rounds with the whole surgery department before going to the OR.


Lifestyle

On weekends, when I'm not on call, I still check on my patients through reports my residents or my nurse practitioner send me. Sometimes I even do this when I'm out of town. I usually play competitive golf at the club; the more pressure the better, since competition relaxes me. Afterwards I get home, write, read a little, and watch sports or golf, depending on the time of year.

The writing is sometimes work-related, sometimes personal. I've kept a personal blog at golfism.org for over ten years. I'm also working on a novel; I've been at it for about ten years, but progress is slow. I read mostly nonfiction, but in audiobooks I listen to science fiction. Right now I'm going through all the Dune prequels written by Brian Herbert, the son of Frank Herbert. I'm also turning over a unified theory of circulation.


Procedures

As a vascular surgeon I perform traditional open operations, endovascular procedures (guided by X-ray), and occasionally laparoscopic surgery. The open operations include the aorta and its branches, and arteries of the legs, arms, and neck. I also work on veins throughout the body.

A patient arrives with their history and a problem, and you study it with tests: blood work, vascular studies, imaging like X-ray, ultrasound, CT, or MRI. That is the workup: gathering data in order to plan. Knowing anatomy, physiology, and the mechanics of flow is key to putting together a plan that works well with few complications. I often bring in colleagues from my department and from other specialties to design not just the surgery but also the contingency plans and the recovery. I show some of these cases on my blog, vascsurg.me.


Who should not do vascular surgery

By definition, anyone who isn't trained in it. Successful vascular surgeons can be very different from one another, but they all share a few things: grit, focus, some intelligence, and good hand-eye coordination. So someone who gives up easily, can't focus, doesn't understand, or is clumsy with their hands shouldn't get into this.

And don't do it for money or prestige; there are easier ways to get those.


Who should

If you're curious, you should explore it. The best way is to watch a vascular surgeon at work. That's what rotations in medical school are for, although sadly many schools don't offer much exposure to surgery, and even less to vascular.

The best candidates are people with discipline and good grades, but above all with grit. Desire alone is not enough. You need a nimble mind, resilience to failure, and skill with your hands. That last one, curiously, is hardly measured at the start, but it is fundamental.


Who should not go into surgery at all

Besides the ones I mentioned (those who give up, those who are clumsy or don't understand), I would add the lazy, the dishonest, and the mean-spirited. No criminals, please.


Final advice

Don't let yourself be swayed by just one person's opinion. Always get at least three honest opinions. Here's an example: I would like to be a professional golfer. Three people will tell me flat out that it's a bad idea. On the other hand, I want to be a writer, and three will tell me that's a good idea. You see?

In medical school you'll rotate with many people and you'll have grades and feedback. Listen, but compare. And always study hard and aim for good grades, because in the end your patients will depend on you.

Mirror Life Worries

Sep. 18th, 2025 04:50 pm
[syndicated profile] in_the_pipeline_feed

I wrote here a few years ago about the idea of completely enantiomeric “mirror proteins”, in the context of how they could benefit crystallography. These of course are made up of mirror-image enantiomers of the individual amino acids, but are otherwise the same (and cannot be differentiated by “non-chiral” means - they have the same molecular weights and other large-scale properties).

There’s been more talk (and worry) in the last few years about the possibility of extending this idea to mirror nucleic acids, mirror carbohydrates, on and on to the idea of making an enantiomeric living cell: “mirror life”. That would be a mighty ambitious thing to try, but it also could carry some risks that are unlike anything we’ve had to think about before. Here’s an article from 2024 on this, and there’s a detailed accompanying report on the idea of making mirror-bacteria. Just recently, Nature has highlighted a conference in Manchester on this same topic, and published this editorial from one of the researchers in the field.

As those stories indicate, no one is even close to making such things. But there are plenty of model systems along the way, and the question is where the potential dangers of this sort of work start to outweigh the scientific benefits. So let’s talk about both of those briefly. One outstanding question (for well over a century now) is why all life on Earth uses the same “handedness” of the chiral biomolecules (carbohydrates, amino acids and their associated proteins, etc.) One immediate answer is because all life on Earth stems from a common ancestor that used these, and that is almost certainly correct (albeit extremely hard to prove!) But that just leads to another question: why these ones and not the mirror images?

There seems to be no a priori reason, and indeed, in abiotic samples like carbonaceous meteorites we find both enantiomers of such compounds. There have been many rather esoteric physics-based proposals on how one enantiomeric series might be slightly more stable than another (and thus increasing its chemical odds) but none of these are even close to definitive. So was this an accident? If so, if there are living creatures using vaguely similar biochemistry on other worlds, are they broadly distributed half-and-half, or what? You open up a lot of tricky origin-of-life questions with these lines of inquiry, and mirror-image cells (or simply mirror-image models of them) could be a way to answer them.

On the downside, we don’t really know how our immune systems might respond to complex mirror-image biomolecules. They might just slide by invisibly, but they might well not - after all, there are a lot of ways to do molecular recognition. Moving past that, could a mirror-image cell survive in the wild? No one’s sure: if it has enough intracellular machinery to make its own key constituents, it could probably use the achiral building blocks that are lying around everywhere and keep going with them. And a big problem with that is that something like an enantio-bacterium would presumably have no natural enemies, and would presumably be nonresponsive to antibiotics and other defenses that bacteria use to keep each other in line. So the possible downsides are rather large - but no one knows how possible they are.

I doubt if anyone is interested in my own take, but for what it’s worth I think that we are sufficiently far from producing any actual organisms that I am not worried about this research. But I think it is prudent to think about what could eventually happen, and perhaps set some tripwires for the future. For now, though, I think that this is interesting and challenging research, and I think it should go on.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

We learned that Explorer uses the COM Surrogate as a sacrificial process when hosting thumbnail extraction plug-ins.

A customer had a thumbnail extraction plug-in for which preparing to perform the thumbnail extraction was expensive. But the way Explorer uses thumbnail extractors is that it loads an extractor, asks it to extract a thumbnail from a file, and then frees the extractor. There’s no real chance for one extractor to hand off cached state to the next extractor, so the second extractor has to start from scratch.

Is there a way to tell Explorer to load all the extractors into the same process and then tell each one, “Okay, transfer your state to the next guy,” and tell the last guy, “I’m done extracting, you can clean up now”?

No, there is no way to give Explorer precise instructions on how it should use thumbnail extractors, but that doesn’t mean you can’t build it yourself.

What you can do is create two COM objects. One is an in-process object that is the actual plug-in. The other is a multi-use out-of-process local server that does the work. The in-process plug-in forwards the thumbnail extraction requests to the local server. Since the local server is multi-use, all the plug-ins share a single server, and this allows the expensive resources to be shared.
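
As a rough sketch of that forwarding (my own illustration, not the article's code; the CLSID_SharedExtractor name and the choice of IThumbnailProvider as the extraction interface are assumptions), the in-process plug-in's work reduces to a pass-through:

#include <windows.h>
#include <thumbcache.h>   // IThumbnailProvider
#include <wrl/client.h>

// Hypothetical CLSID for the shared out-of-process extractor (placeholder GUID).
const CLSID CLSID_SharedExtractor =
    { 0x00000000, 0x0000, 0x0000, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 } };

// Called from the in-process plug-in's own GetThumbnail implementation:
// create the worker in the shared local server and let it do the expensive part.
HRESULT ForwardGetThumbnail(UINT cx, HBITMAP* phbmp, WTS_ALPHATYPE* alphaType)
{
    Microsoft::WRL::ComPtr<IThumbnailProvider> remote;
    HRESULT hr = CoCreateInstance(CLSID_SharedExtractor, nullptr,
                                  CLSCTX_LOCAL_SERVER, IID_PPV_ARGS(&remote));
    if (FAILED(hr)) return hr;
    return remote->GetThumbnail(cx, phbmp, alphaType);
}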

In other words, when each surrogate process creates an in-process thumbnail extractor, don’t do the extraction in the surrogate process:

[Diagram: Surrogate 1 and Surrogate 2 each host their own plug-in, and each plug-in builds its own copy of the expensive resources.]

Instead, put the extraction in the shared local server, so that one server does all the extracting and can reuse the expensive resources.

 
[Diagram: Surrogate 1 and Surrogate 2 each contain only a thin plug-in; both plug-ins call into the shared local server, where a factory owns the resource and creates an extractor for each request.]

The local server can follow the traditional pattern of shutting down if all COM objects have been destroyed and no new ones have been created for 30 seconds.

The factory object is registered with COM via CoRegisterClassObject as with any other COM local server. Register it as multiple-use so that all the plug-ins will share the same server.

The factory object can obtain the resources the first time it is asked to create an extractor, and then pass a shared reference to the resource to the extractor. Once 30 seconds elapse without any extractor, the server can shut down by revoking the factory (which causes it to release the resources).
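
That registration can follow the standard local-server pattern. Here is a minimal sketch, assuming the same hypothetical CLSID_SharedExtractor as above and a factory object you have already written (this is not the article's code, just the usual CoRegisterClassObject usage):

#include <windows.h>

// Hypothetical CLSID for the shared extractor server (placeholder GUID).
const CLSID CLSID_SharedExtractor =
    { 0x00000000, 0x0000, 0x0000, { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 } };

DWORD g_factoryCookie = 0;

// Register the factory as multiple-use so every plug-in's request
// is satisfied by this one already-running server process.
HRESULT RegisterSharedExtractorFactory(IClassFactory* factory)
{
    return CoRegisterClassObject(CLSID_SharedExtractor, factory,
                                 CLSCTX_LOCAL_SERVER, REGCLS_MULTIPLEUSE,
                                 &g_factoryCookie);
}

// After the 30-second idle timeout, revoke the factory; it can then release
// the shared resources and the server process can exit.
void ShutDownSharedExtractorServer()
{
    CoRevokeClassObject(g_factoryCookie);
}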

Related reading: Forcing plug-ins to run in separate processes: How can I convert a third party in-process server so it runs in the COM surrogate?

The post How can I get my shell thumbnail extractors to run in the same process? appeared first on The Old New Thing.

[syndicated profile] frontendmasters_feed

Posted by Preethi Sam

A circular menu can be a space-saver or simply a design choice, and there's an easy, efficient way to create and animate one in CSS using offset and animation-composition. Here are some examples (click the button in the center of the choices):

I’ll take you through the second example to cover the basics.

The Layout

Just some semantic HTML here. Since we’re offering a menu of options, a <menu> seems appropriate (yes, <li> is correct as a child!) and each button is focusable.

<main>
  <div class="menu-wrapper">
    <menu>
      <li><button>Poland</button></li>
      <li><button>Brazil</button></li>
      <li><button>Qatar</button></li>
      <!-- etc. -->
    </menu>
    <button class="menu-button" onclick="revolve()">See More</button>
  </div>
</main>

Other important bits:

The menu and the menu button (<button class="menu-button">) are the same size and shape and are stacked on top of each other.

Half of the menu is hidden by setting overflow: clip on <main> and pulling the menu wrapper upwards.

main { 
  overflow: clip;
}
.menu-wrapper { 
  display: grid;
  place-items: center;
  transform: translateY(-129px);
  menu, .menu-button {
    width: 259px;
    height: 129px;
    grid-area: 1 / 1;
    border-radius: 50%;
  }
}

Set the menu items (<li>s) around the <menu>’s center using offset.

menu {
    padding: 30px;
    --gap: 10%; /* The in-between gap for the 10 items */
}
li {
  offset: padding-box 0deg;
  offset-distance: calc((sibling-index() - 1) * var(--gap)); 
  /* or 
    &:nth-of-type(2) { offset-distance: calc(1 * var(--gap)); }
    &:nth-of-type(3) { offset-distance: calc(2 * var(--gap)); }
    etc...
  */
}

The offset shorthand positions all the <li> elements along the <menu>'s padding-box, which has been set as the offset path.

The offset CSS shorthand property sets all the properties required for animating an element along a defined path. The offset properties together help to define an offset transform, a transform that aligns a point in an element (offset-anchor) to an offset position (offset-position) on a path (offset-path) at various points along the path (offset-distance) and optionally rotates the element (offset-rotate) to follow the direction of the path. — MDN Web Docs

The offset-distance is set to spread the menu items along the path based on the given gap between them (--gap: 10%).

Item    Initial value of offset-distance
1       0%
2       10%
3       20%

The Animation

@keyframes rev1 { 
  to {
    offset-distance: 50%;
  } 
}

@keyframes rev2 { 
  from {
    offset-distance: 50%;
  } 
  to {
    offset-distance: 0%;
  } 
}

Set two @keyframes animations: one that moves the menu items halfway around the path, clockwise (rev1), and one that moves them from that position back to where they started (rev2).

li {
  /* ... */
  animation: 1s forwards;
  animation-composition: add; 
}

Set the animation-duration (1s), animation-fill-mode (forwards), and animation-composition (add) for the <li> elements.

Even though animations can be triggered from CSS alone (for example, via a :checked state), we're using a <button> here, so the animation names will be set in the <button>'s click handler to trigger the animations.

With animation-composition: add, the offset-distance values inside the @keyframes rulesets are added to each <li>'s initial offset-distance instead of replacing it, which is the default behavior.

Item    Initial value    to
1       0%               (0% + 50%) = 50%
2       10%              (10% + 50%) = 60%
3       20%              (20% + 50%) = 70%
rev1 animation w/ animation-composition: add

Item    from                back to initial value
1       (0% + 50%) = 50%    (0% + 0%) = 0%
2       (10% + 50%) = 60%   (10% + 0%) = 10%
3       (20% + 50%) = 70%   (20% + 0%) = 20%
rev2 animation w/ animation-composition: add

Here’s how it would’ve been without animation-composition: add:

Item    Initial value    to
1       0%               50%
2       10%              50%
3       20%              50%

The animation-composition CSS property specifies the composite operation to use when multiple animations affect the same property simultaneously.

MDN Web Docs

The Trigger

const LI = document.querySelectorAll('li');
let flag = true;
function revolve() {
  LI.forEach(li => li.style.animationName = flag ? "rev1" : "rev2");
  flag = !flag;
}

In the menu button's click handler, revolve(), set the <li> elements' animationName to rev1 and rev2, alternating on each click.

Assigning the animation name triggers the corresponding keyframes animation each time the <button> is clicked.

Using the method covered in this post, you can control how far around the revolution the elements move (demo one) and in which direction. You can also experiment with different offset path shapes. You can declare (@keyframes) and trigger (:checked, :hover, etc.) the animations in CSS, or use JavaScript's Web Animations API, which also supports animation composition.

[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

Nice reminder about JavaScript evolving to be more useful from Trevor I. Lasn:

// Old way (pre-2021)
if (user.name === null || user.name === undefined) {
  user.name = 'Anonymous';
}

// Or using the nullish coalescing operator (??)
user.name = user.name ?? 'Anonymous';

// New way (ES2021 and later)
user.name ??= 'Anonymous';

The final line there uses what is called the "nullish coalescing assignment operator," in case you need to impress people at parties.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

A customer had a major client that was encountering this error message:

INFOPATH.EXE

The system detected an overrun of a stack-based buffer in this application. The overrun could potentially allow a malicious user to gain control of this application.

(Note: Not actually InfoPath. I just substituted them for the actual program because InfoPath is no longer supported, so I hope they won’t feel bad that they’re being used as the culprit here.)

The customer’s client was extremely concerned by the admittedly alarmist text.

The customer explained that this is an operating system level error that just happens to be showing up in this case with InfoPath, but in searching the device’s history, they found at least one instance of the error occurring with a different program. This message arises from any number of so-called “fast fail” conditions that an application reports to the operating system, such as an unhandled exception or an assertion failure. Even though many different failure conditions are being reported, the message to the user always describes it as a potential buffer overflow, even though that might not be the actual reason.

The customer asked if there was any official Microsoft statement that they could point their client to.

This is a common request to the engineering group. We call it “customer-ready” text: Text that can be shared directly with a customer.

So here is some customer-ready text explaining the STATUS_STACK_BUFFER_OVERRUN status code description.

The message is used when the program self-detects that something has gone wrong. It was originally used for buffer overrun detection but is now used for general-purpose failure detection. The error message has not been updated to accommodate the expanded usage. A more accurate description would be “This program terminated itself after encountering an internal error.”
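
For what it's worth, here is a minimal sketch (mine, not part of the customer-ready text) of how a program self-reports one of these failure conditions; any fast-fail code, not just an actual buffer overrun, surfaces to the user with the stack-buffer-overrun wording:

#include <windows.h>
#include <intrin.h>

// Terminates the process immediately through the fast-fail mechanism.
// The reported status is STATUS_STACK_BUFFER_OVERRUN (0xC0000409),
// even though no stack buffer was overrun here.
[[noreturn]] void ReportFatalInternalError()
{
    __fastfail(FAST_FAIL_FATAL_APP_EXIT);
}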

The post Translating the STATUS_STACK_BUFFER_OVERRUN status code into customer-ready text appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Some time ago, people noticed that buried in the Windows Bluetooth drivers is the hard-coded name of the Microsoft Wireless Notebook Presenter Mouse 8000. What’s going on there? Does the Microsoft Wireless Notebook Presenter Mouse 8000 receive favorable treatment from the Microsoft Bluetooth drivers? Is this some sort of collusion?

No, it’s not that.

There is a lot of bad hardware out there, and there are a lot of compatibility hacks to deal with it. You have CD-ROM controller cards that report the same drive four times or USB devices that draw more than 500mW of power after promising they wouldn't. More generally, you have devices whose descriptors are syntactically invalid or contain values that are outside of legal range or which are simply nonsensical.

Most of the time, the code to compensate for these types of errors doesn’t betray its presence in the form of hard-coded strings. Instead, you have “else” branches that secretly repair or ignore corrupted values.

Unfortunately, the type of mistake that the Microsoft Wireless Notebook Presenter Mouse 8000 made is one that is easily exposed via strings, because they messed up their string!

The device local name string is specified to be encoded in UTF-8. However, the Microsoft Wireless Notebook Presenter Mouse 8000 reports its name as Microsoft⟪AE⟫ Wireless Notebook Presenter Mouse 8000, encoding the registered trademark symbol ® not as UTF-8 as required by the specification but in code page 1252. What’s even worse is that a bare ⟪AE⟫ is not a legal UTF-8 sequence, so the string wouldn’t even show up as corrupted; it would get rejected as invalid.

Thanks, Legal Department, for sticking a ® in the descriptor and messing up the whole thing.

There is a special table inside the Bluetooth drivers of “Devices that report their names wrong (and the correct name to use)”. If the Bluetooth stack sees one of these devices, and it presents the wrong name, then the correct name is substituted.

That table currently has only one entry.

The post Why is the name of the Microsoft Wireless Notebook Presenter Mouse 8000 hard-coded into the Bluetooth drivers? appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

A customer had an app with a plugin model based on vendor-supplied COM objects that are loaded into the process via Co­Create­Instance.

[Diagram: the engine and the plug-in both run inside the main process.]

These COM objects run in-process, but the customer realized that these plugins were a potential source of instability, and they saw that you can instruct COM to load the plugin into a sacrificial process so that if the plugin crashes, the main program is unaffected.

What they want is something like this:

[Diagram: the engine stays in the main process while the plug-in runs in a surrogate process.]

But how do you opt a third-party component into the COM Surrogate? The third party component comes with its own registration, and it would be rude to alter that registration so that it runs in a COM Surrogate. Besides, a COM Surrogate requires an AppId, and the plugin might not have one.

The answer is simple: Create your own object that is registered to run in the COM Surrogate. Define an interface for that custom object, say, ISurrogateHost, and give that interface a method like LoadPlugin(REFCLSID pluginClsid, REFIID iid, void** result) which calls CoCreateInstance on the plugin CLSID and requests the specified interface pointer from it. (If you want to support aggregation, you can add a punkOuter parameter.)
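
A sketch of what such an interface could look like (the name and placeholder IID are illustrative; as noted below, the standard ICreateObject interface already covers this):

#include <unknwn.h>

// Hypothetical custom surrogate-host interface (placeholder IID).
MIDL_INTERFACE("00000000-0000-0000-0000-000000000002")
ISurrogateHost : public IUnknown
{
    // Runs inside the surrogate: CoCreateInstance the plug-in in-process
    // there and hand the requested interface back to the caller.
    virtual HRESULT STDMETHODCALLTYPE LoadPlugin(
        REFCLSID pluginClsid,
        REFIID iid,
        void** result) = 0;
};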

[Diagram: the engine in the main process calls the host object in the surrogate process, and the host creates the plug-in inside that same surrogate.]

The Load­Plugin method runs inside the surrogate, so when the plugin loads in-process, it loads into the surrogate process.

The host can return a reference to the plugin directly to the main app engine, so it steps out of the way once the two sides are connected. The purpose of the host is to set up a new process.

In fact, you don't even need to invent that special surrogate interface. There is already a standard COM interface that does this: ICreateObject. It has a single method, uncreatively named CreateObject, that takes exactly the parameters we want, including the punkOuter.

Your surrogate host object would go like this, using (rolls dice) the WRL template library.

struct SurrogateHost :
    Microsoft::WRL::RuntimeClass<
        Microsoft::WRL::RuntimeClassFlags<
            Microsoft::WRL::RuntimeClassType::ClassicCom |
            Microsoft::WRL::RuntimeClassType::InhibitWeakReference>,
    ICreateObject, Microsoft::WRL::FtmBase>
{
    STDMETHOD(CreateObject)(
        REFCLSID clsid,
        IUnknown* outer,
        REFIID iid,
        void** result)
    {
        return CoCreateInstance(clsid, outer,
            CLSCTX_INPROC_SERVER, iid, result);
    }
};

In the engine, where you would normally do

hr = CoCreateInstance(pluginClsid, outer, CLSCTX_INPROC_SERVER,
        riid, result);

you instead create a surrogate host:

Microsoft::WRL::ComPtr<ICreateObject> host;
hr = CoCreateInstance(CLSID_SurrogateHost, nullptr,
        CLSCTX_LOCAL_SERVER, IID_PPV_ARGS(&host));

and then each time you need an object, you ask the surrogate host to do it:

hr = host->CreateObject(pluginClsid, outer, riid, result);

You can even get fancy and decide that some plugins are sus and should run in a surrogate, whereas others are trustworthy and may run inside the main process.

if (is_trustworthy(pluginClsid)) {
    // Let this one load into the main process
    hr = CoCreateInstance(pluginClsid, outer, CLSCTX_INPROC_SERVER,
            riid, result);
} else {
    // Boot this one to the surrogate process
    hr = host->CreateObject(pluginClsid, outer, riid, result);
}

Reusing the host object means that a single surrogate process is used for all plugins. If you want each plugin running in a separate surrogate, then create a separate host for each one.

The post How can I convert a third party in-process server so it runs in the COM surrogate? appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Consider the following:

void f(int, int);
void f(char*, char*);

void test(std::tuple<int, int> t)
{
    std::apply(f, t); // error
}

The compiler complains that it cannot deduce the type of the first parameter.

I’m using std::apply here, but the same arguments apply to functions like std::invoke and std::bind.

From inspection, we can see that the only overload that makes sense is f(int, int) since that is the only one that accepts two integer parameters.

But the compiler doesn’t know that std::apply is going to try to invoke its first parameter with arguments provided by the second parameter. The compiler has to choose an overload based on the information it is given in the function call.¹

Although the compiler could be taught the special behavior of functions like std::apply and std::invoke and use that to guide selection of an overload, codifying this would require verbiage in the standard to give those functions special treatment in the overload resolution process.

And even if they did, you wouldn’t be able to take advantage of it in your own implementations of functions similar to std::apply and std::invoke.

template<typename Callable,
            typename Tuple>
auto logapply(Callable&& callable,
              Tuple&& args)
{
    log("applying!");
    return std::apply(
        std::forward<Callable>(callable),
        std::forward<Tuple>(args));
}

The standard would have to create some general way of expressing “When doing overload resolution, look for an overload of the callable that accepts these arguments.”

Maybe you can come up with something and propose it to the standards committee.

In the meantime, you can work around this with a lambda that perfect-forwards the arguments to the overloaded function.

void test(std::tuple<int, int> t)
{
    std::apply([](auto&&... args) {
        f(std::forward<decltype(args)>(args)...);
    }, t);
}

This solves the problem because the type of the lambda is, well, the lambda. The overload resolution doesn’t happen until the lambda template is instantiated with the actual parameter types from the tuple, at which point there is now enough information to choose the desired overload.

Now, in this case, we know that the answer is int, int, so the lambda is a bit wordier than it could have been.

void test(std::tuple<int, int> t)
{
    std::apply([](int a, int b) {
        f(a, b);
    }, t);
}

However, I presented the fully general std::forward version for expository purposes.

¹ You can see this problem if we change the overloads a little:

void f(int, int);
void f(char*, int);

auto test(int v)
{
    return std::bind(f, std::placeholders::_1, v);
}

At the point of the bind, you don’t know whether the result is going to be invoked with an integer or a character pointer, which means that you don’t know whether you want the first overload (that takes two integers) or the second overload (that takes a character pointer and an integer).

The post Why can't std::apply figure out which overload I intend to use? Only one of them will work! appeared first on The Old New Thing.

More Legislation to Watch

Sep. 16th, 2025 02:13 pm
[syndicated profile] in_the_pipeline_feed

Biocentury has a story on a legislative move that I haven't seen anyone else covering. The House of Representatives recently passed its version of the National Defense Authorization Act (NDAA), and an amendment was added to it incorporating the terms of the Securing American Funding and Expertise From Adversarial Research Exploitation (SAFE) Act. That one would bar any federal funding for researchers who work with institutions that are deemed "hostile foreign entities".

What might those be, you ask? Well, most Chinese universities would probably end up on that list, because it covers those that have participated in any sort of talent-recruitment efforts in the past ten years, or that have worked in any areas that could be considered "dual use", i.e. with potential for military/security applications. Those are roomy categories, and they have a lot of eye-of-the-beholder in them as well. So this could bring on some significant disruption in all kinds of international collaborations that US groups might be involved in.

This legislation might remind some readers of the “Biosecure Act” that did not make it through in the previous legislative session, and according to the Biocentury piece that one, in a somewhat revised version, could make it back into the final version of the NDAA as well. That’s because the Senate will have its own version of the bill, then a joint committee will work it over to come up with a version that will pass both chambers. That’s not going to happen until the end of the year, so a lot could go on between now and then. God knows, a lot seems to go on every flippin’ day in US politics recently, so I would not wish to predict what will or will not make it into a bill like this in December.

But it’s a good bet that something like these proposals will. That’s the way the wind is obviously blowing - America First, tough on China, tough on everybody and everything, yanking universities and federally-funded researchers into line no matter what they have to say about it. In years past, legislation like the SAFE Act would have really stood out as a worrisome development, but now? If someone’s looking to disrupt and hinder US federally-funded research by cutting back work with China, well, they’re going to have to get in line. We’ve been demolishing, disorganizing, and demoralizing all of that on our own all year now.

[syndicated profile] littletinythings_feed

New comic!

Hey folks!

Just a heads up that I'm currently moving shop, from Hivemill to my own account (via Big Cartel), so for now you can't purchase GGaR books but hopefully the new shop will be up sooner than later! (gotta transfer everything to White Squirrel, who will be handling storage and shipping, then put all the products up on the new shop's page!)


Tuberculosis Defenses

Sep. 15th, 2025 03:17 pm
[syndicated profile] in_the_pipeline_feed

Mycobacteria, I think I can say without fear of contradiction, are a real pain in the behind, scientifically speaking. Don’t get me wrong: bacteria in general are no fun to develop drugs against. Whatever fun was available in that area drained away by the early 1970s at the latest as the major classes of antibiotics were discovered and as the bacteria themselves set about busily developing resistance to them. The rate of drug discovery in the area has famously slowed down, with most of the advances being improvements on existing scaffolds. Meanwhile, the bacterial resistance problem has not slowed down appreciably at all, and the intersection of these two trends has been the subject of a lot of worry and a lot of warnings over the years.

But mycobacteria in particular are a tough problem to crack. Gram-positive bacteria are generally the easiest to kill with drug therapies because of their single-membrane structures, although please note that this “ease” is on a relative scale. Most readers will have heard of MRSA (“mer-sa” in the lingo), which is the source of some extremely unwelcome infections that are very hard to treat and in which the underlying Staphylococcus aureus organism is Gram-positive. These strains are able to resist a broad spectrum of beta-lactam-based antibiotics, although there are (for now) some other types that are still useful for treatment (linezolid, clindamycin, vancomycin and others).

Gram-negative bacteria, though, have a double-membrane structure with a thin peptidoglycan cell wall in between, and that's a more formidable barrier to getting antibiotics inside them at all. These membranes are well stocked with efflux-pump proteins, and those are a big part of the problem. Many are the compounds that can kill off efflux-pump-crippled engineered bacteria in the lab, but unfortunately none of us are going to be infected with any of those. And the great majority of such compounds, when exposed to real Gram-negative pathogens, barely even ruffle their bacterial hair. Finding a really active compound against these is a real accomplishment.

And so is finding one against the Mycobacteria. Those guys have an arrangement all their own: a cell membrane, on top of which is a periplasmic space capped by a layer of peptidoglycan gunk, and on top of that is a layer of arabinogalactan (a unique feature). On top of that is yet another unique feature, though, a double layer of mycolic-acid-based gorp with various surface lipids and proteins embedded in it. This is a really tough gauntlet to run for a small-molecule antibiotic, and it's fortunate that most Mycobacteria are not pathogenic. The bad part is that the ones that are cause tuberculosis and leprosy, with the former being present in maybe a third of the entire world population (!) In many of these people the M. tuberculosis infection is just sitting around latent in the lung tissue, growing very slowly. This growth rate is seen in culture, too - even if you have the right medium for them (and many of the common ones don't work), it can take weeks to grow visible waxy colonies of the things. As a human pathogen (and we're the only animal that's a reservoir for them), the bacteria are extremely resistant to being killed by macrophages because of that coating.

There are antibiotics that work, although of course there are now plenty of resistant strains out there, particularly in garden spots like the Russian prison system. Resistance is showing up and increasing in countries around the world, though, and finding new antibiotics is a real world health priority. I thought this paper made an interesting contribution to that. The authors are doing wide-ranging structural studies on model peptides to see what factors are more likely to get these compounds past those thick multi-level defenses.

A first takeaway is that peptides themselves can actually get through at all - the prevailing idea has been that you need smaller and more hydrophobic molecules to have a real chance. A second lesson is that the best modification to make is cyclization, although you do need to pay particular attention to the overall ring size and the structures that you're using to close the rings. But this seems to be the category that showed the most notable success, and the differences between the cyclized compounds and their linear counterparts are often impressive. The second-best strategy is N-methylation of the peptide, but that has a lot of variability in it, for reasons that are not really clear (or at least aren't to me). The paper demonstrates improvement on an antibiotic candidate by adopting these features, and also shows that removing them from an existing compound (griselimycin) significantly weakens its activity.

We need plenty of these sorts of insights to deal with drug-resistant tuberculosis, because the only reason that it’s not an even bigger problem is that slow growth rate mentioned above and thus its relatively slow spread through the human population. But that tends to bring on complacency, because it’s not just ripping through the population in real time like a new respiratory virus (you remember those, right?) The last thing we need is another plague, even a slow one.
