... which dovetails neatly with the bits I just got out of The Painful Truth (Monty Lyman) about the bidirectional relationship between insomnia and pain, where each worsens the other but insomnia worsens pain more. (It's bedtime, so I'm not going to pick the book back up to get you those onward references just now.) With n = 5232, and their conditions including "cancer, chronic pain, irritable bowel syndrome, and stroke", "CBT-I was associated with significantly improved outcomes" for insomnia severity, and moderately improved outcomes for sleep efficiency and sleep-onset latency.
This is a sort of paper that you don’t see too much if you mostly read chemistry and biology journals. It’s not presenting lab results in either of those fields per se, but asking what it means to be a chemist at all. It’s quite interesting, but I do have to raise an immediate point before discussing the rest of the work: it’s based on an in-depth survey of ideas and attitudes among chemists in both academia and industry, early- and late-stage careers both represented, but the total number of participants is in the end just 43 people, ranging from undergrads to people with years in industry.
Now, I’ve watched social-science survey work from the inside and I know what a pain it is to gather data. So this really does represent a lot of work. On the other hand, the questions asked in the interviews cover a lot of ground (see below), and you’d really want more people with an even wider range of backgrounds to draw big conclusions from the responses. That said, I think that even as it is, the results this team got are very much worth thinking about.
Here are the questions. They asked the student participants the following:
1. Do you consider yourself a chemistry person? Why or why not? (I believe that this is the first time I have encountered the phrase “chemistry person” - DL)
2. What do you think are the characteristics of a chemistry person?
3. What factors do you think make somebody more like a chemistry person or less like a chemistry person?
4. Are there any characteristics of a chemistry person that don’t apply to you? Some more than others?
And they asked the professional participants these questions:
1. Do you consider the colleagues that you work with as chemists? Do you think of them as chemists? (I have to say, I don’t see the distinction between these two phrasings - DL)
2. What do you think are the characteristics of chemists?
2a: Do you think those characteristics apply to all chemists (physical, analytical, inorganic, biochemist, organic, theoretical, chemistry education, etc.)?
2b: Do you think these characteristics would all apply to chemists regardless of the sector they’re in (e.g. academia/industry/government/non-profit)?
2c: Would you say you share those characteristics with other chemists?
2d: Are there any characteristics of a chemist that don’t apply to you? Some more than others?
How would I have answered these questions? I would say that some of my colleagues are most certainly chemists, but others are equally certainly one kind of biologist or another, just for starters. I think that the characteristics of chemists are that they work at the molecular level and treat things as molecules. If your work exclusively deals with single atoms and their behavior, I think that is where things shade over into physics, and if you work on whole organisms without having to think much about the molecular sizes and shapes and interactions involved, you have shaded over into biology. And yes, I think that molecular biologists, over the years, have become more and more “chemist-like”. I think these apply pretty generally across the different sorts of chemistry - for example, inorganic chemists can be the folks who are in between physicists and (say) geologists or mineralogists. And I think that these do apply across different sectors - I define chemists by what they do and how they think about it, not by what sort of organization they work in. I do indeed think that I share these characteristics with other chemists, and I can’t think of anything in particular that doesn’t apply to me.
So what did the interviewees agree defines a “real chemist”? These turn out to be scientists who have a chemistry degree, who work at an atomic/molecular scale, who work in a lab (or use software for computation and modeling), among other traits. Biochemists and chemical engineers get the short end of these exclusionary definitions, and one that I found particularly interesting was that for many of the participants, being a “real chemist” means being in academia.
That one goes back a long ways. It’s been fading out a bit, but there are still a lot of folks in academia - many of whom are training and influencing students - who hold this opinion at some level. If you went into industry, then you must not have had what it takes to be a professor, right? It’s for sure that there have traditionally been many graduate school PIs who have tried to steer their “best” students towards academic careers, feeling that a robust academic family tree brings some glory along with it.
Of course, as the authors note, groups define themselves both by emphasizing their own common characteristics and by excluding those who don’t share them. “Well, I’ll tell you what we’re not. . .” And they wonder if chemistry’s “central science” position doesn’t exacerbate this sort of territory-marking. I’m reminded of Ben Franklin’s crack about colonial New Jersey, facing a drain of its population towards New York in the north and Philadelphia to the south: it was, he said, like a keg that had been tapped at both ends. Perhaps chemistry has been tapped at its biology end and at its physics end?
But trying to enforce boundaries like this can lead to lessening the impact of the field. I make this point in presentations to academic audiences: being concerned with what’s “real chemistry” ends up cutting off the field’s opportunities for intellectual expansion. Don’t write off “hyphenated” fields because of some weird concern about purity - claim them for the good ol’ central science and make it more central than ever. Some take this advice and some don’t.
On top of all this is of course the traditional mental picture of the “real” chemist as a white male. As one of those myself, who wears glasses and has a beard yet, I am in fact Living the Stereotype. But the biology-has-more-women and physics-has-more-men situation puts chemistry as a field in an unwelcome position, and that goes both for academia and for industry (and probably even more for the latter). Non-white scientists of course get their own pressures from several directions. As the paper notes, the problem with this sort of thing isn’t so much that it correlates with the demographics of the field as it exists today, but that it helps maintain them. Any members of less-represented groups who speak up about these issues run the risk of being seen as even less like members of the crowd than they were before.
All in all, the exclusionary mode is not doing chemistry any favors. People who are interested in this science and who are good at it should be able to pursue it no matter what they look like or what their backgrounds are. And intellectually they should be able to push the boundaries of it into different areas of research without being labeled as “not real chemists” for doing so. Do you want to spend more time feeling like real inside members of a pure and defined club, or do you want to discover things?
Microsoft Production Studios (commonly known inside Microsoft simply as Microsoft Studios) is a large broadcast studio nestled in the trees on Microsoft’s main Redmond Campus.¹ Here’s a video hosted by Luke Burbank, local radio personality who is a frequent host for Microsoft internal videos.
I myself was interviewed at Microsoft Studios for a short video which (I am told) was used as an interstitial by the live streaming team at the Microsoft Build 2023 conference to promote my talk at that conference (co-hosted by Clint Rutkas).
The Microsoft Studios building is very unimpressive from the outside, but once you get inside, you find yourself in a high-tech broadcast studio. After going through hair and make-up, I was taken for my interview to a large mostly-empty black room with a giant LED wall backdrop and lots of television cameras backed by what I’m sure is extremely expensive electronic audio and video equipment.²
I’m told that the Microsoft Studios building was being designed at the time of the infamous Windows 98 on-stage USB blue screen.³ They modified their design to include a room next to the broadcast room to stage any computer equipment that would be used during a live broadcast. The equipment would be set up and tested before being turned over to the program hosts. They don’t want a repeat of the disaster of experiencing a blue screen error during a live broadcast. So far, it has worked.
¹ Not to be confused with the Channel 9 Studio. I’ve recorded there, too!
² The instructions for dressing for the interview noted, “Your feet may be visible in some camera angles, so wear appropriate footwear.” “Aha,” I thought. “They said nothing about pants!”⁴
³ Some time ago, I wrote a technical explanation of what went wrong. TL;DR: For the live demo, they bought a scanner from a local electronics store and never tested it before going on stage. The scanner had a bug.
Some time ago, I described Windows 3.0’s WinHelp as “a program for browsing online help files.” But Windows 3.0 predated the Internet, and these help files were available even if the computer was not connected to any other network. How can it be “online”?
The term “online” originally meant “immediately available on a computer”. For example, if you are working on a system with hierarchical storage, the “online” files are the ones that are accessible right now, and the “offline” files are the ones that have been archived to tape and will take some time to retrieve and make online.
The term “online help” refers to the fact that the help files are readily available on your computer. You don’t have to go dig through your shelves looking for a manual.
Back in the day, a computer that was accessible via a network or some other remote connection was generally called “up” rather than “online”. Officially, “up” referred to whether the computer was running at all, but since these types of computers (mainframes or timesharing systems) had as their sole purpose to be connected to by other computers, being “up” was useless if they weren’t also open to connections.
It does mean that we have the somewhat paradoxical terminology that online help is available offline.
But it’s not really a paradox, because the terms “online” and “offline” are referring to different things. In the phrase “online help”, it’s referring to the help: the help files are online (readily accessible via computer). But “available offline” is referring to your computer (whether it can connect to other computers).
Your computer is offline (relative to other computers). The help is online (relative to your computer).
Bonus chatter: Of course, now that many systems have migrated the help files themselves to Web sites, you now have online help that is not available when offline.
wrl\event.h(348,165): error C2516: 'Microsoft::WRL::Details::RemoveReference<TCallback>::Type': is not a legal base class
with
[
TCallback=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
(compiling source file 'test.cpp')
wrl\internal.h(96,19):
see declaration of 'Microsoft::WRL::Details::RemoveReference<TCallback>::Type'
with
[
TCallback=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
wrl\event.h(348,165):
the template instantiation context (the oldest one first) is
test.cpp(134,30):
see reference to function template instantiation 'Microsoft::WRL::ComPtr<TDelegateInterface> Microsoft::WRL::Callback<ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,HRESULT(__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)>(TLambda &&) noexcept' being compiled
with
[
TDelegateInterface=ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,
TLambda=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
wrl\event.h(460,45):
see reference to function template instantiation 'Microsoft::WRL::ComPtr<TDelegateInterface> Microsoft::WRL::Details::DelegateArgTraits<HRESULT (__cdecl ABI::Windows::Foundation::ITypedEventHandler_impl<ABI::Windows::Foundation::Internal::AggregateType<ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPane *>,ABI::Windows::Foundation::Internal::AggregateType<ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *>>::* )(ABI::Windows::UI::ViewManagement::IInputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)>::Callback<ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,Microsoft::WRL::NoCheck,T>(TLambda &&) noexcept' being compiled
with
[
TDelegateInterface=ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,
T=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *),
TLambda=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
wrl\event.h(367,9):
while compiling class template member function 'Microsoft::WRL::ComPtr<TDelegateInterface>::ComPtr(Microsoft::WRL::ComPtr<U> &&,Details::EnableIf<Microsoft::WRL::Details::IsConvertible<U*,T*>::value,void*>::type *) noexcept'
with
[
TDelegateInterface=ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,
T=ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>
]
wrl\event.h(367,9):
see reference to class template instantiation 'Microsoft::WRL::Details::IsConvertible<Microsoft::WRL::Details::DelegateArgTraits<HRESULT (__cdecl ABI::Windows::Foundation::ITypedEventHandler_impl<ABI::Windows::Foundation::Internal::AggregateType<ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPane *>,ABI::Windows::Foundation::Internal::AggregateType<ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *>>::* )(ABI::Windows::UI::ViewManagement::IInputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)>::DelegateInvokeHelper<ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,T,Microsoft::WRL::NoCheck,ABI::Windows::UI::ViewManagement::IInputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *> *,ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*> *>' being compiled
with
[
T=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
wrl\internal.h(67,35):
see reference to class template instantiation 'Microsoft::WRL::Details::DelegateArgTraits<HRESULT (__cdecl ABI::Windows::Foundation::ITypedEventHandler_impl<ABI::Windows::Foundation::Internal::AggregateType<ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPane *>,ABI::Windows::Foundation::Internal::AggregateType<ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *>>::* )(ABI::Windows::UI::ViewManagement::IInputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)>::DelegateInvokeHelper<ABI::Windows::Foundation::ITypedEventHandler<ABI::Windows::UI::ViewManagement::InputPane*,ABI::Windows::UI::ViewManagement::InputPaneVisibilityEventArgs*>,T,Microsoft::WRL::NoCheck,ABI::Windows::UI::ViewManagement::IInputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *>' being compiled
with
[
T=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
As is typical of C++ error messages, the interesting things are at the start and the end.
For the Microsoft Visual C++ compiler, the error message starts with the point where the compiler noticed the error, and it ends with a description of what piece of the original source code triggered that error. Though in this case, the “original source code” got buried in the middle because the compiler chose to show the instantiations oldest first.
The compiler ran into a problem while trying to create a derived class: it realized that the base class is invalid.
error C2516: 'Microsoft::WRL::Details::RemoveReference<TCallback>::Type': is not a legal base class
with
[
TCallback=HRESULT (__cdecl MyClass::* )(ABI::Windows::UI::ViewManagement::InputPane *,ABI::Windows::UI::ViewManagement::IInputPaneVisibilityEventArgs *)
]
The TCallback is the type of &MyClass::OnInputPaneShowing, and from the name, RemoveReference<T> probably removes reference qualifiers. This type is not a reference, so the removal is a no-op, and the error message is effectively saying
'decltype(&MyClass::OnInputPaneShowing)': is not a legal base class
And that’s true. Pointers to member functions are not valid base classes.
Looking at the WRL headers, we see that the DelegateInvokeHelper removes references from TCallback and then derives from it. In our case, the TCallback was a pointer to a member function, and you can’t derive from that.
Even if we got past that problem, the next issue is that the Invoke method tries to use the function call operator operator() to invoke the TCallback. But you can’t invoke pointers to member functions that way. You have to use the function call syntax in conjunction with an explicit object: (obj.*callback)(args).
Okay, so we see from the point of failure that the TCallback parameter must satisfy two criteria:
1. It must be a class type that can be derived from.
2. It must have a function call operator that can be invoked with the delegate arguments.
It seems that the code wants to invoke the OnInputPaneShowing method on the this object, and we need to do that in the function call operator of a class type. Fortunately, C++ provides a handy syntax for this: a lambda.
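A minimal sketch of what that looks like (assuming the member function is named OnInputPaneShowing; the using-declarations stand in for the real namespaces):

// using namespace Microsoft::WRL;
// using namespace ABI::Windows::Foundation;
// using namespace ABI::Windows::UI::ViewManagement;

// A lambda is a class type with a function call operator,
// so it satisfies both requirements above.
auto handler = Callback<ITypedEventHandler<InputPane*, InputPaneVisibilityEventArgs*>>(
    [this](IInputPane* pane, IInputPaneVisibilityEventArgs* args)
    {
        return OnInputPaneShowing(pane, args);
    });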
(We can be lazy about how we forward the arguments because we know that the ABI parameters don’t have move constructors, so forwarding is the same as copying.)
Or if we look at the other overloads of WRL::Callback, we see a family of callbacks that do exactly what we want.
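For example (a sketch of the member-function-pointer overload, again assuming the OnInputPaneShowing name):

// This overload takes the object pointer and the member function
// pointer and generates the wrapper for us.
auto handler = Callback<ITypedEventHandler<InputPane*, InputPaneVisibilityEventArgs*>>(
    this, &MyClass::OnInputPaneShowing);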
The purpose of the discussion was not to diagnose this specific use of the WRL::Callback function but rather as an exercise in reading compiler error messages and reconciling the error text (that talks about what the compiler sees) with the original code in order to understand what the code was expecting, and why we failed to meet that expectation.
Final note: Since this is captured as a raw pointer, we have to ensure that the MyClass object is not destroyed if there is a chance that the event handler could be called (or is in the middle of a call). We get away with it here because the input pane raises its events on the UI thread, so we don’t have to worry about an event being in flight or on its way at the time we unregister it.
Sunday morning, after Laila's long stay in the hospital, Papa and Nana came to get her--as she says, "Papa and Nana come and get you!"--and take her away for an overnight visit, since she hasn't stayed there for a while. That left sashagee and me with some free time, and what we eventually did with it is that we went to the Japanese grocery store Mitsuwa.
Another good one from Nicholas C. Zakas, this time on code portability. Here are some choices he made for a recent project:
Astro, because it can be deployed on a “wide range of cloud services” and also supports a variety of front-end frameworks, so you can “start with React and later move to Vue or Solid without changing your application framework”.
Hono, because “it can run almost anywhere JavaScript runs: Node.js, Deno, Bun, Cloudflare Workers…”
Supabase because “if needed, you can export your entire database and run it anywhere PostgreSQL is supported”.
Cloudflare R2, because it’s “an S3-compatible service, so you can switch providers without hassle”.
Ah, RNA. As one frequently hears, it’s gradually taking over more and more territory in cell biology as we find more and more types of the stuff and more functions for each type. That process is nowhere near finished, you’d have to assume, and this paper makes that argument as well.
It’s a look at the general class of RNA-binding proteins, which many years ago was a fairly short list that has now grown to at least a thousand members. Along the way, a lot of them proved to have recognizable RNA-binding domains (RBDs) like the RNA recognition motif (RRM), the heterogeneous nuclear ribonucleoprotein (hnRNP), the K-homology (KH) domains, and more. Many proteins were added to the list by looking through sequence databases for these motifs. But as the paper notes, in recent years many RNA-binding proteins don’t seem to have these domains (or at least not in a form that we recognize). Even more confusingly, these often aren’t weirdo proteins that no one’s studied before, but rather are widely known ones, with plenty of other functions, that also turn out for some reason to bind RNA.
Such a long list is bound to sprawl out, and the list of RNA-binding proteins sure does. There are the sorts of things you’d expect (very specific high-affinity binders, often recognizing specific RNA base sequences) along with plenty of low-affinity less-selective ones (which often seem to work by recognizing the RNA backbone more than specific bases). But don’t get too complacent: there are high-affinity proteins that hit all kinds of stuff, and low-affinity ones that are quite selective. The realization that a lot of intrinsically disordered proteins (or at least proteins with intrinsically disordered regions) bind RNA has complicated things quite a bit. You typically see an awful lot of RNA and RNA-binding protein content in biomolecular condensate droplets, for example, which are havens for disordered regions. And from the other direction, IDRs might turn out to be one of the most common RNA-binding motifs across all proteins. And remember that some of these (although not all) may well be not so disordered once they are bound to their RNA partners.
Some estimates are that up to 20% of the entire proteome can bind RNA, and moreover that about 20% of all known protein complexes have an RNA component in them; you really have to think about how to deal with such large figures. The paper linked above makes a good point: the earlier years of study in this area were mostly looking at what these various proteins were doing to RNA, but more recently we’ve realized that RNA molecules are agents in their own right, acting on proteins. Indeed there’s a whole field of “riboregulation” studies that is coming together - protein/protein interactions, allosteric binding, transport, and more. At the moment, we have no good ways to predict these things, so we’re just piling up empirical knowledge and hoping to bring clarity to it. As the authors mention here, that means that we really should be as careful and thorough as we can be in order to avoid confusing ourselves even more; the field is big enough and complicated enough as it stands.
There’s a lot to do here, and it’s going to take many years. But in the end, this XKCD might well be on target. . .
We spent most of Shabbat at the hospital with Laila hooked up to an EEG, trying to see if we could find more about the increase of seizures she's been having lately.
The good news: She's...not actually having seizures? Even though sashagee texted me at 4 a.m. that she had had a seizure, and even though the neurology nurses who came in to check up on her when sashagee pushed the button also thought that, the review of the EEG showed none of the characteristic brain activity of a seizure.
The bad news: Now we don't know what is going on. Even the attending physician mentioned that the video seemed very similar to a seizure, though it could also be that Laila was just having night terrors (which we know she has). But that led sashagee to ask how we tell between a seizure and a night terror, and the doctor didn't really have an answer for that.
We'll probably have to go in for a longer EEG where we start drawing down her medication and see if anything triggers an actual seizure. It's possible that none of her recent "seizures" were actually seizures and her increased medication isn't necessary, but it'll take more testing to tell that.
Current Mood: worried
Current Music: Final Fantasy XIV: Dawntrail - Kaleidoscope
Cooking. ... yeah no I managed to make veg spag bol on Friday but otherwise we've mostly just been feeling faintly sorry for ourselves. Okay, no, that's not quite true, I did also achieve baked potato on Wednesday.
Eating. Misc takeaway from The Field (leftover Sunday night curry for dinner on Tuesday; leftover vegetable fried rice + Szechuan tofu for breakfast on same...). I remain mildly resentful that the Wagamama menu still does not contain any of My Favourites.
Growing. The second attempt at pineapple has NEW LEAVES. The second attempt at lemongrass is maybe Going? And other than that I have no idea because I have spectacularly failed to make it to the plot this week.
Observing. BATS. A variety of excellent dahlias and passion flowers on a Trip To Town (post office, pharmacy).
We published an edition of What You Need To Know about Modern CSS last year (2024), and for a while I really wasn’t sure if only a year later we’d have enough stuff to warrant a new yearly version. But time, and CSS, have rolled forward, and guess what? There is more this year than there was last. At least in this somewhat arbitrary list of “things Chris thinks are valuable to know that are either pretty fresh or have enjoyed a boost in browser support.”
Animate to Auto
What is this?
We don’t often set the height of elements that contain arbitrary content. We usually let elements like that be as tall as they need to be for the content. The trouble with that is we haven’t been able to animate from a fixed number (like zero) to whatever that intrinsic height is (or vice versa). In other words, animate to auto (or other sizing keywords like min-content and the like).
Now, we can opt-in to being able to animate to these keywords, like:
html {
  interpolate-size: allow-keywords;
  /* Now if we transition
     "height: 0;" to "height: auto;"
     anywhere, it will work */
}
If we don’t want to use an opt-in like that, alternatively, we can use the calc-size() function to make the transition work without needing interpolate-size.
This is the first time we’ve ever been able to do this in CSS. It’s a relatively common need and it’s wonderful to be able to do it so naturally, without breaking behavior.
And it’s not just height (it could be any property that takes a size) and it’s not just auto (it could be any sizing keyword).
Support
Browser Support
Just Chrome.
Progressive Enhancement
Yes! Typically, this kind of animation isn’t a hard requirement, just a nice-to-have.
Polyfill
Not really. The old fallbacks include things like animating max-height to a beyond-what-is-needed value, or using JavaScript to measure the size off-screen and then doing the real animation to that number. Both suck.
Usage Example
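A minimal sketch of how this plays out (the .panel element and .open class are made up for illustration, not from the original article):

html {
  interpolate-size: allow-keywords;
}

.panel {
  height: 0;
  overflow: hidden;
  transition: height 0.3s ease;
}

.panel.open {
  height: auto; /* now animates, thanks to interpolate-size */
}

/* Or, without the global opt-in, the calc-size() route:
   .panel.open { height: calc-size(auto, size); } */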
Popovers & Invokers
These are separate and independently useful things, and really rather HTML-focused, but it’s nice to show them off together as they complement each other nicely.
What is this?
A popover is an attribute you can put on any HTML element that essentially gives it open/close functionality. It will then have JavaScript APIs for opening and closing it. It’s similar-but-different to modals. Think of them more in the tooltip category, or something that you might want more than one of open sometimes.
Invokers are also HTML attributes that give us access to those JavaScript APIs in a declarative markup way.
Why should I care?
Implementing functionality at the HTML level is very powerful. It will work without JavaScript, be done in an accessible way, and likely get important UX features right that you might miss when implementing yourself.
Support
Browser Support
Popovers are everywhere, but invokers are Chrome only at time of publication.
Remember there are JavaScript APIs for popovers also, like myPopover.showPopover() and secondPopover.hidePopover() but what I’m showing off here is specifically the HTML invoker controls for them. There are also some alternative HTML controls (e.g. popovertarget="mypopover" popovertargetaction="show") which I suppose are fine to use as well? But something feels better to me about the more generic command invokers approach.
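For example, here’s a minimal sketch of the invoker flavor (the element names are made up; the command/commandfor attributes come from the Invoker Commands API):

<button commandfor="my-popover" command="toggle-popover">Toggle</button>
<div id="my-popover" popover>Hello, I am popover content.</div>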
Also — remember popovers pair particularly well with anchor positioning which is another CSS modern miracle.
@function
What is this?
CSS has lots of functions already. Think of calc(), attr(), clamp(), perhaps hundreds more. They are actually technically called CSS value functions as they always return a single value.
The magic with @function is that now you can write your own.
Abstracting logic into functions is a computer programming concept as old as computing itself. It can just feel right, not to mention be DRY, to put code and logic into a single shared place rather than repeat yourself or complicate the more declarative areas of your CSS with complex statements.
Support
Browser Support
Chrome only
Progressive Enhancement
It depends on what you’re trying to use the value for. If it’s reasonable, it may be as simple as:
property: fallback; property: --function();
Polyfill
Not really. Sass has functions, but they are not based on the same spec and will not work the same.
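Here’s a minimal sketch of a custom function (the --negate name and usage are illustrative, following the draft spec’s syntax):

@function --negate(--value) {
  result: calc(var(--value) * -1);
}

.pull-up {
  /* call it like any other CSS value function */
  margin-top: --negate(2rem);
}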
if()
What is this?
Conceptually, CSS is already full of conditional logic. Selectors themselves will match and apply styles if they match an HTML element. Or media queries will apply if their conditions are met.
But the if() function, surprisingly, is the first specific logical construct that exists solely for the purpose of applying logical branches.
Why should I care?
Like all functions, including custom @functions like above, if() returns a single value. It just has a syntax that might help make for more readable code and potentially prevent certain types of code repetition.
Support
Browser Support
Chrome only
Progressive Enhancement
It depends on the property/value you are using it with. If you’re OK with a fallback value, it might be fine to use.
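A minimal sketch (the --scheme custom property is made up; the semicolon-separated branch syntax is from the spec):

.card {
  color: if(style(--scheme: dark): white; else: black);
}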
field-sizing
What is this?
The new field-sizing property in CSS is for creating form fields (or any editable element) that automatically grow to the size of their contents.
Why should I care?
This is a need that developers have been solving in JavaScript since forever. The most classic example is the <textarea>, which makes a lot of sense to grow as large as the user entering information into it needs it to be, without having to explicitly resize it (which is difficult at best on a small mobile screen). But inline resizing can be nice too.
Support
Browser Support
Chrome and looks to be coming soon to Safari.
Progressive Enhancement
Yes! This isn’t a hard requirement usually but more of a UX nicety.
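Usage is essentially a one-liner (the min/max clamps are just an illustrative nicety):

textarea {
  field-sizing: content; /* grow and shrink to fit the typed content */
  min-height: 3lh;       /* a floor so it doesn’t collapse when empty */
  max-height: 12lh;      /* a ceiling so it doesn’t take over the page */
}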
Styleable <select>
What is this?
Styling the outside of a <select> has been decently possible for a while, but when you open it up, what the browser renders is an operating-system-specific default. Now you can opt in to entirely styleable select menus.
Why should I care?
Support
Browser Support
Chrome only
Progressive Enhancement
100%. It just falls back to a not-styled <select> which is fine.
Polyfill
Back when this endeavor was using <selectlist> there was, but in my opinion the progressive enhancement story is so good you don’t need it.
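The opt-in itself is tiny (a sketch; appearance: base-select and the ::picker(select) pseudo-element come from the customizable select work):

select,
::picker(select) {
  appearance: base-select;
}

/* then style the open picker like any other box */
::picker(select) {
  border-radius: 0.5rem;
  box-shadow: 0 4px 12px rgb(0 0 0 / 0.2);
}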
text-wrap
What is this?
The text-wrap property in CSS allows you to instruct the browser that it can and should wrap text a bit differently. For example, text-wrap: balance; will attempt to have each line of text as close to the same length as possible.
Why should I care?
This can be a much nicer default for large font-size elements like headers. It also can help with single-word-on-the-next-line orphans, but there is also text-wrap: pretty; which can do that, and is designed for smaller, longer text as well, creating better-reading text. Essentially: better typography for free.
Support
Browser Support
balance is supported across the board but pretty is only Chrome and Safari so far.
Progressive Enhancement
Absolutely. As important as we might agree typography is, without these enhancements the text is still readable and accessible.
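Usage is about as simple as CSS gets (a sketch):

h1, h2, h3 {
  text-wrap: balance; /* lines of roughly equal length */
}

p {
  text-wrap: pretty; /* nicer rag, fewer orphans */
}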
linear()
What is this?
I think this one is a little confusing because linear as a keyword for transition-timing-function or animation-timing-function kinda means “flat and boring” (which is sometimes what you want, like when changing opacity, for instance). But this linear() function actually means you’re about to do an easing approach that is probably extra fancy, like having a “bouncing” effect.
Why should I care?
Even the fancy cubic-bezier() function can only do a really limited bouncing effect with an animation timing, but the sky is the limit with linear() because it takes an unlimited number of points.
Support
Browser Support
Across the board
Progressive Enhancement
Sure! You could fall back to a named easing value or a cubic-bezier()
Polyfill
Not that I know of, but if fancy easing is very important to you, JavaScript libraries like GSAP have this covered in a way that will work in all browsers.
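A sketch of an overshoot-style easing (the stop values are made up; each entry is an output value with an optional input percentage):

button {
  transition: scale 0.4s linear(0, 1.15 60%, 0.95 80%, 1);
}

button:hover {
  scale: 1.1; /* pops past the target, then settles back */
}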
shape()
What is this?
While CSS has had a path() function for a while, it only took a 1-for-1 copy of the d attribute from SVG’s <path> element, which was forced to work only in pixels and has a somewhat obtuse syntax. The shape() function is basically that, but fixed up properly for CSS.
Why should I care?
The shape() function can essentially draw anything. You can apply it as a value to clip-path, cutting elements into any shape, and do so responsively and with all the power of CSS (meaning all the units, custom properties, media queries, etc). You can also apply it to offset-path, meaning placement and animation along any drawable path. And presumably soon shape-outside as well.
Support
Browser Support
It’s in Chrome and Safari and flagged in Firefox, so everywhere fairly soon.
Progressive Enhancement
Probably! Cutting stuff out and moving stuff along paths is usually the stuff of aesthetics and fun and falling back to less fancy options is acceptable.
Polyfill
Not really. You’re better off working on a good fallback.
.arrow {
  clip-path: shape(
    evenodd from 97.788201% 41.50201%,
    line by -30.839077% -41.50201%,
    curve by -10.419412% 0% with -2.841275% -3.823154% / -7.578137% -3.823154%,
    smooth by 0% 14.020119% with -2.841275% 10.196965%,
    line by 18.207445% 24.648236%, hline by -67.368705%,
    curve by -7.368452% 9.914818% with -4.103596% 0% / -7.368452% 4.393114%,
    smooth by 7.368452% 9.914818% with 3.264856% 9.914818%,
    hline by 67.368705%, line by -18.211656% 24.50518%,
    curve by 0% 14.020119% with -2.841275% 3.823154% / -2.841275% 10.196965%,
    curve by 5.26318% 2.976712% with 1.472006% 1.980697% / 3.367593% 2.976712%,
    smooth by 5.26318% -2.976712% with 3.791174% -0.990377%, line by 30.735919% -41.357537%,
    curve by 2.21222% -7.082013% with 1.369269% -1.842456% / 2.21222% -4.393114%,
    smooth by -2.21222% -7.082013% with -0.736024% -5.239556%,
    close
  );
}
The natural resizability and more readable syntax is a big advantage over path().
More Powerful attr()
What is this?
The attr() function in CSS can pull the string value of an attribute from the matching HTML element. So with <div data-name="Chris"> I can do div::before { content: attr(data-name); } to pull off and use “Chris” as a string. But now, you can apply types to the values you pull, making it a lot more useful.
Why should I care?
Things like numbers and colors are a lot more useful to pluck off and use from HTML attributes than strings are.
attr(data-count type(<number>))
Support
Browser Support
Chrome only
Progressive Enhancement
It depends on what you’re doing with the values. If you’re passing through a color for a little aesthetic flourish, sure, it can be an enhancement that falls back to something else or nothing. If it’s crucial layout information, probably not.
Polyfill
Not that I know of.
Usage Example
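A minimal sketch (the data-bg attribute, .tag class, and fallback color are made up):

/* <span class="tag" data-bg="teal">CSS</span> */
.tag {
  background: attr(data-bg type(<color>), lightgray);
}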
Reading Flow
What is this?
There are various ways to change the layout such that the visual order no longer matches the source order. The new reading-flow property allows us to continue to do that while updating the behavior such that tabbing through the elements happens in a predictable manner.
Why should I care?
For a long time we’ve been told: don’t re-order layout! The source order should match the visual order as closely as possible, so that tabbing focus through a page happens in a sensible order. When you mess with the visual order and not source order, tabbing can become zig-zaggy and unpredictable, even causing scrolling, which is a bad experience and a hit to accessibility. Now we can inform the browser that we’ve made changes and to follow a tabbing order that makes sense for the layout style we’re using.
Support
Browser Support
Chrome only
Progressive Enhancement
Not particularly. We should probably not be re-ordering layout wildly until this feature is more safely across all browsers.
Polyfill
No, but if you were so inclined you could (hopefully very intelligently) update the tabindex attributes of the elements to a sensible order.
Usage Example
.grid {
  reading-flow: grid-rows;
}
Re-ordering a grid layout is perhaps one of the most common things to re-order, and having the tabbing order follow the rows after re-arranging is sensible, so that’s what the above line of code is doing. But you’ll need to set the value to match what you are doing. For instance, if you are using flexbox layout, you’d likely set the value to flex-flow. See MDN for the list of values.
“Masonry” layout, despite having different preliminary implementations, is not yet finalized, but there is enough movement on it that it feels like we’ll see it get sorted out next year. The most interesting development at the moment is the proposal of item-flow and how that could not only help with Masonry but bring other layout possibilities to other layout mechanisms beyond grid.
The CSS property margin-trim is super useful and we’re waiting patiently to be able to use it in more than just Safari.
The sibling-index() and sibling-count() functions are in Chrome and, for one thing, are really useful for staggered animations.
For View Transitions, view-transition-name: match-element; is awfully handy as it prevents us from needing to generate unique names on absolutely everything. Also — Firefox has View Transitions in development, so that’s huge.
We should be able to use calc() to multiply and divide with units (instead of requiring the second operand to be unitless) soon, instead of needing a hack.
We never did get “CSS4” (Zoran explains nicely) but I for one still think some kind of named versioning system would be of benefit to everyone.
If you’re interested in a more straightforward list of “new CSS things” for say the last ~5 years, Adam Argyle has a great list.
Great Stuff to Remember
Container queries (and units) are still relatively new and the best thing since media queries in CSS.
The :has() pseudo-class is wildly useful for selecting elements where the children exist or are in a particular state.
On page 187 (of 218), we finally get this paragraph:
At this point we need to return to a crucial caveat. In most cases of persistent pain, whatever caused the initial injury has healed. Pain is now the primary disease. But there are a number of cases where there is continual damage that triggers nociceptive fibres; chronic inflammatory diseases are good examples. It is also important to point out that not every case of back pain is our brain's overreaction. A small -- but important -- minority of cases are caused by serious conditions -- cancer, some infections, spinal fractures and the nerve-compressing cauda equina syndrome -- but these can usually be ruled out by doctors, who will be on the lookout for 'red flag' symptoms. However, in the majority of cases of persistent pain (and over 90% of cases of back pain), there is no longer any identifiable tissue damage; our brain has become hypersensitive.
In a book that otherwise dedicates a lot of time to talking about gender and racial inequalities in healthcare access, including a solid half-paragraph on how common and how painful endometriosis (a chronic inflammatory condition!) is, the bit where "well this only applies to most people..." gets breezed past is certainly causing me more feelings. And yet it's still the closest anything I've read so far actually gets to engaging with the fact that the rest of us exist, so... no get-out-of-writing-essays-free card for me here, alas.
(The Painful Truth, Monty Lyman, mostly pretty good and definitely got me to think constructively about a few things -- like the merits of classical vs contemporary Pilates for my specific use case via discussion of knitting -- and introduced me to some more, like open-label placebos and "safe threats" and the impact of paracetamol on empathy. It's incomplete, but not disrecommended.)
Hey folks, Fireside this week! Next week we should be back to start looking at the other half of labor in the peasant household, everything that isn’t agriculture. Also, here are some cats:
Catching that perfectly timed Percy-yawn, while Ollie (below) is doing his best Percy impression with those narrowed eyes.
For this week’s musing, I want to address something that comes up frequently in the comments, particularly any time we discuss agriculture: the ‘Malthusian trap.’ Now of course to a degree the irony of addressing it here is that it will still come up in the comments because future folks raising the point won’t see this first, but at least it’ll be written somewhere that I can refer to.
To begin, in brief, the idea of a Malthusian trap derives from the work of Thomas Robert Malthus (1766-1834) and his work, An Essay on the Principle of Population (1798). In essence the argument goes as follows (in a greatly simplified form): if it is the case that the primary resources to sustain a population grow only linearly, but population grows exponentially, then it must be the case that population will, relatively swiftly, approach the limits of resources, leading to general poverty and immiseration, which in turn provide the check that limits population growth.
As an exercise in logic Malthus’ point is inescapable: if you accept his premises and run the experiment long enough you must reach his conclusion. In short, given an exponentially growing population and given resources that only grow linearly and given an infinite amount of time, you have to reach the Malthusian ‘trap’ of general poverty and population checked only by misery. So far as that goes, fine.
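In symbols (my gloss, not Malthus’ own notation): if population grows exponentially while resources grow only linearly,

$$P(t) = P_0 e^{rt}, \qquad R(t) = R_0 + ct,$$

then resources per person $R(t)/P(t)$ tend to zero as $t \to \infty$ for any growth rate $r > 0$, no matter how generous the linear gain $c$ is.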
The problem is assuming any of those premises were generally correct in any given point in history.
I find this comes up whenever I point out that certain social and political structures – the Roman Empire most notably – seem to have produced better economic conditions for the broad population or that other structures – Sparta, say – produced worse ones: someone rolls in to insist that because the Malthusian trap is inevitable the set of structures doesn’t matter, as a better society will just produce an equally miserable outcome shortly thereafter with a larger population. And then I respond that Malthus is not actually always very useful for understanding these interactions, which prompts disbelief because – look just above – his logic is airtight given his premises and his premises are at least intuitive.
Because here’s the thing: Malthus was very definitely and obviously wrong. Malthus was writing as Britain (where he wrote) was beginning to experience the initial phases of the demographic transition, which begins with a period of very rapid population growth as mortality declines but birth rates remain mostly constant. Malthus generalizes those trends, but of course those trends do not generalize; to date they have happened exactly once in every society where they have occurred. Instead of running out of primary resources, world population is expected to peak later this century around 10.5 billion and we already can grow enough food for 10.5 billion people. The next key primary resource is energy and progress on renewable energy sources is remarkable; at this point it seems very likely that we will have more power-per-person available at that 10.5 billion person peak than we do today. Living standards won’t fall, they’ll continue to rise, assuming we avoid doing something remarkably foolish like a nuclear war. Even climate change – which is a very real problem – will only slow the rate of improvement under most projections, rather than result in an actual decline.
So while Malthus’ logic is ironclad and his premises are intuitive, as a matter of fact and reality he was wrong. Usefully wrong, but wrong. The question becomes why he was wrong. And the answer is that basically all of his premises are at least a little wrong.
The first, as we’ve noted, is that Malthus is extrapolating out a rate of population growth based on an unusual period: the beginning of rapid growth in the second stage of the demographic transition – and then he is extrapolating that pattern out infinitely in time in every direction. And that is a mistake, albeit an easy one to make: to assume that the question of population under agrarian production is an effectively infinite running simulation which has already (or very soon will) reach stability.
Here’s the thing (this is a very rough chronology): human beings (Homo sapiens) appeared about 300,000 years ago. We started leaving the cradle of Africa around 130,000 years ago, more or less, and only filled out all of the major continents about 15,000 years ago. The earliest beginnings of agriculture are perhaps 20,000 years old or so, but agriculture reached most places in the form Malthus would recognize it much later. Farming got to Britain about 6,500 years ago. Complex states with large urban populations are 5,000 or so years old. Large sections of the American Great Plains and the Eurasian Steppe were grazing land until the last 150 years.
In short, it is easy to assume, because human lives are so short, that the way we have been living – agrarian societies – are already effectively ‘infinitely’ old. But we’re not! Assuming we do not nuke ourselves or cook the planet, in the long view pre-industrial agriculture will look like a very brief period of comparatively rapid development between hundreds of thousands of years of living as hunter-gatherers and whatever comes after now. To Malthus, whose history could stretch no further back than the Romans and no further forward than the year in which he wrote, his kind of society seemed to have existed forever. It seemed that way to the Romans too. But we’re in a position to see both before agrarian economies and also after them; we’re not smarter, we just have the luck of a modestly better vantage.1
In short, we might assume that, given infinite time, exponential population growth will outpace any gains made to production, but you shouldn’t assume infinite time, because we are actually dealing with a very finite amount of time. Farmers, whose demographics concern us here, appear around 20,000 years ago and begin filling up the Earth, spreading out to bring new farmland under the plow (displacing, often violently, lower population density societies as they did so), and that process was arguably nearing completion but not yet complete when the second agricultural and first industrial revolutions fundamentally changed the basis of production. As we’ve discussed, estimates of global population in the deep past are deeply fraught, but there is general agreement that population globally has increased more or less continuously since the advent of farming; it never stalled out at any point. In short, the Malthusian long run is so long that it almost doesn’t matter.
But if we limit our view to a specific region or society, that changes things. We certainly do see, if not Malthusian traps, what we might term ‘Malthusian interactions’ apparent in history. Rising population density and trade connectivity help spread disease, which lead to major downward corrections in population like the Antonine Plague, the Plague of Justinian, the Black Death and the diseases of the Columbian Exchange. Notably though, these sudden downward corrections are at best only somewhat connected to population growth and resource scarcity: lower nutrition may play a role, but travel, trade lanes, high density cities and exposure to novel pathogens seems to play a larger role. It’s not clear that something like the Black Death would have been dramatically less lethal if the European population were 10 or 15% less; it seems quite clear the diseases of the Columbian exchange cared very little for how well fed the populations they devastated were. Still, we see the outline of what Malthus might expect: downward pressure on wages before the population discontinuity and often upward pressure afterwards (most clearly visible with the Black Death in Europe).
So does Malthus rule the ‘small print’ as it were? Perhaps, but not always. For one, it is possible, even in the pre-modern world, to realize meaningful per capita gains in productivity due to new production methods like new farming techniques. It is also possible for greater connectivity through trade to enable greater production by comparative advantage. It is also possible for capital accumulation in things like mills or draft animals to generate meaningful increases in production. And of course some political and economic regimes may be more or less onerous for the peasantry. Any of these things moving in the right direction can effectively create some ‘headroom’ in production and resources. Some of that ‘headroom’ is going to get extracted by the tiny number of elites at the top of these societies, but potentially not all of it.
This is what I often refer to as a society moving between equilibria (a phrasing not original to me), from a stable condition of lower production (a low equilibrium) to a stable condition of higher production (a high equilibrium).
Now, thinking just about food production, the Malthusian interaction ought to catch up with us in the long run. The population increases, but the available land supply cannot keep pace – new lands brought under the plow are more marginal than old lands and so on – and so the surplus food per person steadily declines as the population grows until we’re back where we started. Except there are two problems here.
The first is that this can take a long time, even in a single society, region, or state, because even under ideal nutrition standards, these societies increase in population slowly compared to the rapid sort of exponential growth Malthus was beginning to see in the 1700s. It can take so long that exogenous shocks – invasion, plague, or new technology enabling a new burst of ‘headroom’ – arrive before the ceiling is reached and growth stops. Indeed, given the trajectory of pre-modern global population, that last factor must have happened quite a lot, since even the population of long-settled areas never quite stabilizes in the long term.
All of which is to say, in the time frame that matters – the time scale of states, regimes, economic systems and so on, measured in centuries not millennia – some amount of new ‘headroom’ might be durable and indeed we know it ended up being so, lasting long enough for us to get deep enough into the demographic transition that we could put Malthus away almost entirely.
The second thing to note is that not all material comforts are immediately related to survival and birth rates. To take our same society where some innovation has enabled increased production: the population rises, but no new land enters cultivation. That creates a segment of the population who can be fed, but who need not be farmers: they can do other things. Of course in actual pre-modern societies, it is mostly the elite who decide what other things these fellows do and many of those things (warfare, monumental construction, providing elite extravagance) do very little for the common folks.
But not always. Sometimes that new urban population is going to make stuff, stuff which might flow to consumers outside of the elite. We certainly seem to see this with sites of large-scale production of things like Roman coarseware pottery. Or, to take something from my own areas, it is hard not to notice that the amount of worked metal we imagine to be available for regular people for things like tools seems to rise as a function of time. Late medieval peasants do seem to have more stuff than early medieval or Roman peasants in a lot of cases. Wages – either measured in silver or as a ‘grain wage’ – may not be going up, but it sure seems like some things end up getting more affordable because there are more people making them.
And of course some of that elite investment might also be generally useful. Of course as a Roman historian, the examples of things like public baths and aqueducts, which provided services available not merely to the wealthy but also the urban poor, spring immediately to mind. And so even if the amount of grain available per person has stayed the same, the number of non-farmers as a percentage of the society has increased, making non-grain amenities easier for a society to supply. And naturally, social organization is going to play a huge role in the degree to which that added production does or does not get converted into amenities for non-elites.
In short it is possible for improvements to provide quality of life improvements even if a new Malthusian ceiling is reached. It is the difference between getting 3,000 calories in a wood-and-plaster building with a terracotta roof, a good collection of coarseware pottery and clean water from an aqueduct versus getting 3,000 calories in a wood-and-mud hut with a thatched roof, no pottery at all and having to pump water at the local well. In a basic Malthusian analysis, these societies are the same, but the lived experience is going to be meaningfully different.
Notionally, of course, you might argue that if population continued to rise we’d eventually reach the end of those fixed resources too: we’d run out of clay and metal ores and fresh water sources and so on, except that of course there are 8.2 billion of us and we haven’t yet managed to run out – or even be seriously constrained – by any of those things. We haven’t even managed to run out of oil or coal and again, at the rate at which renewable energy technology is advancing, it looks like we may never run out of oil, so much as it just won’t be worth anyone’s time pulling the stuff out of the ground.2
None of which is to say that Malthus is useless. Malthusian interactions do occur historically. But they do not always occur because the sweep of history is not infinitely long and developments which produce significant carrying capacity ‘headroom’ actually happen, on balance, somewhat faster than societies manage to reach the limit of that capacity.
Ollie gazing gloriously into the sun of a new day, while Percy, in shadow, plots his downfall.
On to Recommendations:
First off, the public classics project Peopling the Past has turned five! Congratulations to them. Peopling the Past runs both a blog and a podcast, both highlighting the ways that scholars, especially early career scholars, study people in the (relatively deep) past, with an emphasis on highlighting interesting work and the methods it uses. It’s a great project to follow if you want a sense of how we know things about the past and the sort of work we continue to do to understand more, with an especially strong focus on archaeology.
Meanwhile over on YouTube, and coinciding a bit with our discussion of Malthus, Angela Collier has a video on why “dyson spheres are a joke,”3 in the sense that they were quite literally proposed by Freeman J. Dyson as a joke, a deliberate ‘send up’ of the work of some of his colleagues he found silly, rather than ever being a serious suggestion for science fiction super-structures.
Where this cuts across our topic is that Dyson, writing in 1960, explicitly cites “Malthusian pressures” as what would force the construction of such a structure and it serves as a useful reminder that until well into the 1980s and 1990s, there were quite a lot of ‘overpopulation’ concerns and it was common to imagine the future as involving extreme overpopulation and resource scarcity. I wouldn’t accuse Dyson of this view (he is, as noted, writing a paper as satire), but I think it is notable that these panics continued substantially on the basis of assumptions that the demographic transition – which was already pretty clearly causing population growth in Europe to begin to slow significantly by the 1950s and 1960s – was, in effect, a ‘white people only’ phenomenon, fueling often very racially inflected fears about non-white overpopulation. You can see this sort of racist-alarmist-panic pretty clearly in Paul Ehrlich’s The Population Bomb (1968), appropriately skewered in the If Books Could Kill episode on it.
Of course, as noted, what actually happened is that the demographic transition does not care about race or racists and happens to basically all societies as they grow wealthier and more educated – indeed, it has often happened faster in countries arriving at affluence late – with the result that it now appears that the ‘population bomb’ will never happen.
For this week’s book recommendation, I am going to recommend Rebecca F. Kennedy, C. Sydnor Roy and Max L. Goldman, Race and Ethnicity in the Classical World: An Anthology of Primary Sources in Translation (2013). Students often ask questions like ‘what did the Greeks and Romans think about race?’ and the complicated answer is they thought a lot of things. That can come as a surprise to moderns, as we’re really used to the cultural hegemony of ‘scientific racism’ and the reactions against it. But it is in fact somewhat unusual that a single theory of race – as unfounded in actual reality as all of the others – is so dominant globally as an ideology that people either hold or push against. Until the modern period, you were far more likely to find a confusing melange of conflicting theories (advanced with varying degrees of knowledge or ignorance of distant peoples) all presented more or less equally. Consequently, the Greeks and Romans didn’t think one thing about race, but had many conflicting ideas about where different peoples fit and why.
That makes an anthology of sources in translation an ideal way to present the topic and that is what Kennedy, Roy and Goldman have done here. This is very much what it says ‘on the tin’ – a collection of translated primary sources; the editorial commentary is kept quite minimal and the sources do largely speak for themselves. The authors set out roughly 200 different passages – some quite short, some fairly long – from ancient Greek and Roman writers that touch on the topic of race or ethnicity. Those passages are organized into two sections, the first covering theories and the second covering regions. In the first section, the reader is given examples of some of the dominant strains of how Greeks and Romans thought about different peoples and what made them different – genealogical theories, environmental theories (people become different because they are molded by different places), cultural models and so on. The approach is a brilliant way to hammer home to the reader the lack of any single hegemonic model of ‘otherness’ in this period, while also exposing them to the most frequent motifs with which the ancients thought about different peoples.
Then the back two-thirds of the book proceeds in a series of chapters covering specific regions. Presenting, say, almost 20 passages on the peoples of ‘barbarian’ Europe (Gaul, Germany, Britain) together helps the reader get a real sense both of the range of ways specific regions were imagined and of the tropes, motifs and stereotypes that were common among ancient authors.
The translations in the volume are invariably first-rate, easy to read while remaining faithful to the original text. The editorial notes are brief but can help put passages in the context of the larger works they come from. The book also features reprints of a series of maps showing the world as described by the Greeks and Romans, a useful way to remember how approximate their understanding of distant places and their geographic relations could be. Overall, the volume is useful as a reference text – when you really need to find the right passage to demonstrate a particular motif, stereotype or theory of difference – but is going to be most valuable to the student of antiquity who wants to begin to really get a handle on the varied ways the Greeks and Romans understood ethnic and cultural difference.
I’ve never been on the receiving end of the sorts of manuscript peer reviews detailed in this article, but I know for sure that they’re out there. Examples shown include things like “This manuscript was not worth my time so I did not read it and recommend rejection”, “What the authors have done is an insult to science” and “This young lady is lucky to have been mentored by the leading men in the field”. Completely unacceptable.
The point of reviewing an article for publication is to offer constructive criticism, not ad hominem zingers. I mean, even if a manuscript is an insult to science, you can tell the authors what you think is wrong with it and why you don’t think it should be published. I realize that takes longer than insulting them, but there you have it. There really are worthless manuscripts out there, God knows, but just saying “This is worthless” doesn’t do anything to help solve the problem. Tell the authors, tell the editors what the problems are. And if the paper isn’t down in that category but (in your view) has significant problems, well, tell the authors what those problems are without mocking them.
As the article mentions, cultural factors can blur the line between plainspoken criticism and insults, but the examples above (and many others quoted) definitely cross the line in anybody’s culture. I have (for example) told authors that their paper is (in my view) not ready for publication until they cite some extremely relevant literature, but I didn’t go on to add my suspicions that they were avoiding doing so to try to make their own work look more novel, or perhaps that they were just too slapdash to have realized that there was any such precedent at all. At most, I might say something like this in the “Notes to the Editors” section that the authors don’t see. Another common problem is poor English on the part of the authors, but that doesn’t call for insults, either: just note that the paper needs polishing up, perhaps giving a few examples of what you mean. All of us who have had to get by in second (or third!) languages are familiar with the problem of sounding unintelligent in them, but just as we don’t want others to make that assumption about us, we shouldn’t turn around and do the same.
I’ve also given “Do not publish” reviews that are more “Do not publish here” when I think that a paper is not a good fit for the journal that it’s been sent to. Given today’s landscape, I think that the old-fashioned category of “Not fit to be published at all” is long dead - there are so many journals out there, many of them hungry for manuscripts and/or author fees, that anything at all can be published somewhere. But most of the time I end up recommending publication after some fixes (and I try not to be one of those reviewers who suggest something that means nine more months of experimental work).
It’s the anonymity that breeds the nastiness, for sure. I have said unkind things about published work here on the blog, of course, but by gosh I say it under my own name with my email address attached. You shouldn’t use reviewer anonymization, in my view, just to say things to authors that you wouldn’t tell them to their faces. As the article says, a key test is for authors in turn to ask themselves, when they get unfavorable comments, whether these things will help them revise their paper or strengthen their results, or whether all they do is shake their confidence (or piss them off, I will add myself). There may be some of each, naturally. But you shouldn’t be afraid to call out unprofessional comments to the editors themselves.
A lot of people who make it a point to talk about how they tell it like it is and how they aren’t afraid to hurt anyone’s feelings are actually trying to give themselves licenses to behave like assholes, because that’s the part that they really enjoy. We have our share of those in the research world, perhaps an outright statistical surplus. But that doesn’t mean we have to give them what they want.
A little while ago the toddler's household told me that you could turn the top of a pineapple into a whole entire pineapple plant (with the caveat that at least 60% of the time it goes mouldy). My first attempt at this had got as far as growing a whole entire root network but then suffered a Tragic Incident from which it never recovered; the second had been sat around with partially-browned but no-longer-becoming-more-browned and definitely-still-partially-green leaves for Quite Some Time. I had more or less hit the point of "... is this actually doing anything? at all?" and then upon my return from the most recent round of Adventures I rotated it in service of watering it, to discover...
... that it's growing a WHOLE NEW SET OF LEAVES. Look at it go! I am very excited!
(My understanding is that if I manage to keep it alive that long it'll take somewhere in the region of 3 years to fruit, and then in the fashion of all bromeliads will die having produced said single fruit. Happily this is about the rate at which we eat fresh pineapple...)
The prohibition is against closing a handle while another thread is waiting on that handle. It’s a case of destroying something while it is in use. More generally, you can’t close a handle while another thread is reading from that handle, writing to that handle, using that handle to modify a process or thread’s priority, signaling that handle, whatever.
But if one thread is waiting on the original handle and another thread closes a duplicate, that is not the same handle, so you didn’t break the rule. In fact, closing a duplicate while another thread is waiting on the original is not an uncommon scenario.
Consider this: Suppose there is a helper object whose job it is to set an event handle when something has completed. For example, maybe it’s something similar to ID3D12Fence::SetEventOnCompletion. When you give it the event handle, the object has to duplicate the handle to ensure that it can still get the job done even if the caller later closes the handle. Eventually, the thing completes, and the object calls SetEvent() with the duplicated handle and then closes the duplicate.
Meanwhile, your main thread has called WaitForMultipleObjects to wait on a block of handles, including the original event.
There is nothing wrong with the helper object closing its private copy of the handle. The point is it didn’t close your copy of the handle, which means that the handle being waited on is not closed while the wait is in progress.
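To make that concrete, here is a minimal sketch of the pattern, assuming a hypothetical helper: the name HelperSetEventOnCompletion and the simulated work are my own illustration, not the actual ID3D12Fence machinery. The helper duplicates the caller’s event handle, signals and closes its private duplicate, and the caller’s wait on the original handle remains valid throughout.

```cpp
#include <windows.h>
#include <thread>
#include <cstdio>

// Hypothetical helper: duplicates the caller's event handle so that it
// owns an independent reference to the same underlying event object.
void HelperSetEventOnCompletion(HANDLE callerEvent)
{
    HANDLE duplicate = nullptr;
    if (!DuplicateHandle(GetCurrentProcess(), callerEvent,
                         GetCurrentProcess(), &duplicate,
                         0, FALSE, DUPLICATE_SAME_ACCESS)) {
        return; // real code would report the error
    }

    std::thread([duplicate] {
        Sleep(100);             // simulate the work completing
        SetEvent(duplicate);    // signal through the private copy...
        CloseHandle(duplicate); // ...then close it. Legal: this is not
                                // the handle the main thread waits on.
    }).detach();
}

int main()
{
    HANDLE event = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    HelperSetEventOnCompletion(event);

    // The main thread waits on the *original* handle. (The scenario above
    // uses WaitForMultipleObjects; one handle suffices to show the rule.)
    // The helper closing its duplicate does not invalidate this wait; the
    // event object stays alive while any handle to it remains open.
    WaitForSingleObject(event, INFINITE);
    std::puts("event signaled");
    CloseHandle(event);
}
```

The underlying design point is that kernel objects are reference-counted through their handles: CloseHandle releases one reference, and the object is destroyed only when the last handle to it goes away, so each party can manage the lifetime of its own handle independently.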