Microspeak: turn into a pumpkin

Oct. 31st, 2025 02:00 pm
[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

The English idiom turn into a pumpkin comes from the fairy tale Cinderella, in which the protagonist’s ragged clothes are magically transformed into an elegant gown and a pumpkin is transformed into a carriage, allowing her to attend a royal ball. She is warned that the magic wears off at midnight, at which point her clothes and carriage return to their original forms.

In some fields, the idiom turn into a pumpkin means to regress to a previous level of performance after a period of marked (but perhaps inexplicable) improvement. “The widget install success rate has gone up from 90% to 95%, which is great, but we don’t have a good understanding of why this is happening, since we haven’t made any significant changes to widget installation. It won’t be surprising if this improvement turns into a pumpkin.” (In other words, if this improvement vanishes just as mysteriously as it appeared.)

At Microsoft, the phrase turn into a pumpkin also applies to expected reductions in performance: During the period from the United States Thanksgiving holiday (the end of November) through Christmas (December 25) to the end of the year, a large percentage of people take vacation, leaving teams running on what feels like a skeleton crew. In anticipation of this reduction, many engineering services scale back their capacity. For example, a minor branch might not get any builds at all, and the major branches might build only once a day instead of twice.

Now you know what somebody means when they say something like, “We should wrap this up before people turn into pumpkins at the end of the year.”

Bonus chatter: Other citations I’ve found

The installation will turn into a pumpkin at the end of the trial period.

In this case, this is saying that the installation will cease to function at the end of the trial period. It will lose its magical powers and become about as useful as a pumpkin.

We should ask Chris before he turns into a pumpkin.

This could mean “We should ask Chris before he goes on vacation.” I’ve also seen it used in online meetings when one participant is joining from a far-away time zone at what for them is an inconvenient time. In this case, it means “We should ask Chris before he falls asleep from exhaustion.”

The post Microspeak: turn into a pumpkin appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

You might, for some reason, be building some XAML in code rather than markup. Starting with this XAML:

<!-- XAML markup -->
<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto" />
        <ColumnDefinition Width="Auto" />
    </Grid.ColumnDefinitions>
</Grid>

Here’s a flawed attempt to create the grid in code.

// C#
var autoWidth = new ColumnDefinition() { Width = GridLength.Auto };
var grid = new Grid() { ColumnDefinitions = { autoWidth, autoWidth } };

We create a column definition whose Width is Auto, and then create a grid whose ColumnDefinitions collection consists of two columns, namely two of our Auto columns.

This throws a confusing exception.

System.Runtime.InteropServices.COMException (0x800F1000): No installed components were detected.

What do you mean, no installed components were detected? Why does something need to be installed at all? Do I have to install a special component to be able to create XAML elements from code?

Okay, the first problem is an error code collision. The error message text comes from error number 0x800F1000, which is SPAPI_E_ERROR_NOT_INSTALLED and belongs to the SETUPAPI facility. Unfortunately, XAML decided to put some of its error codes in the same facility, resulting in an error code collision: XAML defined error code number 0x800F1000 to mean “invalid operation”, but that gave it the same number as SPAPI_E_ERROR_NOT_INSTALLED. Therefore, when various components try to decode the error code, they come up with the setup error instead of the XAML error.

Another XAML error code collision is “X is not a valid value for property Y” which has the numeric value 0x800F1001 and which collides with SPAPI_E_BAD_SECTION_NAME_LINE.
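To make the collision concrete, here’s a quick C# sketch of how an HRESULT packs its pieces (the variable names are mine): the facility lives in bits 16 through 26 and the code in the low 16 bits, so two components that hand out the same code number in the same facility produce byte-for-byte identical error values.

// C#
uint hr = 0x800F1000;
uint facility = (hr >> 16) & 0x7FF; // 0x00F = 15 = FACILITY_SETUPAPI
uint code = hr & 0xFFFF;            // 0x1000, claimed by both SETUPAPI and XAML
Console.WriteLine($"facility = 0x{facility:X3}, code = 0x{code:X4}");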

So the real problem has nothing to do with installed components. That’s just an unfortunate decode of the error code.

Fortunately, XAML provides a details string, which is easily overlooked because it’s two lines away and looks like part of another paragraph.

System.Runtime.InteropServices.COMException (0x800F1000): No installed components were detected.

Element is already the child of another element.
Source: Cannot evaluate the exception source.

The problem is that we are reusing the same ColumnDefinition object to describe two different columns. The markup creates two different ColumnDefinition objects, but our code version creates one object and inserts it twice.

To make our code equivalent to the markup, we have to create two ColumnDefinition objects, because that’s what the markup does.

// C#
var autoWidth1 = new ColumnDefinition() { Width = GridLength.Auto };
var autoWidth2 = new ColumnDefinition() { Width = GridLength.Auto };
var grid = new Grid() { ColumnDefinitions = { autoWidth1, autoWidth2 } };

And since we’re not reusing the column definition, we don’t need to give it a name.

// C#
var grid = new Grid() { ColumnDefinitions = {
        new ColumnDefinition() { Width = GridLength.Auto },
        new ColumnDefinition() { Width = GridLength.Auto }
    } };

It’s convenient that this code does resemble the original XAML.
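If you create Auto columns in several places, you can keep things terse without falling back into the shared-object trap by using a small factory. This is just a sketch with a hypothetical helper name; the important property is that every call produces a fresh ColumnDefinition:

// C#: AutoColumn is a hypothetical helper, not part of the XAML API.
// Each call returns a brand-new ColumnDefinition, so no single object
// is ever inserted into the tree twice.
static ColumnDefinition AutoColumn() =>
    new ColumnDefinition() { Width = GridLength.Auto };

var grid = new Grid() { ColumnDefinitions = { AutoColumn(), AutoColumn() } };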

We thought we were doing the right thing by reusing an existing object with identical properties, but the XAML tree is a tree, and you can’t insert the same node into a tree in two different locations.

The post Trying to build a XAML tree in code throws a “No installed components were detected” exception appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Windows Control Flow Guard (CFG) is a defense-in-depth feature that validates indirect call targets. The idea is that each module that is enabled for CFG provides a bitmap that describes which addresses in the module are intended to be targets of indirect calls. When CFG is enabled in a process, indirect function calls are checked against this bitmap, and if the address is deemed invalid, the process terminates itself, and the Watson service records the details for future investigation.

If you are studying a crash in the control flow guard validator,¹ you may want to pick out the failed address so you can understand better what went wrong and use it to guide the next step of your debugging. (Was it a bad address? Was the DLL unloaded? Was it a garbage value due to use-after-free?)

In general, the control flow guard validator takes a function address in some register, performs shifting and masking operations using that register as a source (to calculate the bit position in the call target bitmap), and then tests a bit. The source register is left unchanged so that the caller, on success, can use the validated address as a jump target.

Let’s practice. Here’s one of the control flow guard validator functions for x86-64, which Windows often calls x64. Try to spot the register that holds the address being validated.

ntdll!LdrpValidateUserCallTarget:
    mov     rdx,qword ptr [ntdll!................]
    mov     rax,rcx                  
    shr     rax,9                     ; shift
    mov     rdx,qword ptr [rdx+rax*8] ; crash here
    mov     rax,rcx
    shr     rax,3
    test    cl,0Fh
    jne     @1
    bt      rdx,rax
    jae     @2
    ret
@1: btr     rax,0
    bt      rdx,rax
    jae     @3
@2: or      rax,1
    bt      rdx,rax
    jae     @3
    ret
@3: mov     rax,rcx
    xor     r10d,r10d
    jmp     ntdll!LdrpHandleInvalidUserCallTarget

We see that the value in rcx gets moved into rax, and then rax gets shifted. So the address being validated is in rcx. The marked instruction is the only one that accesses memory at an address derived from that register (the first instruction merely loads the bitmap base from a fixed address), so if there’s a crash, it’ll happen there. The rest of the function is just bit twiddling.
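As a cross-check, here is a rough C# rendering of the arithmetic in the x64 listing above. The function shape, the bitmap parameter, and the reading of the two bits per 16-byte region are my own reconstruction from the instructions, not ntdll’s actual code:

// C#: a sketch of the validator's index arithmetic, not the real ntdll implementation.
static bool ProbeCallTargetBitmap(ulong[] bitmap, ulong target)
{
    ulong chunk = bitmap[target >> 9];     // shr rax,9 + mov rdx,[rdx+rax*8]: the load that crashes on garbage
    int bit = (int)((target >> 3) & 0x3F); // shr rax,3: bt masks the bit index to 6 bits on its own
    if ((target & 0xF) == 0)               // test cl,0Fh: is the target 16-byte aligned?
        return ((chunk >> bit) & 1) != 0          // aligned: valid if its own bit is set,
            || ((chunk >> (bit | 1)) & 1) != 0;   // or if the odd "valid at any offset" bit is set
    bit &= ~1;                             // btr rax,0: unaligned targets start from the even bit
    return ((chunk >> bit) & 1) != 0
        && ((chunk >> (bit | 1)) & 1) != 0;       // unaligned: both bits must be set
}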

Let’s do the same exercise for x86-32, which Windows often just calls x86.

ntdll!LdrpValidateUserCallTarget:
    mov     edx,dword ptr [ntdll!........]
    mov     eax,ecx                  
    shr     eax,8                     ; shift
    mov     edx,dword ptr [edx+eax*4] ; crash here
    mov     eax,ecx
    shr     eax,3
    test    cl,0Fh
    jne     @1
    bt      edx,eax
    jae     ...
    ret
@1: btr     eax,0
    bt      edx,eax
    jae     ...
    or      eax,1
    bt      edx,eax
    jae     ...
    ret

This time, it’s the value in ecx that gets moved into eax, and then eax gets shifted. The address being validated is therefore in ecx. Again, the marked instruction is the only one that accesses memory at an address derived from the input.

One more: This time, it’s 32-bit ARM, which Windows calls simply arm.

ntdll!LdrpValidateUserCallTarget:
    mov         r3,#0x.... 
    movt        r3,#0x.... 
    ldr         r3,[r3]    

    lsrs        r2,r0,#6    ; shift
    ubfx        r1,r0,#3,#3
    ldrb        r2,[r3,r2]  ; crash here

    mov         r3,r0
    and         r0,r0,#0xF
    subs        r0,r0,#1
    bne         ...

There are two memory accesses this time. The first is a load from a fixed address (built into r3 by two instructions), so it matches the first instruction of the x86-32 and x86-64 versions; it’s just that x86 can load from a fixed address in a single instruction.

The second group of instructions is the interesting one. It shifts the value in r0 and puts the result in r2. It also uses r0 as the source for a bit extraction operation that puts the result in r1, and then it accesses some memory. So it looks like r0 is the address, since it’s the source of the shift instruction.

Mind you, this code modifies r0 later on, so the value in r0 doesn’t hold the address through the entire function. It got copied into r3 for safekeeping, so if you break in later in the function, you’ll want to look to r3 for the address. But if you crash on the memory access, the address is in r0.

Our last example is AArch64, which Windows usually calls arm64.

ntdll!LdrpValidateUserCallTarget:
    adrp        xip0,ntdll!....   
    ldr         xip0,[xip0,#0x598]

    lsr         xip1,x15,#6      ; shift
    tst         x15,#0xF        
    ldrb        wip1,[xip0,xip1] ; crash here
    ubfx        xip0,x15,#3,#3
    bne         @2

    lsr         xip1,xip1,xip0
    tbz         wip1,#0,@3
@1: ret

@2: and         xip0,xip0,#-2
    lsr         xip1,xip1,xip0
    tbz         wip1,#0,@4
@3: tbnz        wip1,#1,@1
@4: mov         xip0,#0
    b           @5
@5: b           ntdll!LdrpHandleInvalidUserCallTarget

Again, we start by loading an address from memory, and then we shift a register, this time the x15 register. There is a bit test instruction whose result is used later, and then we perform a memory access (which could crash). From inspection, we therefore see that the address being validated is in x15.

The point of this exercise is not to memorize the registers that each architecture uses for control flow guard,² but rather to take a little information about the design of control flow guard (checking a bit in a bitmap, using the address passed in a register to calculate the index)³ and use that to figure out on the fly which register you need to look at, based on the code surrounding the crashing access.

¹ Usually, these crashes occur because the address that got passed in is so invalid that there is no memory at the location where the bit in the validation bitmap is supposed to be, resulting in an access violation.

² I sure don’t have them memorized. Each time it happens, I just re-derive it from the instructions around the crash.

³ You don’t even have to know the precise meaning of the bits in the bitmap. All you have to remember is that the address is used to determine the bit to check.

The post What to do when you have a crash in the runtime control flow guard check appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

A long time ago, somebody asked, “How did the new Windows 95 user interface get brought to the Windows NT code base? The development of Windows 95 lined up with the endgame of Windows NT 3.1, so how did the finished Windows 95 code get brought to the Windows NT code base for Windows NT 4.0? Did they reimplement it based on the existing specification? Was the code simply merged into the Windows NT code base?”

Members of the Windows 95 user interface team met regularly with members of the Windows NT user interface team to keep them aware of what was going on and even get their input on some ideas that the Windows 95 team were considering. The Windows NT user interface team were focused on shipping Windows NT, but they appreciated being kept in the loop.

During the late phases of the development of Windows 95, the Windows NT side of the house took a more active role in bringing the Windows 95 user interface to Windows NT. They started implementing the new window manager features that Windows 95 introduced, both in terms of new functions such as RegisterClassEx and SetScrollInfo, as well as new behaviors such as having a Close button in the upper right corner. The window managers of Windows NT and Windows 95 both had ancestry in the Windows 3.1 window manager, so a lot of the designs were the same, but the code had long since diverged significantly, so it wasn’t so much merging the code as it was using the Windows 95 code as a reference implementation when reimplementing the features on Windows NT. (For example, the algorithm for walking the dialog tree in the face of WS_EX_CONTROLPARENT remained the same, but the expression of the algorithm was different.)

The code for Explorer and the other user-mode components had an easier time. It was taken as-is into the Windows NT code base, warts and all, and the copy was then updated in place to be more Windows NT-like. For example, the Windows 95 shell used CHAR as its base character type, but Windows NT was Unicode-based, so the Windows NT folks had to replace CHAR with WCHAR, and they had to create Unicode variants of the existing shell interfaces, which is why we have IShellLinkA and IShellLinkW, for example. This in turn exposed other problems, such as code that used sizeof(stringBuffer) to calculate the size of a CHAR string buffer, which now had to be changed to sizeof(stringBuffer) / sizeof(stringBuffer[0]) so that it returned the number of elements in the buffer rather than the number of bytes.

Whereas the window manager changes were one-way ports (from Windows 95 into Windows NT), the Explorer and other user-mode changes were bidirectional. The Windows NT team merged their changes back into the Windows 95 code base, so that when they took the next drop of the Windows 95 code base, they wouldn’t have to go back and fix the things that they had already fixed.

Since the code went back to Windows 95, all of the Windows NT-side changes had to be arranged so that they would have no effect on the existing Windows 95 code. The Windows NT team didn’t want to introduce any bugs into the Windows 95 code as part of their port. Protecting the changes was done in a variety of ways.

One was to enclose all the new code inside #ifdef WINNT directives so that it wasn’t compiled for Windows 95, or inside #ifdef/#else/#endif blocks if the Windows NT version had to diverge from the Windows 95 version. Another was to use macros and typedefs like TCHAR and LPCTSTR so that the same code could compile both for Windows 95 and for Windows NT, but with different base character types.

In the case of sizeof, a change from sizeof(stringBuffer) to sizeof(stringBuffer) / sizeof(stringBuffer[0]) has no effect on Windows 95 because sizeof(stringBuffer[0]) is 1, and dividing by 1 changes nothing, so that change could be left without protection. But the Windows NT team had another problem: How would they know whether a particular sizeof was one that had already been vetted for Windows NT compatibility, as opposed to one that came from fresh code changes to Windows 95 that still needed to be inspected? Their solution was to define a synonym macro

#define SIZEOF sizeof

When the Windows NT team verified that a sizeof was correct, or when they fixed it to be correct, they changed the sizeof to SIZEOF. That way, they could search the code for lowercase sizeof to see what still had yet to be inspected. (The Windows 95 team were told to continue using sizeof in new code, so as not to mess up this convention.)
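As an illustration of the convention (a hypothetical snippet, not actual shell code):

// C: hypothetical illustration of the SIZEOF convention, not actual shell code.
#include <windows.h>

#define SIZEOF sizeof // a vetted sizeof, per the convention described above

void UpdateCaption(HWND hwnd)
{
    TCHAR buffer[64];
    // On Windows 95 (TCHAR = CHAR), dividing by SIZEOF(buffer[0]) divides by 1
    // and changes nothing; on Windows NT (TCHAR = WCHAR), it converts the byte
    // count into the element count that GetWindowText expects.
    GetWindowText(hwnd, buffer, SIZEOF(buffer) / SIZEOF(buffer[0]));
}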

Now, all this happened decades ago when Microsoft used an internal source code system known as SLM, pronounced slime. Formally, it stood for Source Library Manager, but nobody knew that. We just called it SLM. SLM did not support branches, so moving changes from Windows 95 to Windows NT involved manually doing three-way merges for all of the files that changed since the last drop. I suspect that this manual process was largely automated, but it was not as simple as a git merge.

Bonus chatter: The team responsible for porting the Windows 95 shell to Windows NT included Dave Plummer, host of the YouTube channel Dave’s Garage. They had custom business cards made which they used as calling cards. On one side were the email aliases of the team members, arranged in an attractive pattern. On the other side was the simple message: “You have been visited.”

The post How did the Windows 95 user interface code get brought to the Windows NT code base? appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

A denial of service vulnerability report was filed against a program, let’s call it Notepad. The actual text of the report was very hard to understand because the grammar was all messed up. I’ll give the finder the benefit of the doubt on the assumption that they are not a native English speaker. Here’s a cleaned-up version:

If you open multiple documents, one very large document and several small documents, and then try to exit all of them at once, the program will take a very long time saving the large document, resulting in a denial of service against the small documents.

I’m not sure what the point is here. The program does eventually finish saving the large document, so everything works out in the end. Are they suggesting that the program should save the smallest documents first? But then wouldn’t that be a denial of service against the large document if you had lots of small documents?

But wait, let’s ask the standard questions.

Who is the attacker?

I guess the attacker is the person who opened the very large document.

Who is the victim?

The victim is the person who is unable to save their small documents because the large document is hogging the program.

What has the attacker gained?

The attacker has annoyed the victim temporarily.

But wait, the attacker and the victim are the same person!

It’s not a security vulnerability that you have the power to annoy yourself. Other ways include “Putting itching powder in your pants” and “Throwing your glasses in the trash.”

Furthermore, there is no impact on other users, or even on other apps run by this user. The only person you’re denying service to is yourself.

If you’re concerned about the order in which files are saved on close, you could explicitly close them in the desired order, like, I dunno, most important files first? Removable drives first?

And really, it’s not clear what the finder was expecting here. You loaded a large file, and now you’re saving it. Why is it surprising that this takes a long time?

This was resolved as “Not a vulnerability” with the subcategory “By design.” But sometimes I wish there were a subcategory “So what did you expect?”

The post Dubious security vulnerability: Denial of service by loading a very large file appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Some time ago, I mentioned that the PropertyValue.CreateInspectable method doesn’t create a PropertyValue but instead just returns its parameter unchanged. I concluded with the remark, “Why does PropertyValue.CreateInspectable even exist? I’m not sure. Perhaps it was added for completeness.”

Since that article was written, I did some archaeology, and I think I found the answer.

Originally, the PropertyValue did wrap inspectables, “inspectable” being the Windows Runtime name for what most languages call an “object”. The idea was that the PropertySet would be an IMap<String, PropertyValue>, meaning that it was an associative array mapping strings to PropertyValue objects. These PropertyValue objects could hold value types like integers, or they could hold reference types in the form of an IInspectable, or they could hold arrays of value or reference types, or they could hold nothing at all (“empty”).

In that original design, the PropertyValue.CreateInspectable method did return a PropertyValue whose Type was PropertyType.Inspectable. Similarly, the PropertyValue.CreateEmpty method returned a PropertyValue whose Type was PropertyType.Empty.

Value                       PropertyType (pv.Type)   Creation               Retrieval
Nothing                     Empty                     CreateEmpty()          N/A
Scalar type (e.g. Int32)    Int32                     CreateInt32(v)         pv.GetInt32()
Reference type              Inspectable               CreateInspectable(v)   pv.GetInspectable()
Array type (e.g. Int32[])   Int32Array                CreateInt32Array(v)    pv.GetInt32Array()

You had to wrap inspectables and empties because the PropertySet was a mapping from strings to PropertyValue. Everything had to be expressed as a PropertyValue.¹ The PropertyValue was the fundamental variant type of the Windows Runtime.

At some point, the team decided to change the design and let PropertySet be a mapping from string to inspectable (object). If the associated value is an object, then you get the corresponding object. If the associated value is empty, then you get null. And if the associated value is a value type, then the value type is wrapped inside a PropertyValue, and the PropertyValue wrapper acts as the object.

I don’t know why the team changed their mind, but I suspect one of the points in favor of the new design is that the new design matches how most languages already work: C#, JavaScript, Python, Ruby, Scheme, Java, Objective-C all follow the principle that any type can be expressed as an object. (The act of converting value types to objects is known as “boxing”.)
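In C#, for example, the conversion happens implicitly whenever a value type is stored in a variable of type object:

// C#
int i = 42;
object boxed = i;    // boxing: the int is copied into a heap-allocated object
int j = (int)boxed;  // unboxing: the value is copied back out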

In other words, the design change promoted IInspectable to be the fundamental variant type, and the PropertyValue was demoted to being the fundamental boxed type.

A very large change was made to the Windows code base to update the design and to update all the code that had been using the old design. To make the conversion easier, the general shape of the new design matched the old design where it made sense. This meant that PropertyValue.CreateInspectable and PropertyValue.CreateEmpty still existed as functions, but they didn’t do anything interesting. They remained for backward compatibility. Also remaining for backward compatibility are the enumeration values Empty and Inspectable. They remain defined, but nobody uses them.

That takes us to what we have today:

Value                       PropertyType (pv.Type)   Creation               Retrieval
Nothing                     N/A                       null                   null
Scalar type (e.g. Int32)    Int32                     CreateInt32(v)         pv.GetInt32()
Reference type              N/A                       v                      pv
Array type (e.g. Int32[])   Int32Array                CreateInt32Array(v)    pv.GetInt32Array()

For backward compatibility, CreateEmpty returns null and CreateInspectable returns its (non-null) parameter unchanged.

In addition to aligning more closely with a large number of popular languages, the new design simplifies the code required to store and retrieve a possibly-null reference in a PropertySet.

// C#

// Old design: CreateInspectable requires a non-null reference
ps.Insert("key", v == null ? PropertyValue.CreateEmpty() : PropertyValue.CreateInspectable(v));

pv = ps.Lookup("key");
if (pv.Type == PropertyType.Empty) {
    v = null;
} else {
    v = pv.GetInspectable();
}

// New design: If it's null, then store null
ps.Insert("key", v);

v = ps.Lookup("key");

If you want a function that returns the PropertyType for an arbitrary inspectable, you can do this:

// C#
PropertyType WhatIsThisThing(object thing)
{
    if (thing is IPropertyValue pv) {
        return pv.Type;
    } else if (thing is null) {
        return PropertyType.Empty;
    } else {
        return PropertyType.Inspectable;
    }
}

// C# 9.0
PropertyType WhatIsThisThing(object thing)
{
    return thing switch {
        null => PropertyType.Empty,
        IPropertyValue pv => pv.Type,
        _ => PropertyType.Inspectable,
    };
}

// C++/WinRT
PropertyType WhatIsThisThing(IInspectable const& thing)
{
    if (thing == nullptr) {
        return PropertyType::Empty;
    } else if (auto pv = thing.try_as<IPropertyValue>()) {
        return pv.Type();
    } else {
        return PropertyType::Inspectable;
    }
}

¹ Since PropertyValue is a reference type, they could have decided to use a null pointer to represent the empty state. I assume they explicitly wrapped the empty state for uniformity, rather than forcing people to check for null before querying the Type. Compare the JsonValue, which has an explicit object for representing the JSON null rather than using a null pointer.

The post The early history of the Windows Runtime PropertyValue and why there is a PropertyType.Inspectable that is never used appeared first on The Old New Thing.

[syndicated profile] acoup_feed

Posted by Bret Devereaux

Something different this week! The folks at Paradox Development Studios were nice enough to give me a review copy of the upcoming Europa Universalis V (releasing Nov. 4) ahead of release so that I could share some thoughts! For the unfamiliar, Europa Universalis is a series of strategy games covering the early modern period (traditionally 1444 to 1836, but now 1337 to 1836). While the series title implies a European focus, these games have become increasingly global over time and really represent an effort at global historical strategy. So this is a game where you play as a state (or state-like entity) from the end of the Middle Ages through the Age of Exploration through the rest of the early modern period to the period of the American and French revolutions, though obviously as your game develops, those revolutions may or may not happen as they did historically.

First, I should note that if you have not read it, I did an extended series on this game’s predecessor (I, II, III, IV) as part of my Teaching Paradox series – indeed, as the very first of the Teaching Paradox posts. You don’t need to read that series to understand this post, but I will be referring back to it.

Now while I have had EU5 for a couple of weeks at this point, due to teaching and research demands and such, I have only been able to give it a limited amount of time – about 30 hours – so I’m going to call this post a ‘first impression’ rather than a ‘review’ or ‘analysis.’ It should, of course, tell you something about this game that “about 30 hours” is a “limited amount of time.” In particular, I play these games relatively slowly (with a lot of automation turned off so I can make granular decisions), so those hours have only gotten me in one run to about 1450 (more on that below) which is hardly all of the game – hell, it is barely past the start date of EUIV. So what I want to do here is first present very briefly my answer to “is it good?” (yes) because if I don’t, I will be asked about it endlessly, and then get into the real meat of the question which is how the historical assumptions of Europa Universalis V differ from those of its predecessors.

And finally, in the interest of full disclosure, I did receive a review copy of the game and obviously I have interacted with the folks at Paradox before and will continue to try to bully them into making Imperator II.

This is where I got by 1450 starting as the County of Holland in 1337 (Luxembourg, Cologne, Julich and Berg are all vassals). You can’t see it, but somehow my now-Duke-of-the-Netherlands got elected Holy Roman Emperor, so I am actually the number 2 great power in Europe behind France, though that ranking is really deceptive, as my country doesn’t really have the military power to stand up to England or France alone.

Is it Good?

Yes.

Now, I should do some table-setting here: I am the kind of player who, as you will recall, really enjoyed Victoria III on launch. That title was divisive, in part because of a heavy emphasis on relatively indirect systems (building factories changes your demographics which changes your interest groups which changes your politics, that sort of thing) and because it de-emphasized the ‘war-game’ aspect these games often have. So please understand that in the battle between ‘more spreadsheets’ and ‘less spreadsheet’ when it comes to Paradox games, I am firmly in the ‘more spreadsheets’ camp. With that said then, let me qualify my previous ‘yes’ with:

Europa Universalis V is likely to be divisive: it is much more systems driven, much less ‘gamey’ in its design, much more granular (and thus slower) and substantially more complex. I also imagine that, once the game hits the general public, players will find ways to break those more complex systemic interactions in amusing ways – I imagine there will be a lot of balance patches and tweaks. But I think it needs to be stressed that this is not EU4.5. One of the things I really like about Paradox is that they do not tread water in design from one iteration to the next: HoI4 takes some big risks compared to HoI3, VickyIII is quite a jump from VickyII and so on (Crusader Kings has experienced perhaps the least of this from II to III, but the gap between CKII (2012) and CKIII (2020) is still a lot bigger, design-wise, in my view, than, say, the gap between Total War: Rome II (2013) and Total War: Pharaoh: Dynasties: Colon (:2024)).

The word I am going to keep using for this “lots of complex interacting systems summarized by charts of charts with charts and charts” is ‘crunchy.’ It isn’t soft and smooth, it fights you a little bit, but there’s a lot of texture and complexity there.

I wanted a screenshot to give an impression of just how complex they were willing to make this game and I couldn’t do better than “here, look at how they model the Holy Roman Empire.” That is, I kid you not, 7 electors, 26 Free Imperial Cities, 52 Prince-(Arch)bishops, 195 Imperial Princes and 1 Republic, for a total of two hundred and eighty-one states in the Holy Roman Empire.

The design is relentlessly crunchier: monarch points (“mana”) are gone, but this isn’t a full switch back to EUIII‘s ‘gold does everything’ system. Instead, monarch points have been replaced by a bunch of interacting systems that blend Imperator‘s and VickyIII‘s approaches to pops and buildings. Estates, an add-on mechanic in EU4, are now central: buildings and pops contribute to income but also to estate power, which shapes politics and cultural values. Culture-Value Sliders are back (something I missed from EUIII), but they now move gradually rather than in clear increments and are shaped by policies and privileges. In short, abstract game mechanics have been mostly replaced by interwoven systems which the player influences only slowly and often indirectly: if you build a lot of burgher buildings (like, say, fabric guilds in your towns), you will slowly empower your burghers, and as they become more powerful, keeping them happy will demand shifting your government in ways they want (but their increased power will enable you to do that) – a process that might take, you know, a century or two, if not longer.

The game is also unapologetic about its complexity and frankly suffers a little from a UI that is clearly straining to do what it needs to. Basically all of the major country screens (diplomacy, economy, that sort of thing) have sub-screens (usually 2-3), some of which open sub-sub-screens. Every ‘location’ – the sub-province territorial unit – has its own buildings and economy and pops, which all have their own screens. I never found it too hard to find what I was looking for, but if you are the kind of player who cordially hates that sort of thing and likes how minimal it is in CKII and III, well, it is maximal here. I imagine that we’ll see, over the course of this game’s post-release development, some streamlining of the UI and a few more tooltips (some of the economy tooltips take a minute before you understand what all of the icons mean), but unless they streamline the systems – and please do not streamline away the systems – this is always going to be a crunchy, complex game.

The developers are clearly aware of that because they’ve added the ability to automate most of these systems. If you don’t want to individually set trade priorities in every trading center, you can automate them. If you don’t want to individually plan out your economic buildings, you can automate that too. If you don’t want to set estate-by-estate tax rates to crush the wealth of your aristocracy so that they can’t stop you when you tear away their feudal privileges, you can set a ‘keep them happy’ automation on that as well. The clear intent here is to let players, especially of large territorial empires, focus their attention as they like.

I can’t say how well these automation systems worked because I did my run as the County of Holland, which can’t really afford to automate its economy, because economy is all it has. I fiddled a bit with trade automation and found it worked well enough prioritizing profit in my market, although I did end up reserving a chunk of trade capacity (you can do that!) so I could make sure specific industries got the raw materials they needed (the trade AI seems, reasonably enough, to focus on the profit of the trade but can’t consider the downstream economic effects of, say, a purchase of wool that makes a small profit, but the wool then goes through two production buildings to come out as dyed fabric where the big profit is in production. But again, easy enough, as the player, to notice the warning up at the top of the screen telling you your market is short on wool and to just reserve some trade capacity to buy wool). To judge by the performance of AI countries, the automation seems to work OK, but I imagine players will find exciting ways to break it shortly.

So that’s the ‘review’ part: this is a much more systems driven approach, which blends some of the best ideas from Victoria III, Imperator, and EUIII and EUIV and I think it works really well. I suspect it will turn off players who prefer less systems-driven, less-crunchy experiences, but the automation may soften that blow. There are some rough edges, some systems that clearly need balancing and refinement and such, as you’d expect for a game this complex at launch. But it works pretty well, I had no stability issues, it ran well for me (shocking, given its complexity) and for the folks who want a crunchier, systems-driven approach, this will be your jam. And France is a monster which must be destroyed.

I suppose if we must do number-scores, we ought to do them in true ACOUP fashion, so I give EUV a rating of Part Vc out of a ‘three part’ series.

On to the history part.

The Historical Viewpoint of Europa Universalis V

Perhaps the defining conclusion in my series on EU4 was that it was, fundamentally, a game about states. The state was the primary actor and frankly non-state actors – people, estates, non-state peoples, companies and so on – didn’t figure in very much. This was a reasonable frame for a game about the early modern period to take, but it was a frame and like any frame, some things must be left out.

Europa Universalis V is a game about nations – that’s the closest ‘single word’ I can get to in English. It’s better to say Europa Universalis V is a game about the place where people and polity meet. And indeed, the game actually specifies this: one of my repeated complaints with Paradox games not named Crusader Kings is that there’s often a lack of clarity as to exactly what the player is playing as: the ruler? the state? the people?

EUV specifies, at the beginning of the tutorial: you play as the “spirit of the nation.” Again, nation is an awkward word, but there’s none better. I admit, when I saw that, I laughed out loud because it was such a direct response – intended or not – to one of my critiques (particularly of Imperator, which shares its director, Johan Andersson with EUV). But the game sincerely means it: you are not the state, but the point at which the state and its people meet.

This is a game about the conjunction of people and polity, regardless of whether those people make up a ‘nation’ or even a ‘state.’ It thus embraces more kinds of polities than EUIII or IV did: non-territorial companies, nomadic polities and so on. But it also embraces more about polities than they did: this is not just a game about states but also a game about people. ‘History from above’ is not gone – the state (or polity) is a major mover and shaper of culture and events, and Big Men can do Big Things in this game – but EUV introduces ‘history from below’ in dramatic fashion.

I think this is clearest in how it handles estates. First, a bit of background from my experience: I was playing as the County of Holland, with the aim of forming the Netherlands (to include all of the Low Countries) and eventually becoming a Republic. But Holland doesn’t start out as a plutocratic republic because this game starts in 1337: it starts out as a ‘feudal’ aristocratic imperial prince like dozens of others, albeit with a bit more ‘town’ orientation. But to hit the ‘become a republic’ button, you need to crank the ‘plutocracy’ value really high, and to do that, you need a bunch of privileges and reforms that prioritize your burghers (the rich townsfolk) over your aristocracy, and to do that you need to break the power of the aristocracy.

And doing that is a mix of bottom-up and top-down systems and what is really neat and new is that the aristocrats will try to fight you. The economy of your country is shaped by buildings. You build most of the buildings, but your estates build them too and not all of the buildings they build are helpful: most of the estates (aristocracy, clergy, burghers, everyone else) have a building which does basically nothing but increase their political power. Meanwhile, their chosen economic buildings add to your tax revenue, but also add to the estate’s revenue, and thus their power – power that in turn makes it more and more costly to tear away their privileges or preferred policies. And those privileges and policies in turn lock in the ‘values’ sliders (which now change gradually rather than in increments) so as long as you have a bunch of old-style late-medieval privileges and policies, you are never cranking the sliders from ‘decentralized’ to ‘centralized’ or from ‘aristocratic’ to ‘plutocratic.’

So you have to tear away those privileges, which costs a scaling amount of stability based on how powerful the estate that has them is. And of course ripping away the privilege of a powerful estate is going to both infuriate them (they each have an independent approval meter) and tank your stability (which lowers the approval of every estate) which is very likely to trigger some exciting unrest. What I like is that those privileges are not just “the aristocracy gets more powerful and the state gets nothing” – they almost invariably represent a tradeoff. The aristocracy gets more powerful but their levies are larger (because they’re expected to do vassal service) or the clergy gets more powerful but you get a bump to literacy (because they’re running your schools) and so on.

It creates a system where you can see why a country might become ‘trapped’ in a stable but stagnant situation, because change means losing the advantages of stability and the upsides of those tradeoffs immediately, but realizing the benefits of reform only gradually.

This is further emphasized with the control mechanic. Every ‘location’ has a control rating, which scales the manpower, tax revenue and other goodies you can get from it, representing how deeply that location is actually penetrated by your government. Control scales with travel distance from the capital (by sea or land), which in part incentivizes vassal states (better a vassal with good control paying tribute than you with zero control). But it also creates this quite granular feel of how state control radiates from the capital and from cities and towns, so while states have formal, rigid borders, you can absolutely have hinterlands in which state power barely exists (replacing the very binary core/not-core distinction from EUIV).

Beneath those systems, people matter a lot more and you engage with them more directly. Buildings have to be staffed to work, which means they need pops of the correct social status to run them. If there’s demand, pops can promote between social statuses (somewhat slowly) to fill workplaces, so a modernizing country is going to be draining out its peasantry to fill growing towns, which in turn is going to change social, cultural and political balances. Warfare is also more closely tied to pops: levies are now drawn from your pops (professional soldiers come from the manpower pool, now generated by buildings which must be staffed) based on their type: burghers show up as heavy infantry, aristocrats as cavalry, peasants as peasant levies.

The new early start date (1337) gives the game a chance to just hammer you out of the door with the importance of population in its new systems because just about as you’ve got your economy humming the way you want, the Black Death comes along and kills a ton of your pops, causing absolute havoc in your economy, a potent, forceful reminder that your economy is run by people and those people fundamentally matter.

Pops in turn have needs (they want certain goods) and get angry if they don’t have those goods, and that frustration links back into the estate system and influences control, which in turn impacts how much you can gain in resources from a given place. Pop satisfaction, based on estate opinion and on whether needs are being met, feeds not just into unrest but also into control, so it is important not just to keep your pops alive, but increasingly as the game goes on to keep them happy.

Pops matter so much that one of the ‘country types’ is a society of pops: a polity that is a people without a defined territory, used for a lot of non-state peoples. I was initially really concerned when I loaded up the map and saw that much of the Americas was ’empty,’ but then realized, looking closer: no, it is nearly entirely filled by societies of pops, with whom you can still do diplomacy, declare war and so on. They’re just non-state polities that don’t have hard defined territories (but do exist in places – there’s a mapmode to see where they are). There are also ‘countries’ which are businesses with a collection of buildings. I don’t think either of these are playable (yet?) – I couldn’t seem to select them to play as – but even just interacting with them creates a really interesting texture to the game and a reminder that there are historical actors here that are not states.

So in the 1430s, I ended up getting elected as Holy Roman Emperor (a few key countries had women rulers who were ineligible, which left my Duke on top) and I managed to get an imperial tithe through the diet, which gave me an enormous influx of cash.
Which I am definitely, in this screenshot, using to better the safety of the Empire, by which I mean build out my economy.

Conclusions

One of the things I appreciate about Paradox’s development style is that they take risks: while there is certainly a Paradox ‘house style,’ the numbered sequels in their core lineup always take big swings at trying something new. Sometimes it works great (Crusader Kings II and III), sometimes it needs a bit of refinement to find itself (Victoria III) and sometimes it doesn’t ever quite come together (Imperator, Sengoku). But each time they are pushing, trying something new and embracing new viewpoints on what history can be in the process.

And I think Europa Universalis V is a substantial step in that direction. It is still EU – a bit more state-centered than CK or Vicky – but the historical vision here encompasses a lot more. It’s also clear that EUV was built from the ground up to really be a global grand strategy game. It’s not perfect at that (the institution system returns, with a better design but maybe not a perfect one), but now that not only non-European states, but non-European non-states are actors in the game with their own agendas and interactions, it’s made pretty great strides.

I’m excited to see where this one goes. Now, to be fair, I am a guy who really likes Victoria III, which is also very ‘crunchy,’ so Europa Universalis V is pitched right at my kind of player. I do think that Paradox is going to have to think, in terms of future development, what their ‘introductory’ game is, because I’d imagine someone coming in with no experience of a Paradox game is going to find Europa Universalis V pretty challenging to pick up (but rewarding).

But I’m also excited to play more and see how these systems’ interactions change over the various ages (the game breaks its tech tree into chronological ‘ages’). There are a bunch of key variables – crown power, control, market prices, and so on – which I can see getting significantly tweaked either by technologies or by modifiers linked to specific periods, and which I can imagine really shaking things up pretty dramatically.

But beyond just the ‘more systems’ approach, I’m really impressed by the effort, in a game that remains on some level fundamentally about states, to bring in a broader vision of history, which encompasses the historical agency of far more kinds of people and their competing interests and visions. I think I’ll have a lot more to say about this game as time goes on, both as I have more time with it and as it gets tweaks and more content.

Now, if you’ll excuse me, I still need to figure out how I am going to cut France down to size.

The Nocebo Effect Hangs Around

Oct. 31st, 2025 01:53 pm
[syndicated profile] in_the_pipeline_feed

The “nocebo” effect is something that makes a lot of sense when you think about it, but it still seems weird. Everyone has heard of the placebo effect, where some interventions tend to have a beneficial effect if you think that they’re having (or going to have) a beneficial effect. There is no doubt that this is real, although its magnitude varies a great deal depending on circumstances, as it well should. The nocebo effect is just that with the sign flipped: there are things that have a real negative effect on people when they believe that a negative effect is there.

Physical pain has been a great proving ground for both of these effects, and there’s a great deal of literature on this. That’s well summed up in this new paper, which has some disconcerting new results to add. As the authors note, the nocebo literature is much smaller than the placebo one, in no small part due to the latter effect having been recognized much earlier. One of the open questions has been whether people who have been shown to be more responsive to placebo effects are more so to nocebo ones as well (or vice versa).

This work looked at pain (from experimentally applied heat) in over a hundred healthy volunteers, examining both the perception of pain reduction (placebo) and of pain aggravation (nocebo) at temperatures offset from the control temperature by the same amount. And they do a pretty exhaustive comparison between all three conditions. One thing that came out first was that the nocebo effect seemed to be stronger than the placebo one in the initial round of experiments (Day 1). The same subjects came back for an identical round of testing a week later, and the nocebo effect was still stronger than placebo. The strengths of the two had been significantly correlated on Day 1, but not on Day 8, interestingly. On both days, looking at “expectancy” in the volunteers showed that the expected pain relief was noticeably stronger than the expected pain worsening. But neither set of expectations was linked to the real experiences on Day 1.

It’s possible that the “stronger nocebo effect” setting in human psychology is driven by evolutionary adaptation (avoidance of threats and negative consequences, which is a feature of human behavior that’s been demonstrated in many model systems). One thing this paper should serve as a warning about, though, is the persistence of these nocebo effects: experimenters should not assume that they’re going to fade away over time! This especially applies to experimenter-subject relationships, where positive/negative framing, the overall relationship (trust, perceptions of competence), and the amount of time spent dwelling on possible side effects or other negative consequences can all come into play. I hope that these findings are extended to the pharmaceutical side of things!

[syndicated profile] frontendmasters_feed

Posted by Ana Tudor

Recently, I saw someone ask on Reddit what others are using these days for full-bleed and breakout elements. This refers to having a main content area of limited width (usually centered), but having the ability for some elements to be wider, either all the way to the browser edges or somewhere in between.

desired layout at various viewports — notice the image is a full-bleed element, the warning is a breakout element and the header is a  breakout element with a full-bleed background

Is it still the old method that involves stretching elements to 100vw and then moving them in the negative direction of the x axis via an offset, margin, or translation?

Or is it the newer method that involves a grid with a limited width main column in the middle and symmetrical columns on the sides, with elements spanning an odd number of columns, the number depending on whether we want them to have the normal width of the main column, to be a bit wider and break out of it, or to be full-bleed?

There is no perfectly right answer. It depends on the use case and how you look at it. We’re going to look at modified and combined versions of these methods and use modern CSS to achieve what we need in each situation.

The old method described in the 2016 CSS-Tricks article has the disadvantage of relying on a Firefox bug (that has been fixed since 2017) to work well in all situations. The problem is that 100vw doesn’t take into account any vertical scrollbars we might have (and no, the new viewport units don’t solve that problem either). This leads to the 100vw width elements being wider than the available horizontal space if there is a vertical scrollbar, overflowing and causing a horizontal scrollbar, something I also often see with the bizarre practice of setting the width of the body to 100vw. Now, considering the elements we normally want to be full-bleed are likely images, we can hide the problem with overflow-x: hidden on the html. But it still doesn’t feel quite right.

Maybe it’s because I’m a tech, not a designer who thinks in terms of design grids, but I prefer to keep my grids minimal, and when I look at the desired result, my first thought is: that’s a single column grid with some items that are wider than the column, and everything is center-aligned.

So let’s take a look at the approach I most commonly use (or at least start from), which doesn’t involve a scary-looking grid column setup, and, for the simple base cases, doesn’t involve any containers or even any calc(), which some people find confusing.

The Base Grid

We’re starting off with a grid, of course! We set a one limited width column grid on the body and we center-align this grid horizontally within the content-box of the body:

body {
  display: grid;
  grid-template-columns: min(100%, 60em);
  justify-content: center
}

By default, display: grid creates a one column grid that stretches horizontally across the entire content-box width of the element it’s set on. All the children of the element getting display: grid are distributed in that one column, one on each row: the first on the first row, the second on the second row and so on.

The grid-template-columns property is used here to max out the width of this one column at 60em by setting its width to be the minimum between 100% of the content-box width and 60em. If the content-box of the element we’ve set the grid on has a width of up to 60em, then the one column of the grid stretches horizontally across the entire content-box. If the content-box of the element we’ve set the grid on has a width above 60em, then our one grid column doesn’t stretch horizontally across the entire content-box anymore, but instead stays 60em wide, the maximum width it can take. Of course, this maximum width can be any other value we want.

The justify-content property is used to align the grid horizontally within the content-box of the element it’s set on. In this case, our one grid column is center aligned.

Note that I keep talking about the content-box here. This is because, even at really narrow viewports, we normally want a bit of space in between the text edge and the lateral edge of the available area (the viewport minus any scrollbars we might have). Initially, this space is the default margin of 8px on the body, though I also often do something similar to the approach Chris wrote about recently and zero the default margin to replace it with a clamped font-relative padding. But whichever of them is used still gets subtracted from the available space (viewport width minus any vertical scrollbar we might have) to give us the content-box width of the body.
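That margin-to-padding swap might look something like this (the exact values here are illustrative, not prescriptive):

/* replace the default body margin with a clamped, font-relative lateral padding */
body {
  margin: 0;
  padding-inline: clamp(.5em, 4vw, 4em)
}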

Now whatever children the body may have (headings, paragraphs, images and so on), they’re all in the limited width grid cells of our one column, something that’s highlighted by the DevTools grid overlay in the screenshot below.

the one limited width column grid layout with the DevTools grid lines overlay (live demo)

Full-Bleed Elements

Let’s say we want to make an element full-bleed (edge to edge). For example, an image or an image gallery, because that’s what makes the most sense to have stretching all across the entire available page width. This means we want the full viewport width minus any scrollbars we might have.

Nowadays we can get that by making the html a container so that its descendants know its available width (not including scrollbars) as 100cqw (container query width).

html { container-type: inline-size }

Having this, we can create our full-bleed elements:

.full-bleed-elem {
  justify-self: center;
  width: 100cqw
}

Setting width: 100cqw on our full-bleed elements means they get the full available content-box width of the nearest container, which is the html in this case.

The justify-self aligns the element horizontally within its grid-area (which is limited to one grid cell in our case here). We need to set it here because the default is start, which means the left edge of the element starts from the left edge of its containing grid-area. The left edge of the containing grid-area is the same as the left edge of our one column grid here.

one column grid with full-bleed elements and a DevTools grid overlay highlighting the grid lines

Just like before, we still have a single column grid, center aligned.

One thing to note here is that this means we cannot have any margin, border or padding on the html element, as any of these would reduce its content-box, whose size is what the container query units are based on. In practice, the margin, border, and padding on the html are all zero by default, and I don’t think I’ve seen them set to anything else anywhere outside of some mind-bending CSS Battle solutions.

Another thing to note is that there may be cases where we need another container somewhere in between. In that case, we can still access the content-box width of the html as detailed in a previous article:

@property --full-w {
  syntax: '<length>';
  initial-value: 0px;
  inherits: true;
}

html { container-type: inline-size }

body { --full-w: 100cqw }

.full-bleed-elem {
  justify-self: center;
  width: var(--full-w);
}

Oftentimes, we probably also want some padding on the full-bleed element if it is, for example, an image gallery, but not if it is a single img element.

For img elements, the actual image always occupies just the content-box. Any padding we set on it is empty space around the content-box, which is not generally desirable in our case. Unless we want to add some kind of decorations around it via the background property (by layering CSS gradients to create some kind of cool pattern, for example), we want the image to stretch all across the available viewport space after accounting for any vertical scrollbar we might have, and not be left with empty space on the lateral sides.

Furthermore, if the img uses a box-sizing of content-box, that empty padding space gets added to the 100cqw width of its content-box, making the padding-box width exceed the available space and causing a horizontal scrollbar on the page.

When setting a padding on full-bleed elements, it’s probably best to exclude img elements:

.full-bleed-elem:not(img) { padding: .5em }

Note that in this case, the full-bleed elements getting the padding also need to have box-sizing set to border-box. This is done so that the padding gets subtracted out of the set width rather than added to it, as would happen in the default content-box case.

.full-bleed-elem:not(img) {
  box-sizing: border-box;
  padding: .5em
}

You can see it in action and play with it in the following live demo:

You might be wondering… is it even necessary to set border-box since setting everything to border-box is a pretty popular reset style?

Personally, I don’t set that in resets anymore because I find that with the the new layout options we have, the number of cases where I still need to explicitly set dimensions in general and widths in particular has declined. Drastically. Most of the time, I just size columns, rows, set the flex property instead and let the grid or flex children get sized by those without explicitly setting any dimensions. And when I don’t have to set dimensions explicitly, the box-sizing becomes irrelevant and even problematic in some situations. So I just don’t bother with including box-sizing: border-box in the reset these days anymore and instead only set it in the cases where it’s needed.

Like here, for the non-img full bleed elements.

Another thing you may be wondering about… how about just setting a negative lateral margin?

We know the viewport width minus any scrollbars as 100cqw and we know the column width as 100%, so the difference between the two, 100cqw - 100%, is the space on the left side of the column plus the space on the right side of the column. This means half the difference, .5*(100cqw - 100%), which we can also write as 50cqw - 50%, is the space on just one side. And then we put a minus in front and get our lateral margin. Like this:

.full-bleed-elem {
  margin: .5rem calc(50% - 50cqw);
}

Or, if we want to avoid overriding the vertical margin:

.full-bleed-elem {
  margin-inline: calc(50% - 50cqw);
}

This seems like a good option. It’s just one margin property instead of a justify-self and a width one. And it also avoids having to set box-sizing to border-box if we want a padding on our full-bleed element. But we should also take into account what exactly we are most likely to make full-bleed.

One case we considered here was that of full-bleed images. The thing with img elements is that, by default, they don't size themselves to fit the grid areas containing them; they just use their own intrinsic size. For full-bleed images, this means they are either going to not fill the entire available viewport space if their intrinsic width is smaller than the viewport, or overflow the viewport if their intrinsic width is bigger than the available viewport space (the viewport width minus any vertical scrollbar we might have). So we need to set their width anyway.

For the other case, that of the scrolling image gallery, the negative margin can be an option.

Breakout Elements

These are wider than our main content, so they break out of our grid column, but are not full-bleed.

So we would give them a width that’s smaller than the content-box width of the html, which we know as 100cqw, but still bigger than the width of our only grid column, which we know as 100%. Assuming we want breakout elements to extend out on each side by 4em, this means:

.break-elem {
  justify-self: center;
  width: min(100cqw, 100% + 2*4em)
}

Again, we might use a negative lateral margin instead. For breakout elements, which are a lot more likely to be text content elements, the negative margin approach makes more sense than for the full-bleed ones. Note that just like the width, the lateral margin also needs to be capped in case the lateral space on the sides of our column drops under 4em.

.break-elem { margin: 0 max(-4em, 50% - 50cqw) }

Note that we use max() because, for negative values like the margin here, the one that's smaller in absolute value (closer to 0) is the bigger one on the full axis going from minus to plus infinity.

But then again, we might want to be consistent and set full-bleed and breakout styles the same way, maybe grouping them together:

.full-bleed-elem, .break-elem {
  justify-self: center;
  width: min(100cqw var(--comp-w, ));
}

/* This is valid! */
.break-elem { --comp-w: , 100% + 2*4em  }

:is(.full-bleed-elem, .break-elem):not(img) {
  box-sizing: border-box;
  padding: .5em;
}

Some people prefer :where() instead of :is() for specificity reasons, as :where() always has 0 specificity, while :is() has the specificity of the most specific selector in its arguments. But that is precisely one of my main reasons for using :is() here.

And yes, both having an empty default for a CSS variable and having its value start with a comma are valid. Replacing --comp-w with its value gives us a width of min(100cqw) (which is the same as 100cqw) for full-bleed elements and one of min(100cqw, 100% + 2*4em) for breakout elements.

Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. On some of these rows, we have full-bleed images that expand all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On others, we have breakout boxes that expand laterally outside their grid cells, but are not wide enough to be full-bleed.
one column grid with full-bleed and breakout elements, as well as a DevTools grid overlay highlighting the grid lines (live demo)

If we want to have different types of breakout elements that extend out more or less, not all exactly by the same fixed value, we make that value a custom property --dx, which we can change based on the type of breakout element:

.break-elem { --comp-w: , 100% + 2*var(--dx, 4em) }

The --dx value could also be negative and, in this case, the element doesn't really break out of the main column; instead, it shrinks and becomes narrower.

.break-elem--mini { --dx: -2em }
.break-elem--maxi { --dx: 8em }
Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. One of these rows has a full-bleed image that expands all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On other rows, we have breakout boxes that are not the same width as their grid cells, but are not wide enough to be full-bleed. Most of these boxes are wider than their containing grid cells, but one is narrower.
one column grid with a full-bleed image and various sizes of breakout elements, as well as a DevTools grid overlay highlighting the grid lines (live demo)

Full-Bleed Backgrounds for Limited Width Elements

Sometimes we may want only the background of the element to be full-bleed, but not the element content. In the simplest case, we can get by with a border-image; if you want to better understand this property, check out this article by Temani Afif, which details a lot of use cases.

.full-bleed-back {
  border-image: var(--img) fill 0/ / 0 50cqw;
}

This works for mono backgrounds (like the one created for the full-bleed header and footer below with a single stop gradient), for most gradients and even for actual images in some cases.

Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. On the very first row, we have a limited width header with a solid full-bleed mono background. On other rows, we have full-bleed elements that expand all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On other rows, we have breakout boxes that are not the same width as their grid cells, but are not wide enough to be full-bleed.
one column grid that has a tightly fit limited width header with a full-bleed mono background; it also has a full-bleed image and a breakout element, as well as a DevTools grid overlay highlighting the grid lines (live demo)

The mono background above is created as follows (all these demos adapt to user theme preferences):

--img: conic-gradient(light-dark(#ededed, #121212) 0 0)

This method is perfect for such mono backgrounds, but if we want gradient or image ones, there are some aspects we need to consider.

The thing about the 0 50cqw outset value is that it tells the browser to extend the area where the border-image is painted by 50cqw outwards from the padding-box boundary on the lateral sides. This means it extends outside the viewport, but since this is just the border-image, not the border reserving space, it doesn't cause overflow (a horizontal scrollbar), so we can keep it simple and use it like this for gradients.

That is, if we can avoid percentage position trouble. While this is not an issue in linear top to bottom gradients, if we want to use percentages in linear left to right gradients or to position radial or conic ones, we need to scale the [0%, 100%] interval to the [50% - 50cqw, 50% + 50cqw] interval along the x axis.

.linear-horizontal {
  --img: 
    linear-gradient(
      90deg, 
      var(--c0) calc(50% - 50cqw), 
      var(--c1) 50%
    );
}

.radial {
  --img: 
    radial-gradient(
      15cqw at calc(50% - 25cqw) 0, 
      var(--c0), 
      var(--c1)
    );
}

.conic {
  --img: 
    conic-gradient(
      at calc(50% + 15cqw), 
      var(--c1) 30%, 
      var(--c0), 
      var(--c1) 70%
    );
}

However, this scaling is not enough for linear gradients at an angle that’s not a multiple of 90°. And it may be overly complicated even for the types of gradients where it works well.

So another option is to compute, from the available horizontal space (100cqw) and the maximum grid column width (--grid-w), how much the border-image needs to expand laterally. This then allows us to use percentages normally inside any kind of gradient, including linear ones at an angle that's not a multiple of 90°.

body {
  --grid-w: 60em;
  display: grid;
  grid-template-columns: min(100%, var(--grid-w));
  justify-content: center;
}

.full-bleed-back {
  border-image: 
    var(--img) fill 0/ / 
    0 calc(50cqw - .5*var(--grid-w));
}
Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. On the very first row, we have a limited width header with a solid full-bleed gradient background. On other rows, we have full-bleed elements that expand all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On other rows, we have breakout boxes that are not the same width as their grid cells, but are not wide enough to be full-bleed.
one column grid that has a tightly fit limited width header with a full-bleed angled gradient background (at an angle that’s not a multiple of 90°); it also has a full-bleed image and a breakout element, as well as a DevTools grid overlay highlighting the grid lines (live demo)

This has a tiny problem, one that other styling decisions we're likely to make (and which we'll discuss in a moment) would prevent from happening. But, assuming we don't make those choices, let's take a look at it and how we can solve it.

full-bleed background issue on narrow viewports

On narrow viewports, our background isn't full-bleed anymore; it stops a tiny distance away from the lateral sides. That tiny distance is at most the size of the lateral margin or padding on the body. As mentioned before, I prefer to zero the default margin and use a font-size-relative padding, but in a lot of cases, it doesn't make any difference whatsoever.

Screenshot collage. Shows the top area of the page with the header in both the dark and light theme cases at a narrow viewport width of 400px. It also highlights the fact that the header's full-bleed background isn't quite full-bleed, but stops a tiny distance away from the lateral sides.
the problem in the narrow viewport case, highlighted for both the dark and the light themes

This happens when the maximum grid column width --grid-w doesn’t fit anymore in the available viewport space (not including the scrollbar) minus the lateral spacing on the sides of our one column grid (set as a margin or padding).

The solution is to use a max() instead of the calc() to ensure that the border-image expands laterally at the very least as much as that lateral spacing --grid-s.

body {
  --grid-w: 60em;
  --grid-s: .5em;
  display: grid;
  grid-template-columns: min(100%, var(--grid-w));
  justify-content: center;
  padding: 0 var(--grid-s);
}

.full-bleed-back {
  border-image: 
    var(--img) fill 0/ / 
    0 max(var(--grid-s), 50cqw - .5*var(--grid-w));
}
fix for full-bleed background issue on narrow viewports (live demo)

For actual images, however, we have an even bigger problem: border-image doesn't offer the cover option we have for backgrounds or images, and we don't really have a reliable way of getting around this. One of the repeat options might work for us in some scenarios, but I find that's rarely the case for the results I want in such situations.

You can see the problem in this demo when resizing the viewport — for an element whose height is unknown as it depends on its content, the border-image option (the second one) means that if we want to avoid the image getting distorted, then its size needs to be its intrinsic size. Always. It never scales, which means it repeats for large viewports and its sides get clipped off for small viewports.

So if we want more control over an image background or multiple background layers, it’s probably better to use an absolutely positioned pseudo-element. This also avoids the earlier problem of the full-bleed background not going all the way to the edges without taking into account the lateral spacing on the grid container (in this case, the body).

.full-bleed-back-xtra {
  position: relative;
  z-index: 1
}

.full-bleed-back-xtra::before {
  position: absolute;
  inset: 0 calc(50% - 50cqw);
  z-index: -1;
  content: ''
}

The inset makes our pseudo-element stretch across the entire padding-box of its parent vertically, and extend laterally outside of it (hence the minus sign) by half the available viewport space (viewport width minus any scrollbars) minus half its parent's width.

The negative z-index on the pseudo ensures it’s behind the element’s text content. The positive z-index on the element itself ensures the pseudo doesn’t end up behind the grid container’s background too.

The pseudo background can now be a cover image:

background: var(--img-pos, var(--img) 50%)/ cover

I’m taking this approach here to allow easily overriding the background-position together with each image if necessary. In such a case, we set --img-pos:

--img-pos: url(my-back-img.jpg) 35% 65%

Otherwise, we only set --img and the default of 50% gets used:

--img: url(my-back-img.jpg)

In the particular case of our demos so far, which use a light or dark theme to respect user preferences, we've also set a light-dark() value for the background-color, as well as an overlay blend mode to either brighten or darken our full-bleed background depending on the theme. This ensures the header text remains readable in both scenarios.
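Put together, that could look something along the lines of this sketch (the exact colors are assumptions on my part, not the ones from the demo):

.full-bleed-back-xtra::before {
  /* cover image over a theme-dependent base color */
  background:
    var(--img-pos, var(--img) 50%)/ cover
    light-dark(#ededed, #121212);
  /* the overlay blend brightens over the light base, darkens over the dark one */
  background-blend-mode: overlay;
}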

Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. On the very first row, we have a limited width header with a solid full-bleed image background. On other rows, we have full-bleed elements that expand all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On other rows, we have breakout boxes that are not the same width as their grid cells, but are not wide enough to be full-bleed.
one column grid that has a tightly fit limited width header with a full-bleed image background; it also has a full-bleed image and a breakout element, as well as a DevTools grid overlay highlighting the grid lines (live demo)

We can also have multiple layers of gradients, maybe even blended, maybe even with a filter making them grainy (something that would help with the visible banding noticed in the border-image method examples) or creating a halftone pattern.
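As a rough sketch of what such a layered value could be (the gradients here are made up for illustration, and #grain is assumed to be an SVG feTurbulence-based filter defined somewhere in the page):

.full-bleed-back-xtra::before {
  /* two blended gradient layers */
  background:
    radial-gradient(100% 75% at 50% 0, #f6d36566, #0000),
    linear-gradient(30deg, #583c87, #e45a84);
  background-blend-mode: overlay;
  /* grainy noise on top to help with banding */
  filter: url(#grain) saturate(1.2);
}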

Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. On the very first row, we have a limited width header with a solid full-bleed multi-gradient, filtered background. On other rows, we have full-bleed elements that expand all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On other rows, we have breakout boxes that are not the same width as their grid cells, but are not wide enough to be full-bleed.
one column grid that has a tightly fit limited width header with a filtered full-bleed multi-layer background; it also has a full-bleed image and a breakout element, as well as a DevTools grid overlay highlighting the grid lines (live demo)

Combining Options

We can of course also have a breakout element with a full-bleed background – in this case, we give it both classes, break-elem and full-bleed-back.

Our recipe page header, for example, probably looks better as a breakout element in addition to having a full-bleed background.

If the breakout elements in general have a border or their own specific background, we should ensure these don’t apply if they also have full-bleed backgrounds:

.break-elem:not([class*='full-bleed-back']) {
  border: solid 1px;
  background: var(--break-back)
}

Or we can opt to separate these visual prettifying styles from the layout ones. For example, in the Halloween example demos, I’ve opted to set the border and background styles via a separate class .box:

.box {
  border: solid 1px var(--c);
  background: lch(from var(--c) l c h/ .15)
}

And then set --c (as well as the warning icon in front) via a .box--warn class.
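A plausible sketch of what that class could look like (the color value and the icon are my assumptions, not the demo's):

.box--warn { --c: #e6a23c }

.box--warn::before { content: '⚠ ' }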

Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. On the very first row, we have a breakout header (wider than its containing grid cell, but not wide enough to be full-bleed) with a solid full-bleed multi-gradient, filtered background. On other rows, we have full-bleed elements that expand all across the entire available page width (the viewport width minus any vertical scrollbars we might have). On other rows, we have breakout boxes.
one column grid that has a breakout header with a filtered full-bleed multi-layer background; it also has a full-bleed image and a breakout element, as well as a DevTools grid overlay highlighting the grid lines (live demo)

Another thing to note here is that when a breakout element has a full-bleed background and we use the border-image tactic, we don't have to adapt our formula to take the lateral spacing into account, as that spacing is set as a padding on the breakout element itself and not on its grid parent.

The most important of these techniques can also be seen in the meta demo below, which has the relevant CSS in style elements that got display: block.

Nesting

We may also have a figure whose img is full-bleed, while the figcaption uses the normal column width (or maybe it’s a breakout element).

<figure>
  <img src='full-bleed-img.jpg' alt='image description' class='full-bleed-elem'>
  <figcaption>image caption</figcaption>
</figure>

Not much extra code is required here.

The simple modern solution is to make the img a block element so that the justify-self property set via the .full-bleed-elem middle aligns it even if it’s not a grid or flex item.

img.full-bleed-elem { display: block }

However, support for justify-self applying to block elements as per the current spec is still limited to only Chromium browsers at the moment. And while the Firefox bug seems to have had some activity lately, the Safari one looks like it’s dormant.

So the easy cross-browser way to get around that without any further computations is to make the figure a grid too in this case.

figure:has(.full-bleed-elem, .break-elem) {
  display: grid;
  grid-template-columns: 100%;
  width: 100%;
}
Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. This grid has a figure that is tightly fit inside its grid cell, but also has a full-bleed image spreading across the entire available horizontal space (the viewport width minus any vertical scrollbars) we might have. On other rows, we have full-bleed elements or breakout boxes (wider than their containing grid cells, but still not wide enough to be full-bleed on wide screens). We also have a combination that's a breakout header with a full-bleed background.
one column grid that has a figure, tightly fit horizontally within its containing column, but with a full-bleed image; there’s also a DevTools grid overlay highlighting the grid lines (live demo)

Floating Problems

This is a problem that got mentioned for the three column grid technique and I really didn’t understand it at first.

I started playing with CSS to change the look of a blog and for some reason, maybe because that was what the first example I saw looked like, I got into the habit of putting any floated thumbnail and the text next to it into a wrapper. And it never occurred to me that the wrapper wasn’t necessary until I started writing this article and looked into it.

Mostly because… I almost never need to float things. I did it for those blog post thumbnails fifteen years ago, for shape-outside demos, for drop caps, but that was about it. As far as layouts go, I just used position: absolute for years before going straight to flex and grid.

This was why I didn’t understand this problem at first. I thought that if you want to float something, you have to put it in a wrapper anyway. And at the end of the day, this is the easiest solution: put the entire content of our one column in a wrapper. In which case, until justify-self applying on block elements works cross-browser, we need to replace that declaration on full-bleed and breakout elements with our old friend margin-left:

margin-left: calc(50% - 50cqw)

This allows us to have floated elements inside the wrapper.
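Under the same assumptions as earlier (a 4em breakout extension), the wrapper-friendly version might look like this sketch, with the widths staying exactly as before:

.full-bleed-elem {
  width: 100cqw;
  margin-left: calc(50% - 50cqw);
}

.break-elem {
  width: min(100cqw, 100% + 2*4em);
  margin-left: max(-4em, 50% - 50cqw);
}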

Screenshot. Shows a middle aligned grid with a single column and multiple rows, something that's highlighted by the DevTools-enabled grid overlay. This grid has a single grid child that is tightly fit inside its containing column and acts as a wrapper for full-bleed elements, breakout boxes (wider than their containing grid cells, but still not wide enough to be full-bleed on wide screens), combinations of these like a breakout header with a full-bleed background. But this wrapper also allows its children to be floated.
one column grid that has a single grid child, tightly fit horizontally within its containing column and acting as a wrapper for the entire page content; since this wrapper has no flex or grid layout, its children can be floated (live demo)

Final Thoughts: Do we even really need grid?

At this point, getting to this floats solution raises the question: do we even really need grid?

It depends.

We could just set lateral padding or margin on the body instead.

I’d normally prefer padding in this case, as padding doesn’t restrict the background and sometimes we want some full viewport backdrop effects involving both the body and the html background.

Other times, we may want a background just for the limited width of the content in the middle, in which case margin on the body makes more sense.

If we want to be ready for both situations, then we're better off not setting any margin or padding on the body and just wrapping all content in a limited width, middle aligned (good old max-width plus auto margins) main that also gets a background.
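A minimal sketch of that no-grid alternative (the 60em cap and the background value are assumptions):

main {
  /* good old max-width plus auto margins */
  max-width: 60em;
  margin: 0 auto;
  /* a background for just the limited width content */
  background: #ededed;
}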

At the same time, my use cases for something like this have never involved using floats and have benefitted from other grid features like gaps, which make handling spacing easier than via margins or paddings.

So at the end of the day, the best solution is going to depend on the context.

new ware at Hanukkah Craft Market

Oct. 31st, 2025 04:18 am
[syndicated profile] fuzzychef_feed

Posted by Josh "FuzzyChef" Berkus

photo of a sluggakiah with a few candles in

Every year for the Leikam Hanukkah Craft Market I make a few new things to have them available for holiday sales. All of these will be available for purchase this Sunday (Nov 2).

First, pictured above, are two new Sluggakiahs, for those who want a more Pacific Northwest note to their festival of lights celebration.

three red syrup birds

I've made a few new syrup birds for your brunching pleasure.

sourdough crock with lid

Since I needed a new crock for our 20-year-old San Francisco sourdough, I made a couple extra to sell.

blue-and-yellow wine chiller with handles

Now, I realize it's totally the wrong time of year for this, but I also needed a new wine chiller for the back patio, and thought that other folks might want one as well. So there will be 3 on sale.

And, of course, there's always mugs:

two streaky orange-and-brown mugs

So, if you're in the Portland area, please drop by Leikam Brewing on Sunday and check out what I and nine other artisans have to offer.

If you're further away, check my online store next week for more goodies for the holiday season.

View Transitions Feature Explorer

Oct. 30th, 2025 11:58 pm
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

It’s a generally good thing to know that browser support for browser features isn’t always quite a simple as yes or no. There can be sub-features involved as things evolve that roll out in browsers at different times. View Transitions is an example of that going on right now. There are “Same-Document View Transitions” supported in all browsers right now, but “Cross-Document View Transitions” are still missing Firefox support. And there are quite a few more related features beyond that! Bramus has a nice way to explore that.

kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
[personal profile] kaberett

I supplied knives and fine motor control; the toddler supplied art direction; the toddler's resident adults supplied outlines for me to cut around (and candles, and matches, and in fact all of the cutting of the tiny pumpkin).

one large and one small pumpkin, carved, with candles, in the dark

Wipeout

Oct. 30th, 2025 02:44 pm
dorchadas: (Warcraft Face your Nightmares)
[personal profile] dorchadas
Walking Laila to school today and she wanted me to carry her, so I picked her up. Why not--there's only so much longer that I'll be able to hold on to her, since she's already around 18 kg (~40 lbs) and carrying her for long distances is hard work! So when she asks me to carry her, I'll try to do it as much as I can. I already know that she's capable of walking for long distances herself if she has to, so I don't have any concerns about her ability.

Anyway, I was walking and all of a sudden I felt my boot hit a rise in the sidewalk. I took a step forward, still off balance, took another step, and then started falling. As I fell, I made sure to twist a bit so I wouldn't land directly on Laila, and her backpack cushioned her fall too, but I still spent a very scary ten seconds or so trying to get her to respond to me when I asked if she was okay. I asked her if she was hurt anywhere, she shook her head, and I had her stand up to make sure her balance was working okay. A couple bystanders were also there and asked if she was okay, then asked if I was okay, but I said I'd be fine and took Laila to school.

Well, I wasn't fine per se. I had a bunch of blood dripping from a laceration on my wrist, and when I got home my pants were stuck to my knee by blood. I got home and [instagram.com profile] sashagee cleaned up the wounds and bandaged them, and now I'm sitting down with some ice on my knee. I was still able to walk just fine in my after-lunch walk, so I'll live. And so will Laila, and that's the important thing.

A Rare Silicon Switch

Oct. 30th, 2025 12:10 pm
[syndicated profile] in_the_pipeline_feed

Here’s another example of an idea that has been kicking around for years in medicinal chemistry without ever really breaking through: substituting a silicon atom for a carbon. To be fair, most of the time this doesn’t seem to do all that much, while introducing various uncertainties around ADME and toxicity (since we don’t have all that much experience with organosilanes as drugs). So you can see why we’re not overrun with “silyl switch” compounds. But at the same time, there really do seem to be instances where it can help.

For instance, there was (is?) a camptothecin derivative, known variously as karenitecin, cositecan, or BNP1350, that had an alkylsilyl side chain that was claimed to help it be less prone to being removed by efflux pumps. As far as I can tell, this one has kicked around in a number of Phase I and II trials without ever advancing. And a silane analog of haloperidol did indeed show a different (and quite possibly beneficial) metabolic profile, but I don’t think that one even made it to the clinic. As I mentioned in that blog post linked in the first paragraph, I sent in a trimethylsilyl-for-t-butyl switch compound one time in an analoging program, and I have to say that the response from the project team was not a favorable one. But as often happens, there seemed to be no particular advantage to the TMS analog, so it didn’t become an issue, other than in the “Please don’t do that again” way.

This new paper (first link in the post) describes a silyl-containing KIF18A kinesin inhibitor, a class of compounds with several representatives, some of which are already in the clinic for susceptible cancers. Like the example mentioned above, this switch (a silapiperidine for plain piperidine) seems to have improved efflux stability. I'm not completely sure how this occurs, though - the silicon analogs are a bit less hydrophilic, but what efflux transport proteins like and dislike is still a mystery to me (and no, not just to me!)

I find it hard to believe that “silicon slows down efflux pumping” will turn out to be a general rule, but I think it’s an idea that’s worth testing if your particular project is having that sort of trouble. Just be ready for some pushback! We’ll see if this compound (ATX020) advances. The company behind it (Accent Therapeutics) is calling it a “tool compound”, but we’ll see if they have the nerve (or the need!) to take a similar organosilane into human trials. . .

Blasting Through Cells

Oct. 29th, 2025 12:04 pm
[syndicated profile] in_the_pipeline_feed

I think that I can guarantee that you haven't heard this phrase before: "ballistic microscopy", the subject of this recent preprint. What the authors describe is a combination of near-medieval technology on the one hand and cutting-edge analytical work on the other. They are bombarding cells with focused streams of gold nanoparticles (which range from 50 to 1000 nm in diameter). These things are traveling at speeds up to 1 km/sec (over 2000 miles per hour (edit: fixed!)) and blast straight through their cellular targets. That's a thickness of 2 to 4 microns for something like a HEK cell, and those velocities mean that the transit takes only a few nanoseconds.

They come out the other side of the cell and splat into a hydrogel matrix on the other side. They’ve already slowed down a bit from their passage through the cell, and the hydrogel brings them to a halt. But when you examine them there, you find that they have carried along small amounts of the cellular material with them. It’s only a few attoliters, but by gosh that’s enough for current proteomic, nucleic acid, and cryo-EM techniques to get a handle on what’s in there. So what you get is an instantaneous snapshot of the cellular contents from a very small, very well defined needle-stick through a living cell. (People have actually done that, sampling cells with micro-needles and micro-straws, but this seems to be a step further). 

You can tell that the authors are enjoying themselves: the technique itself is abbreviated BaM, and the hydrogel sample obtained is referred to as a “SPLAT-MAP”. (If that’s an acronym it seems to be undefined in the manuscript!) You get a lot of information from doing fluorescent imaging while the bombardment is underway - location of the particle stream hitting the cell (complete with streaks through the cytoplasm in high-speed side views), xy spatial distribution on the hydrogel itself, and depth (z) which is dependent on the size of the particles involved. 

The group tested this in lysate from cells that had been expressing GFP-labeled actin protein, and sure enough: the particles entrained fluorescent bits of cell material that corresponded to the labeled protein. And those particles penetrated less into the hydrogel braking material than control particles that were shot in directly, showing that they had experienced drag from schlorking through the cellular contents (my term, which all are welcome to if this technique catches on). Moving on to real cells, HEK293 cells were stained for nuclear membrane and cell membrane (to aid in IDing the now-fluorescent particles after capture), and they could be cultured right on top of the hydrogel surface. 

If the fluorescent label was applied instead to another protein, then everything around that protein could be checked out. This was done with the known condensate-former CLIP170, and the nanoparticles pulled condensate droplets right out of the cell. Proteomic analysis showed 641 proteins (with a large number of them annotated as RNA binders, which fits with previous condensate work). One was keratin-18, which hadn’t been seen in these before but which seems to form filaments inside the droplets. But about 17% of them are unannotated, which is just the sort of thing you’d like to dredge up with a method like this. 

Electron microscopy of the particles and their associated cellular samples showed that the cell contents that were brought along tended to be bunched up on the high-curvature edges of the gold particles (and not wetting the entire surface) and that they tended to be membrane-enclosed, sometimes with more than one membrane layer. There’s going to have to be more work done to interpret that, but it does seem significant (and might represent a type of sampling bias with this technique?)

There are a lot of things to be done in general! Zapping all sorts of cellular substructures, in both healthy and diseased or stressed cells, is an obvious set of experiments, and it’ll be interesting to see if some protein distribution maps can be produced from such runs. It’s certainly a new label-free assay technique, and I urge everyone interested in it to fire away and collect piles of data!

Gamer Brainrot

Oct. 29th, 2025 03:26 pm
dorchadas: (Mario SMB3 Boss Bass Eating Mario)
[personal profile] dorchadas
I try not to be an old man yelling at clouds about video games, but something that happened recently in my work on Cataclysm: Dark Days Ahead is testing that.

So CDDA has a mod called Bombastic Perks that adds Fallout-style perks you can take. Some of them are generic gamer perks, like +5% HP or -20% falling damage, that kind of thing, and some of them are weirder. There's one that lets you randomly find soda cans, and one that makes you so bony that your skeleton counts as armor. We recently got the ability to visit other dimensions in CDDA, so I added a perk called Closetland--when you're in a closet, you can walk the secret paths to Closetland, where you can take a breather, bandage your wounds, drop some items, and then get back into the fight. As a counterbalance, you get a message about how tired you are when you enter, you rapidly become more and more tired as you stay (and get messages about the shadows growing darker as you yawn), and if you fall asleep, the Boogeyman gets you and you die. The logic here is that you're using the secret paths that the monsters (the ones every eight-year-old knows live in the closet and under the bed) take to get from house to house, and if they find you in their domain, well.

But boy did "Sorry, you die" bring people out of the woodwork. I had people complain that a dozen messages in the message log, a message when you enter Closetland, and a message that pops up asking if you want to keep doing whatever you're doing when you get tired were all not enough warning and I should let people just barely escape the first time so they know explicitly that falling asleep in Closetland can kill you. I had people suggest that I develop an entire separate gamemode where the Boogeyman is chasing you and you have to run. I had people asking to just take it out.

Keep in mind--this is a permadeath survival game about a zombie apocalypse. The intent is that you play it more than once, learning more with each death, and eventually develop survival strategies that will allow you to succeed most of the time while learning the common pitfalls and ways to die. You have to deliberately ignore multiple warnings that maybe this is a bad idea and stick around while you watch your character become preternaturally sleepy extremely fast in order for this to happen to you.

Rabble rabble gamers these days, back in my day if you didn't hurl the pie at the yeti in three seconds, you died! When you played Angband you'd turn a corner and get blinded, stunned, breathed on, and die in less than a turn! The game is about planning and avoiding these things, not about having the personal power to defeat all challengers. But people are so used to games being power fantasies that they can't handle a no-win encounter even in a game about the inevitable end of the world.

[pain] working on an articulation

Oct. 29th, 2025 09:48 pm
kaberett: Photo of a pile of old leather-bound books. (books)
[personal profile] kaberett

I have, in the latest book, got to The Obligatory Page And A Half On Descartes, but this one makes a point of describing it as a "reductionistic approach".

The Thing Is, of course, that much like the Bohr model (for all that it's 250 years younger, give or take), for many and indeed quite plausibly most purposes, The Cartesian Model Of Pain is, for most people and for most purposes, good enough: if you've got to GCSE level then you'll have met the Bohr model; if you get to A-level, you'll start learning about atomic orbitals; and then by the time I was starting my PhD I had to throw out the approximation of atomic nuclei as volumeless points (the reason you get measurable and interpretable stable isotope fractionations of thallium is -- mostly! -- down to the nuclear field shift effect).

Similarly, most of the time you don't actually need to know anything beyond the lie-to-children first-approximation of "if you're experiencing pain, that means something is damaging you, so work out what it is and stop doing that". The Bohr model is good enough for a general understanding of atomic bonds and chemical reactions; specificity theory is good enough for day-to-day encounters with acute pain.

The problem with specificity theory isn't actually that it's wrong (although it is); it's that it gets misapplied in cases where Something More Complicated is going on in ways that obscure even the possibility of Something More Complicated. The problem, as far as I'm concerned, is that it doesn't get presented with the footnote of "this isn't the whole story, and for understanding anything beyond very short-term acute pain you need to go into considerably more detail". But most people aren't in more complex pain than that! Estimates run at ~20% of the population living with chronic pain, but even if we accept the 43% that sometimes gets quoted about the UK, most people do not live with chronic pain.

There's probably an analogy here with the "Migraine Is Not Just A Bad Headache" line (and indeed I'm getting increasingly irritated with all of these books discussing migraine as though the problem is solely and entirely the pain, as opposed to, you know, the rest of the disabling neurological symptoms) but I'm upping my amitriptyline again and it's past my bedtime so I'm not going to work all the details of that out now, but, like, Pain Is Not Just A Tissue Damage, style of thing.

Anyway. The point is that I still haven't actually read Descartes (I've got the posthumously published and much more posthumously translated Treatise on Man in PDF, I just haven't got to it yet) and nonetheless I am bristling at people describing him as reductionist (derogatory). Just. We aren't going to do better if we also persist in wilful misunderstandings and misrepresentations for the sake of slagging off someone who has been dead for three hundred and seventy-five years instead of recognising the actual value inherent in "good enough for most people most of the time", and how that value complicates attempts at more nuance! How about we actually acknowledge the reasons the idea is so compelling, huh, and discuss the circumstances under which the approximation holds versus breaks down? How about that for an idea.

Junior Dev Tip: “Scroll Up”

Oct. 29th, 2025 04:47 pm
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

Alex Riviere shares a quick story of a junior developer not looking in the right places for error messaging that would directly help them.

… the tools do provide you with information most of the time. You genuinely just need to take a few extra seconds and read what it is saying.
