this post is not Descartes apologia

Nov. 20th, 2025 10:25 pm
kaberett: Photo of a pile of old leather-bound books. (books)
[personal profile] kaberett

but I did spend this morning sat down with my printouts and my page markers and my highlighters, and I did this evening take some photos of the relevant pages of a book I've loaned to someone else, and the essay (I say, grandiosely) tentatively entitled The Obligatory Page And A Half On Descartes: against a new dualism is definitely In The Works.

I haven't quite worked out the It is a truth universally acknowledged... opening sentence, and it's probably mostly going to be a series of quotations accompanied by EMPHATIC GESTICULATION in the form of CAPSLOCK, but it's not actually (in its entirety) germane to The Book, so here the indignant yelling can go.

[syndicated profile] frontendmasters_feed

Posted by Sunkanmi Fafowora

3D CSS has been around for a while. The earliest trace of it you can find is the W3C's 2009 specification on 3D transforms. That's exactly 15 years after CSS was introduced to the web in 1994, so it's been around a really long time!

A common pattern you'll see in 3D transformations is the layered pattern, which gives you the illusion of 3D in CSS. It's mostly used with text, like this demo below from Noah Blon:

Or in Amit Sheen’s demos like this one:

The layered pattern, as its name suggests, stacks multiple items into layers, adjusting the Z position and colors of each item with respect to their index value in order to create an illusion of 3D.

Yes, most 3D CSS is just an illusion. However, did you know that we can apply the same pattern to images? In this article, we will look into how to use the layered pattern to create a 3D image in CSS.

In order for you to truly understand how 3D CSS works, here's a quick list of things you need to know before proceeding:

  1. How CSS perspective works
  2. A good understanding of the x, y, and z coordinates
  3. Sometimes, you have to think in cubes (bonus)

This layered pattern can be an accessibility problem because duplicated content is read as many times as it's repeated. That's true for text; for images, however, it can be circumvented by leaving the alt attribute empty on every image but the first, or by setting all the duplicated divs to aria-hidden="true" (which also works for text). Either approach hides the duplicated content from assistive technology.

The HTML

Let’s start with the basic markup structure. We’re linking up an identical <img> over and over in multiple layers:

<div class="scene"> 
  <div class="image-container">
    <div class="original">
      <img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt="Gradient colored image with all colors present starting from the center point">
    </div>
    
    <div class="layers" aria-hidden="true">
      <div class="layer" style="--i: 1;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 2;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 3;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 4;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 5;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      ...
      <div class="layer" style="--i: 35;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
    </div>
  </div>
</div>

The first <div>, with the "scene" class, wraps all the layers. Each layer <div> has an index custom property --i set in its style attribute. This index value is very important, as we will use it later to calculate positioning values. Notice how the <div> with class "original" doesn't have the aria-hidden attribute? That's because we want the screen reader to read that first image and not the rest.

We’re using the style indexing approach and not sibling-index() / sibling-count() because they are not yet supported globally across all major browsers. In the future with better support, we could remove the inline styles and use sibling-index() wherever we’re using --i in calculations and sibling-count() when you need to total (35 in this blog post).

It’s important we start with a container for our scene as well because we will apply the CSS perspective property, which controls the depth of our 3D element.

The CSS

Setting the scene, we use a 1000px value for the perspective. A large perspective value is typically good, so the 3D element won't be too close to the user, but feel free to use any perspective value of your choice.

We then set all the elements, including the image container <div>s, to have a transform-style of preserve-3d. This allows the stacked items to be positioned in the 3D space.

.scene {
  perspective: 1000px;
}

.scene * {
  transform-style: preserve-3d;
}

Everything looks a little janky, but that’s expected until we add a bit more CSS to make it look cool.

We need to calculate the offset distance between the stacked layers, that is, how far apart each layer sits from the next, which determines whether the stack reads as one solid object or as completely separate slices.

[Illustration: layered blocks showing layer offsets in a 3D perspective with a gradient background.]

On the image container, we set two variables: the layer offset distance (just 2.5px) and the total layer count. These will be used to calculate each layer's offset on the Z-axis, and the brightness and saturation between them, so the stack appears as a single, whole 3D element.

.image-container {
  ...
  --layers-count: 35;
  --layer-offset: 2.5px;
}

That’s not all: we now calculate the distance between each layer using the index --i and the offset in the translateZ() function inside the layer class:

.layer {
  transform: translateZ(calc(var(--i) * var(--layer-offset)));
  ...
}

The next step is to use a normalized value (the raw index would be too big) to calculate how dark and saturated we want each image to be, so that a layer appears darker in 3D the lower its index value, i.e.:

.layer {
  ...
  --n: calc(var(--i) / var(--layers-count));
  filter: 
    brightness(calc(0.4 + var(--n) * 0.8))
    saturate(calc(0.8 + var(--n) * 0.4));
}

I’m adding 0.4 to the product of --n and 0.8. If --n is 2/35, for example, our brightness value works out to 0.45 (0.4 + 2/35 × 0.8) and the saturation to 0.82. If --n is 3/35, the brightness is 0.47 and the saturation 0.83, and so on.

And that’s it! We’re all set! (sike! Not yet).

We just need to set the position property to absolute and inset to 0 on all the layers so they sit on top of each other. Don’t forget to set the width and height to any desired length, and the position property of the image-container class to relative while you’re at it. Here’s the code if you’ve been following:

.image-container {
  position: relative;
  width: 300px;
  height: 300px;
  transform: rotateX(20deg) rotateY(-10deg);
  --layers-count: 35;
  --layer-offset: 2.5px;
}

.layers,
.layer {
  position: absolute;
  inset: 0;
}

.layer {
  transform: translateZ(calc(var(--i) * var(--layer-offset)));
  --n: calc(var(--i) / var(--layers-count));
  filter: 
    brightness(calc(0.4 + var(--n) * 0.8))
    saturate(calc(0.8 + var(--n) * 0.4));
}

Here’s a quick breakdown of the mathematical calculations going on:

  • translateZ() spaces out the stacked items, moving each layer by its index multiplied by --layer-offset. This spreads the layers along the Z-axis, which is our main 3D effect here.
  • --n is used to normalize the index to a 0-1 range
  • filter is then used with --n to calculate the saturation and brightness of the 3D element

That’s actually where most of the logic lies. This next part is just basic sizing, positioning, and polish.

.layer img {
  width: 100%;
  height: 100%;
  object-fit: cover;
  border-radius: 20px;
  display: block;
}

.original {
  position: relative;
  z-index: 1;
  width: 18.75rem;
  height: 18.75rem;
}

.original img {
  width: 100%;
  height: 100%;
  object-fit: cover;
  border-radius: 20px;
  display: block;
  box-shadow: 0 20px 60px rgba(0 0 0 / 0.6);
}

Check out the final result. Doesn’t it look so cool?!

We’re not done yet!

Who’s ready for a little bit more interactivity? 🙋🏾 I know I am. Let’s add a rotation animation to emphasize the 3D effect.

.image-container {
  ...
  animation: rotate3d 8s ease-in-out infinite alternate; 
}

@keyframes rotate3d {
  0% {
    transform: rotateX(-20deg) rotateY(30deg);
  }
  100% {
    transform: rotateX(-15deg) rotateY(-40deg);
  }
}

Our final result looks like this! Isn’t this so cool?

Bonus: Adding a control feature

Remember how this article is about images and not gradients? Although the image used so far happens to be an image of a gradient, I’d like to take things a step further by making it possible to control things like the perspective, the layer offset, and the rotation. The bonus step is adding a set of form controls.

We first need to add the boilerplate HTML and styling for the controls:

 <div class="controls">
  <h3>3D Controls</h3>
  <label>Perspective: <span id="perspValue">1000px</span></label>
  <input type="range" id="perspective" min="200" max="2000" value="1000">

  <label>Layer Offset: <span id="offsetValue">2.5px</span></label>
  <input type="range" id="offset" min="0.5" max="5" step="0.1" value="2.5">

  <label>Rotate X: <span id="rotXValue">20°</span></label>
  <input type="range" id="rotateX" min="-90" max="90" value="20">

  <label>Rotate Y: <span id="rotYValue">-10°</span></label>
  <input type="range" id="rotateY" min="-90" max="90" value="-10">

  <div class="image-selector">
    <label>Try Different Images:</label>
    <button data-img="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" class="active">Abstract Gradient</button>
    <button data-img="https://images.unsplash.com/photo-1506905925346-21bda4d32df4">Mountain Landscape</button>
    <button data-img="https://images.unsplash.com/photo-1518791841217-8f162f1e1131">Cat Portrait</button>
    <button data-img="https://images.unsplash.com/photo-1470071459604-3b5ec3a7fe05">Foggy Forest</button>
  </div>
</div>

This gives us access to a host of images to select from, and we can also rotate the main 3D element as we please using range <input>s and <button>s.

The CSS adds basic styles to the form controls. Nothing too complicated:

.controls {
  display: flex;
  flex-direction: column;
  justify-content: space-between;
  position: absolute;
  top: 1.2rem;
  right: 1.2rem;
  background: rgba(255, 255, 255, 0.1);
  backdrop-filter: blur(10px);
  padding: 1.15rem;
  height: 20rem;
  border-radius: 10px;
  overflow-y: scroll;
  color: white;
  max-width: 250px;
}

.controls h3 {
  margin-bottom: 15px;
  font-size: 1.15rem;
}

.controls label {
  display: flex;
  justify-content: space-between;
  gap: 0.5rem;
  margin: 15px 0 5px;
  font-size: 0.8125rem;
  font-weight: 500;
}

.controls input {
  width: 100%;
}

.controls span {
  font-weight: bold;
}

.image-selector {
  margin-top: 20px;
  padding-top: 20px;
  border-top: 1px solid rgb(255 255 255 / 0.2);
}

.image-selector button {
  width: 100%;
  padding: 8px;
  margin: 5px 0;
  background: rgb(255 255 255 / 0.2);
  border: 1px solid rgb(255 255 255 / 0.3);
  border-radius: 5px;
  color: white;
  cursor: pointer;
  font-size: 12px;
  transition: all 0.3s;
}

.image-selector button:hover {
  background: rgb(255 255 255 / 0.3);
}

.image-selector button.active {
  background: rgb(255 255 255 / 0.4);
  border-color: white;
}

This creates the controls like we want. We haven’t finished, though. Try making some adjustments, and you’ll notice that nothing happens. Why? Because we haven’t applied any JavaScript!

The code below wires up the rotation values on the X and Y axes, the layer offset, and the perspective. It also swaps the image for any of the other three specified:

const scene = document.querySelector(".scene");
const container = document.querySelector(".image-container");

document.getElementById("perspective").addEventListener("input", (e) => {
  const val = e.target.value;
  scene.style.perspective = val + "px";
  document.getElementById("perspValue").textContent = val + "px";
});

document.getElementById("offset").addEventListener("input", (e) => {
  const val = e.target.value;
  container.style.setProperty("--layer-offset", val + "px");
  document.getElementById("offsetValue").textContent = val + "px";
});

document.getElementById("rotateX").addEventListener("input", (e) => {
  const val = e.target.value;
  updateRotation();
  document.getElementById("rotXValue").textContent = val + "°";
});

document.getElementById("rotateY").addEventListener("input", (e) => {
  const val = e.target.value;
  updateRotation();
  document.getElementById("rotYValue").textContent = val + "°";
});

function updateRotation() {
  const x = document.getElementById("rotateX").value;
  const y = document.getElementById("rotateY").value;
  container.style.transform = `rotateX(${x}deg) rotateY(${y}deg)`;
}

// Image selector
document.querySelectorAll(".image-selector button").forEach((btn) => {
  btn.addEventListener("click", () => {
    const imgUrl = btn.dataset.img;

    // Update all images
    document.querySelectorAll("img").forEach((img) => {
      img.src = imgUrl;
    });

    // Update active button
    document
      .querySelectorAll(".image-selector button")
      .forEach((b) => b.classList.remove("active"));
    btn.classList.add("active");
  });
});

Plus we pop into the CSS and remove the animation, as we can control the rotation ourselves now. Voilà! We have a fully working demo with various form controls and an image-swapping feature. Go on, change the image to something else to view the result.

Bonus: 3D CSS… Steak

Using this same technique, you know what else we can build? A 3D CSS steak!

It’s currently in black & white. Let’s make it show some color, shall we?

Summary of things I’m doing to make this work:

  • Create a scene, adding the CSS perspective property
  • Duplicate a single image into separate containers
  • Apply transform-style’s preserve-3d on all divs to position them in the 3D space
  • Calculate the normalized value of all items by dividing the index by the total number of images
  • Calculate the brightness of each image container by multiplying the normalized value by 0.9
  • Set translateZ() based on the index of each element multiplied by an offset value (in my case, 1.5px for the first image and 0.5px for the second), and that’s it!! See the condensed sketch below.
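
Here’s a condensed CSS sketch of those steps; the 0.9 brightness multiplier comes from the list above, and the rest of the values are illustrative:

.scene { perspective: 1000px; }

.scene * { transform-style: preserve-3d; }

.layer {
  /* normalized index: this layer's index over the total image count */
  --n: calc(var(--i) / var(--layers-count));
  /* darker toward the bottom of the stack */
  filter: brightness(calc(var(--n) * 0.9));
  /* spread the duplicates along the Z-axis */
  transform: translateZ(calc(var(--i) * var(--layer-offset)));
}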

That was fun! Let me know if you’ve done this or tried to do something like it in your own work before.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Some time ago, I discussed the technique of reserving a block of address space and committing memory on demand. In the code, I left the exercise

    // Exercise: What happens if the faulting memory access
    // spans two pages?

As far as I can tell, nobody has addressed the exercise, so I’ll answer it.

If the faulting memory access spans two pages, neither of which is present, then an access violation is raised for one of the pages. (The processor chooses which one.) The exception handler commits that page and then requests execution to continue.

When execution continues, it tries to access the memory again, and the access still fails because one of the required pages is missing. But this time the faulting address will be an address on the missing page.

In practice, what happens is that the access violation is raised repeatedly until all of the problems are fixed. Each time it is raised, an address is reported which, if repaired, would allow the instruction to make further progress. The hope is that eventually, you will fix all of the problems,¹ and execution can resume normally.
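
For concreteness, here’s a minimal sketch of such a handler (my illustration, not the code from the original article), where AddressIsInOurReservedRegion is a hypothetical range check:

#include <windows.h>

// Hypothetical: returns TRUE if the address lies inside our reserved block.
BOOL AddressIsInOurReservedRegion(void* address);

// Register with AddVectoredExceptionHandler (or wrap the access in __try).
LONG CALLBACK CommitOnDemandHandler(EXCEPTION_POINTERS* info)
{
    EXCEPTION_RECORD* record = info->ExceptionRecord;
    if (record->ExceptionCode == EXCEPTION_ACCESS_VIOLATION)
    {
        void* faultingAddress = (void*)record->ExceptionInformation[1];
        if (AddressIsInOurReservedRegion(faultingAddress))
        {
            // Commit the single page containing the faulting address.
            if (VirtualAlloc(faultingAddress, 1, MEM_COMMIT, PAGE_READWRITE))
            {
                // Retry the instruction. If the access straddles two pages,
                // it simply faults again with the other page's address,
                // and we end up back here to commit that one too.
                return EXCEPTION_CONTINUE_EXECUTION;
            }
        }
    }
    return EXCEPTION_CONTINUE_SEARCH;
}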

Bonus chatter: For the x86-64 and x86-32 instruction sets, I think the largest number of pages required by a single instruction is six, for the movsw instruction. This reads two bytes from ds:rsi/esi, and writes them to es:rdi/edi. If both addresses straddle a page, that’s four data pages. And the instruction itself is two bytes, so it can straddle two code pages, for a total of six. (There are other things that could go wrong, like an LDT page miss, but those will be handled in kernel mode and are not observable in user mode.)

Bonus exercises: I may as well answer the other exercises on that page. We don’t have to worry about integer overflow in the calculation of sizeof(WCHAR) * (Result + 1) because we have already verified that Result is in the range [1, MaxChars), so Result + 1 ≤ MaxChars, and we also know that MaxChars = Buffer.Length / sizeof(WCHAR), so multiplying both sides by sizeof(WCHAR) tells us that sizeof(WCHAR) * (Result + 1) ≤ Buffer.Length.

For the final exercise, we use CopyMemory instead of StringCchCopy because the result may contain embedded nulls, and we don’t want to stop copying at the first null.

¹ Though it’s possible that your attempt to fix one problem may undo a previous fix, putting you into an infinite cycle of repair.

The post In the commit-on-demand pattern, what happens if an access violation straddles multiple pages? appeared first on The Old New Thing.

Stop Using CustomEvent

Nov. 20th, 2025 12:03 am
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

A satisfying little rant from Justin Fagnani: Stop Using CustomEvent.

One point is that you’re forcing the consumer of the event to know that it’s custom and that the data has to be fished out of the detail property. Instead, you can subclass Event with new properties, and the consumer of that event can pull the data right off the event itself.
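
A minimal sketch of the pattern (the event name and payload here are made up):

// Subclass Event so the data rides on the event object itself.
class ItemSelectedEvent extends Event {
  constructor(item) {
    super("item-selected", { bubbles: true, composed: true });
    this.item = item; // no .detail indirection
  }
}

// Consuming: the listener never needs to know the event is "custom".
document.body.addEventListener("item-selected", (e) => console.log(e.item.id));

// Dispatching:
document.body.dispatchEvent(new ItemSelectedEvent({ id: 42 }));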

[food] breadferences

Nov. 19th, 2025 09:26 pm
kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
[personal profile] kaberett

At the weekend we made a mildly unusual detour to a fancy local bakery; one of the things they had on the shelves about which I went "oooh" was fig, hazelnut & anise bread. So that flavour combination (plus some spelt) went into the oven this morning!

The way bread normally works around here is that I make it, via the Ritual Question of Do You Have Any Breadferences (Bread Preferences). To facilitate this call and response, A List of our Usual Options, doubtless to be added to. Suggestions welcome. :)


[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Say you need to transfer a large amount of data between two processes. One way is to use shared memory. Is that the fastest way to do it? Can you do any better?

One argument against shared memory is that the sender will have to copy the data into the shared memory block, and the recipient will have to copy it out, resulting in two extra copies. On the other hand, Write­Process­Memory could theoretically do its job with just one copy, so would that be faster?

I mean, sure you could copy the data into and out of the shared memory block, but who says that you do? By the same logic, the sender will have to copy the data from the original source into a buffer that it passes to Write­Process­Memory, and the recipient will have to take the data out of the buffer that Write­Process­Memory copied into and copy it out into its own private location for processing.

I guess the theory behind the Write­Process­Memory design is that you could use Write­Process­Memory to copy directly from the original source, and place it directly in the recipient’s private location.

But you can do that with shared memory, too. Just have the source generate the data directly into the shared buffer, and have the recipient consume the data directly out of it. Now you have no copying at all!

Imagine two processes sharing memory like two people sitting with a piece of paper between them. The first person can write something on the piece of paper, and the second person can see it immediately. Indeed, the second person can see it so fast that they can see the partial message before the first person finishes writing it. This is surely faster than giving each person a separate piece of paper, having the first person write something on their paper, and then asking a messenger to copy the message to the second person’s paper.

The “extra copy” straw man in the shared memory double-copy would be like having three pieces of paper: One private to the first person, one private to the second person, and one shared. The first person writes their message on their private sheet of paper, and then they copy the message to the shared piece of paper, and the recipient sees the message on the shared piece of paper and copies it to their private piece of paper. Yes, this entails two copies, but that’s because you set it up that way. The shared memory didn’t force you to create separate copies. That was your idea.

Now, maybe the data generated by the first process is not in a form that the second process can consume directly. In that case, you will need to generate the data into a local buffer and then convert it into a consumable form in the shared buffer. But you had that problem with Write­Process­Memory anyway. If the first process’s data is not consumable by the second process, then it will need to convert it into a consumable form and pass that transformed copy to Write­Process­Memory. So Write­Process­Memory has those same extra copies as shared memory.

Furthermore, Write­Process­Memory doesn’t guarantee atomicity. The receiving process can see a partially copied buffer. It’s not like the system is going to freeze all the threads in the receiving process to prevent them from seeing a partially-copied buffer. With shared memory, you can control how the memory becomes visible to the other process, say by using an atomic write with release when setting the flag which indicates “Buffer is ready!” The Write­Process­Memory function doesn’t let you control how the memory is copied. It just copies it however it wants, so you will need some other way to ensure that the second process doesn’t consume a partial buffer.
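
As a minimal sketch of that flag-based handshake (my illustration; the struct layout and sizes are made up):

#include <windows.h>
#include <string.h>

// Illustrative layout for a memory-mapped region both processes can see.
typedef struct SHARED_BLOCK
{
    char data[4096];
    volatile LONG ready; // 0 = buffer not ready, 1 = buffer ready
} SHARED_BLOCK;

// Producer: generate directly into the shared buffer, then publish.
void Publish(SHARED_BLOCK* block, const char* src, size_t len)
{
    memcpy(block->data, src, len);
    // InterlockedExchange is a full barrier, so the data is visible
    // to the other process before the flag flips.
    InterlockedExchange(&block->ready, 1);
}

// Consumer: only touch the data once the flag says it's complete.
BOOL TryConsume(SHARED_BLOCK* block, char* dst, size_t len)
{
    if (InterlockedCompareExchange(&block->ready, 0, 0) != 1)
        return FALSE; // not ready yet; don't read a partial buffer
    memcpy(dst, block->data, len);
    return TRUE;
}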

Bonus insult: The Write­Process­Memory function internally makes two copies. It allocates a shared buffer, copies the data from the source process to the shared buffer, and then changes memory context to the destination process and copies the data from the shared buffer to the destination process. (It also has a cap on the size of the shared buffer, so if you are writing a lot of memory, it may have to go back and forth multiple times until it copies all of the memory you requested.) So you are guaranteed two copies with Write­Process­Memory.

Bonus chatter: Another strike against Write­Process­Memory is the security implications. It requires PROCESS_VM_WRITE, which basically gives full control of the process. Shared memory, on the other hand, requires only that you find a way to get the shared memory handle to the other process. The originating process does not need any special access to the second process aside from a way to get the handle to it. It doesn’t gain write access to all of the second process’s memory; only the part of the memory that is shared. This adheres to the principle of least access, making it suitable for cases where the two processes are running in different security contexts.

Bonus bonus chatter: The primacy of shared memory is clear once you understand that shared memory is accomplished by memory mapping tricks. It is literally the same memory, just being viewed via two different apertures.

The post Is WriteProcessMemory faster than shared memory for transferring data between two processes? appeared first on The Old New Thing.

Microspeak: Little-r

Nov. 18th, 2025 03:00 pm
[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Remember, Microspeak is not necessarily jargon exclusive to Microsoft, but it’s jargon that you need to know if you work at Microsoft.

You may receive an email message that was sent to a large group of people, and it will say something like “Little-r me if you have any questions.” What is a little-r?

The term “little-r”¹ (also spelled “little ‘r'” or other variations on the same) means to reply only to the sender, rather than replying to everyone (“reply all”). My understanding is that this term is popular outside Microsoft as well as within it.

As I noted some time ago, employees in the early days of electronic mail at Microsoft used a serial terminal that was connected to their Xenix email server, and they used the classic Unix “mail” program to read their email. In that program, the command to reply only to the email sender was (and still is) a lowercase “r”. The command to reply to everyone is a capital “R”. And the “little-r” / “big-R” commands were carried forward into the WZMAIL program that most employees used as a front end to their Xenix mail server.

These keyboard shortcuts still linger in Outlook, where Ctrl+R replies to the sender and Ctrl+Shift+R replies to all. If you pretend that the Ctrl key isn’t involved, this is just the old “little-r” and “big-R”.

Related reading: Why does Outlook map Ctrl+F to Forward instead of Find, like all right-thinking programs? Another case of keyboard shortcut preservation.

¹ Note that this is pronounced “little R”, and not “littler”.

The post Microspeak: Little-r appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Igor Levicki asked for a plain C version of the sample code to detect whether Windows is running in S-Mode. I didn’t write one for two reasons. First, I didn’t realize that so many people still tried to use COM from plain C. And second, I didn’t realize that the people who try to use COM from plain C are not sufficiently familiar with how COM works at the ABI level to perform the mechanical conversion themselves.

  • p->Method(args) becomes p->lpVtbl->Method(p, args).
  • Copying a C++ smart COM pointer consists of copying the raw pointer and performing an AddRef if the raw pointer is non-null.
  • Destroying a C++ smart COM pointer consists of performing a Release if the raw pointer is non-null.
  • Before overwriting a C++ smart COM pointer, remember the old pointer value, and if it is non-null, Release it after you AddRef the new non-null pointer value, as sketched below.
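
As a quick illustration of those last rules, a safe overwrite in plain C might look like this (my sketch, not code from the original article):

#include <unknwn.h>

// Overwrite *dest with src, following the AddRef-before-Release rule.
// This is safe even when *dest and src point at the same object.
void ReplaceComPointer(IUnknown** dest, IUnknown* src)
{
    IUnknown* old = *dest;              // remember the old pointer value
    if (src) src->lpVtbl->AddRef(src);  // AddRef the new non-null pointer
    *dest = src;
    if (old) old->lpVtbl->Release(old); // then Release the old one
}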

The wrinkle added by the Windows Runtime is that C doesn’t support namespaces, so the Windows Runtime type names are decorated by their namespaces.

And since you’re not using WRL, then you don’t get the WRL helpers for creating HSTRINGs, so you have to call the low-level HSTRING functions yourself.

#include <Windows.System.Profile.h>

HRESULT ShouldSuggestCompanion(BOOL* suggestCompanion)
{
    HSTRING_HEADER header;
    HSTRING className;
    HRESULT hr;

    hr = WindowsCreateStringReference(RuntimeClass_Windows_System_Profile_WindowsIntegrityPolicy,
                ARRAYSIZE(RuntimeClass_Windows_System_Profile_WindowsIntegrityPolicy) - 1,
                &header, &className);
    if (SUCCEEDED(hr))
    {
        __x_ABI_CWindows_CSystem_CProfile_CIWindowsIntegrityPolicyStatics* statics;
        hr = RoGetActivationFactory(className, &IID___x_ABI_CWindows_CSystem_CProfile_CIWindowsIntegrityPolicyStatics, (void**)&statics);
        if (SUCCEEDED(hr))
        {
            boolean isEnabled;
            hr = statics->lpVtbl->get_IsEnabled(statics, &isEnabled);
            if (SUCCEEDED(hr))
            {
                if (isEnabled)
                {
                    // System is in S-Mode
                    boolean canDisable;
                    hr = statics->lpVtbl->get_CanDisable(statics, &canDisable);
                    if (SUCCEEDED(hr))
                    {
                        if (canDisable)
                        {
                            // System is in S-Mode but can be taken out of S-Mode
                            *suggestCompanion = TRUE;
                        }
                        else
                        {
                            // System is locked into S-Mode
                            *suggestCompanion = FALSE;
                        }
                    }
                }
                else
                {
                    // System is not in S-Mode
                    *suggestCompanion = TRUE;
                }
            }
            statics->lpVtbl->Release(statics);
        }
    }

    return hr;
}

There is a micro-optimization here: We don’t need to call WindowsDeleteString(className) at the end because the string we created is a string reference, and those are not reference-counted. (All of the memory is preallocated; there is nothing to clean up.) That said, it doesn’t hurt to call WindowsDeleteString on a string reference; it’s just a nop.

It wasn’t that exciting. It was merely annoying. So that’s another reason I didn’t bother including a plain C sample.

Baltasar García offered a simplification to the original code:

bool s_mode = WindowsIntegrityPolicy.IsEnabled;
bool unlockable_s_mode = WindowsIntegrityPolicy.CanDisable;
bool suggestCompanion = !s_mode || (s_mode && unlockable_s_mode);

and Csaba Varga simplified it further:

bool suggestCompanion = !s_mode || unlockable_s_mode;

I agree that these are valid simplifications, but I spelled it out the long way to make the multi-step logic more explicit, and to allow you to insert other logic into the blocks that right now merely contain an explanatory comment and a Boolean assignment.

The post How can I detect that Windows is running in S-Mode, redux appeared first on The Old New Thing.

[embodiment] ... ha

Nov. 18th, 2025 10:52 pm
kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
[personal profile] kaberett

"Ugh," I thought, "why am I feeling weirdly migrainey? My Next Phase Of The Menstrual Cycle is very much not due for like another week? I've been weirdly super regular basically since it reasserted itself post-surgery?"

... TURNS OUT that I had lost track of time a bit and I'm not a solid week early at all, it's a whole two days. This Means Some Things:

  1. ... still super regular by my pre-surgical standards,
  2. I will not be at the worst stage of my cycle during Significant Travel next week, and LAST BUT VERY MUCH NOT LEAST
  3. the migraine is still in fact very clearly associated with hormonal changes even when I'm not expecting them, take THAT Headache Is The Second Most Common Form Of Psychosomatic Pain ~statistics~ (and ongoing anxiety).

Laila brain: who's right?

Nov. 18th, 2025 03:21 pm
dorchadas: (Azumanga Daioh Chiyo-chan big eyes)
[personal profile] dorchadas
Right now we're confused and a little worried.

So, a couple weeks ago Laila had a suite of neuropsychological tests done. [instagram.com profile] sashagee told me that at the time, the tester said that Laila's performance indicated symptoms of ADHD, but since this misfold in her brain that was causing her seizures was affecting her behavior, she didn't want to formally diagnose her with anything. She also said that based on the same criteria she would diagnose Laila with a mild intellectual disability, but for the same reason she didn't want to formally diagnose her since Laila might be getting brain surgery with the hope of stopping her seizures--I say might because they have to do an sEEG to see if surgery is even an option, since if the seizures are coming from a critical brain area then excising it would be an awful idea.

Well, we told [facebook.com profile] aaron.hosek about this. [facebook.com profile] aaron.hosek is a school psychologist and most of his work involves testing kids for ADHD, autism, learning disabilities, etc., and he seemed pretty dismissive of neuropsychologists. He was like, yeah, they give them a test on one day and think they've seen it all. He said not to worry too much about it. And Laila's pediatrician seemed to back that up, saying that while Laila was definitely behind, she was only months behind, not a full year, and her rapid progress once she entered school was a very good sign that with therapy she could catch up.

But the reason I'm writing this post is that we just got a copy of the neuropsych's report and it rates Laila "exceptionally low" in many areas, and the highest she got on any area was "low average" (this was on life skills, like using the bathroom, cleaning up after herself, dressing herself, etc). The recommendation was to put her in special education and have an individualized curriculum with one-on-one instruction where possible.

I want to think the pediatrician is right, but of course the pediatrician didn't do any tests. But this is literally [facebook.com profile] aaron.hosek's job that he does all day and he didn't seem to think there were major concerns, but he also hasn't actually tested Laila. We do have her in speech and occupational therapy for increasing her vocabulary and learning to better control her emotions and focus on tasks (we should have started it a year ago when we first were worried about her speech--the state agencies who did her testing did not do a great job if this is the outcome, since they let her out of services after only a few months), and they want to implement some of that in school for her too.

A lot of this is contingent on the results of her brain surgery (if eligible) and later possibly starting ADHD medication, since she has a very short attention span that's really hindering her learning and memory. On the other hand, since starting school her language has gotten much better--she's consistently using I and you correctly, answering questions with "yes" or "no" instead of just repeating the last choice you gave her back at her, narrating her actions to observers, and sometimes asking questions. But is this all just delusion on my part? I don't know--I guess it depends on if she keeps advancing quickly or not. We're going to have a consultation on an sEEG this week and we'll have to see what they say there.
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

The random() function in CSS is well-specced and just so damn fun. I had some, ahem, random ideas lately I figured I’d write up.

As I write, you can only see random() work in Safari Technology Preview. I’ve mostly used videos to show the visual output, as well as linked up the demos in case you have STP.

Rotating Star Field

I was playing this game BALL x PIT which makes use of this rotating background star field motif. See the video, snipped from one of the game’s promo videos.

I like how the star field is random, but rotates around the center, and in rings where the direction reverses.

My idea for attempting to reproduce it was to make a big stack of <div> containers whose top centers all sit at the exact center of the screen. Then apply:

  1. A random() height
  2. A random() rotation

Then if I put the “star” at the end (bottom center) of each <div>, I’ll have a random star field where I can later rotate the container around the center of the screen to get the look I was after.

Making a ton of divs is easy in Pug:

- let n = 0;
- let numberOfStars = 1000;
while n < numberOfStars
  - ++n
  div.starContainer
    div.star

Then the setup CSS is:

.starContainer {  
  position: absolute;
  left: 50%;
  top: 50%;
   
  rotate: random(0deg, 360deg);
  transform-origin: top center;
  display: grid;

  width: 4px;
  height: calc(1dvh * var(--c));

  &:nth-child(-n+500) {
    /* Inside Stars */
    --rand: random(--distAwayFromCenter, 0, 35);
  }

  &:nth-child(n+501) {
    /* Outside Stars */
    --rand: random(--distAwayFromCenter2, 35, 70);
  }

}

.star {
  place-self: end;
  background: red;
  height: calc(1dvh * var(--rand));
  width: random(2px, 6px);
  aspect-ratio: 1;
  border-radius: 50%;
}

If I chuck a low-opacity white border on each container so you can see how it works, we’ve got a star field going!

[Screenshots: with border on container; border removed.]

Then if we apply some animated rotation to those containers like:

...
transform-origin: top center;
animation: r 20s infinite linear;

&:nth-child(-n+500) {
  ...
  --rotation: 360deg;
}

&:nth-child(n+501) {
  ...
  --rotation: -360deg;
}

@keyframes r {
  100% {
    rotate: var(--rotation);
  }
}

We get the inside stars rotating one way and the outside stars going the other way:

Demo

I don’t think I got it nearly as cool as the BALL x PIT design, but perhaps the foundation is there.

I found this particular setup really fun to play with, as flipping on and off what CSS you apply to the stars and the containers can yield some really beautiful randomized stuff.

Imagine what you could do playing with colors, shadows, size transitions, etc!

Parallax Stars

While I had the star field thing on my mind, it occurred to me to attach the stars to a scroll-driven animation rather than just a timed one. I figured if I split them into three groups of a third each, I could animate the groups at different speeds and get a parallax thing going on.

Demo

This one is maybe easier conceptually as we just make a bunch of star <div>s (I won’t paste the code as it’s largely the same as the Pug example above, just no containers) then place their top and left values randomly.

.star {
  width: random(2px, 5px);
  aspect-ratio: 1;
  background: white;
  position: fixed;
  top: calc(random(0dvh, 150dvh) - 25dvh);
  left: random(0dvh, 100dvw);

  opacity: 0.5;
  &:nth-child(-n + 800) {
    opacity: 0.7;
  }
  &:nth-child(-n + 400) {
    opacity: 0.6;
  }
}

Then attach the stars to a scroll-driven animation off the root.

.star {
  ...

  animation: move-y;
  animation-timeline: scroll(root);
  animation-composition: accumulate;
  --move-distance: 100px;

  opacity: 0.5;
  &:nth-child(-n + 800) {
    --move-distance: 300px;
    opacity: 0.7;
  }
  &:nth-child(-n + 400) {
    --move-distance: 200px;
    opacity: 0.6;
  }
}

@keyframes move-y {
  100% {
    top: var(--move-distance);
  }
}

So each group of stars moves its top position 100px, 200px, or 300px over the course of scrolling the page.

The real trick here is the animation-composition: accumulate; declaration, which says not to animate the top position to the new value, but to take the position each star already has and “accumulate” the new value onto it. Leading me to think:

I think `animation-composition: accumulate` is gonna see more action with `random()`, as it's like "take what you already got as a value and augment it rather than replace it." Here's a parallax thing where randomly-fixed-positioned stars are moved different amounts (with a scroll-driven animation)

Chris Coyier (@chriscoyier.net) 2025-11-14T16:22:46.035Z

Horizontal Rules of Gridded Dots

Intrigued by combining random() with different animation-controlling things, I had the thought to toss steps() into the mix. Like, what if a scroll-driven animation wasn’t smooth along with the scrolling, but instead stuttered the movement of things along only a few “frames”? I considered trying to round() values at first, which is maybe still a possibility somehow, but landed on steps() instead.

The idea here is a “random” grid of dots that then “steps” into alignment as the page scrolls, hopefully creating a satisfying sense of alignment when it gets there, halfway through the page.

Again, Pug is useful for creating a bunch of repetitive elements¹ (but it could be JSX or whatever other templating language):

- var numberOfCells = 100;
- var n = 0;

.hr(role="separator")
  - n = 0;
  while n < numberOfCells
    - ++n;
    .cell

We can make that <div class="hr" role="separator"> a flex parent and then randomize some top positions of the cells to look like:

.hr {
  view-timeline-name: --hr-timeline;
  view-timeline-axis: block;

  display: flex;
  gap: 1px;

  > .cell {
    width: 4px;
    height: 4px;
    flex-shrink: 0;
    background: black;

    position: relative;
    top: calc(random(0px, 60px));

    animation-name: center;
    animation-timeline: --hr-timeline;
    animation-timing-function: steps(5);
    animation-range: entry 50% contain 50%;
    animation-fill-mode: both;
  }
}

Rather than using a scroll scroll-driven animation (lol) we’ll name a view timeline, meaning that each one of our separators triggers the animation based on its own page visibility. Here, it starts when the separator is at least half-visible at the bottom of the screen, and finishes when it’s exactly halfway up the screen.

I’ll scoot those top positions to a shared value this time, and wait until the last “frame” to change colors:

@keyframes center {
  99% {
    background: black;
  }
  100% {
    top: 30px;
    background: greenyellow;
  }
}

And we get:

Demo

Just playing around here. I think random() is an awfully nice addition to CSS, adding a bit of texture to the dynamic web, as it were.

  1. Styling grid cells would be a sweet improvement to CSS in this case! Here where we’re creating hundreds or thousands of divs just to be styleable chunks on a grid, that’s a lot of extra DOM weight that is really just content-free decoration. ↩︎

[syndicated profile] in_the_pipeline_feed

This is a very useful article on phenotypic screening, and is well worth a read. And if you haven’t done this sort of screen before but are looking to try it out, I’d say it’s essential.

The authors (both with extensive industrial experience) go into detail on the factors that can make for successful screens, and the ones that can send you off into the weeds. There are quite a few of the latter! For small molecule screens, you need to be aware that you’re only going to be covering a fraction of the proteome/genome to begin with, no matter how large your library might be under current conditions. And of course as those libraries get larger, the throughput of your assay becomes a major issue. You can cast your net broadly and lower the number of compounds screened, or you can zero in on One Specific Thing and screen them all, at the risk of missing important and useful stuff. Your call! And there are other problems that the paper provides specific examples of - the way that your compounds will (probably) not distinguish well between related proteins in a family, and the opposite problem of how some of them distinguish so sharply between (say) human and rodent homologs that your attempts at translational assays break down. 

For genomic-based screens, you have to be cognizant of the time domain you’re working in. On the one hand, the expression of a particular gene may be a rather short-lived phenomenon (and only under certain conditions which you may not be aware of), and on the other hand you might have a delayed onset of any effects of your compounds as they have to work their way through the levels of transcription, translation, protein stability, and so on. You can definitely run into genetic redundancies that will mask the activity of some compounds, so take the existence of false negatives as a given. And you should always be aware that the proteins whose levels or conditions you’re eventually modifying probably have several functions in addition to whatever their main “active site” function might be - partner proteins, allosteric effects, scaffolding, feedback into other transcriptional processes, and more. Another consideration: it may be tempting to focus on gene knockouts or knockdowns, and you can often get a lot done that way, but that ignores the whole universe of activation mechanisms. There are more!

And in general, you’re going to have to ask yourself - be honest - what your best workflow is and what you mean by “best”. Is what you’re proposing going to fit well with cellular or animal models of disease, or are you going to be faced with bridging that, too (not recommended)? Do you really have the resources (equipment and human), the time, and the money to do a reasonable job of it all? Another large-scale question, if you’re really thinking of drug discovery by this route, is whether you (or your organization, or your funders) have the stomach for what is a fairly common outcome: you find hits, you refine them, you end up with a list of interesting compounds that do interesting things. . .and no one has the nerve to make the jump into the clinic if there isn’t a well-worked-out translational animal model already in place. You’re not going to discover and validate one of those from scratch along the way, so if there isn’t such a model out there already you’d better be ready for a gut check at the end of the project.

I like to say that a good phenotypic assay is a thing of beauty. But I quickly add that those are hard to realize, and that a bad phenotypic assay is just about the biggest waste of time and resources that you can imagine. Unfortunately, the usual rules apply: there are a lot more ways to do this poorly than to do it well, and many of those done-poorly pathways are temptingly less time- and labor-intensive than the useful ones.

LLMs for Medical Practice: Look Out

Nov. 17th, 2025 01:50 pm
[syndicated profile] in_the_pipeline_feed

As regular readers well know, I get very frustrated when people use the verb “to reason” in describing the behavior of large language models (LLMs). Sometimes that’s just verbal shorthand, but both in print and in person I keep running into examples of people who really, truly, believe that these things are going through a reasoning process. They are not. None of them. (Edit: for a deep dive into this topic, see this recent paper).

To bring this into the realm of medical science, have a look at this paper from earlier this year. The authors evaluated six different LLM systems on their ability to answer 68 various medical questions. The crucial test here, though, was that each question was asked twice, in two different ways. All of the prompts started by saying “You are an experienced physician. Provide detailed step-by-step reasoning, then conclude with your final answer in exact format Answer: [Letter]”. The prompt was written in that way because the questions were detailed medical queries, each followed by a list of likely options/diagnoses/recommendations labeled with letters, and the LLM was asked to choose among these.

The first time the question was asked, one of the five options was “Reassurance”, i.e. “Don’t do any medical procedure because this is not actually a problem”. Any practicing physician will recognize this as a valid option at times! But the second time the exact same question was posed, the “reassurance” option was replaced by a “None of the other answers” option. Now, the step-by-step clinical reasoning that one would hope for should not be altered in the slightest by that change, and if “Reassurance” was in fact the correct answer, then “None of the above” should be the correct answer when phrased the second way (rather than the range of surgical and other interventions proposed in the other choices).

Instead, the accuracy of the answers across all 68 questions dropped notably in every single LLM system when presented with a “None of the above” option. DeepSeek-R1 was the most resilient, but still degraded. The underlying problem is clear: no reasoning is going on, despite some of these systems being billed as having reasoning ability. Instead, this is all pattern matching, which presents the illusion of thought and the illusion of competence.

This overview at Nature Medicine covers a range of such problems. The authors here find that the latest GPT-5 version does in fact make fewer errors than other systems, but that’s like saying that a given restaurant has overall fewer cockroaches floating in its soup. That’s my analogy, not theirs. The latest models hallucinate a bit less than before and break their own supposed rules a bit less, but neither problem has been reduced to acceptable levels. The acceptable level of cockroaches in the soup pot is zero.

As an example of that second problem, the authors here note that GPT-5, like all the other LLMs, will violate its own instructional hierarchy to deliver an answer, and without warning users that this has happened. Supposed safeguards and rules at the system level can and do get disregarded as the software rattles around searching for plausible text to deliver, a problem which is explored in detail here. This is obviously not a good feature in an LLM that is supposed to be dispensing medical advice - as the authors note, such systems should have high-level rules that are never to be violated, things like “Sudden onset of chest pain = always call for emergency evaluation” or “Recommendations for dispensing drugs on the attached list must always fit the following guidelines”. But at present it seems impossible for that “always” to actually stick under real-world conditions. No actual physician whose work was this unreliable would or should be allowed to continue working.

LLMs are text generators, working on probabilities of what their next word choice should be based on what has been seen in their training sets, then dispensing answer-shaped nuggets in smooth, confident, grammatical form. This is not reasoning and it is not understanding - at its best, it is an illusion that can pass for them. And that’s what it is at its worst, too.

[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

Alex MacArthur shows us there are a lot of ways to break up long tasks in JavaScript. Seven ways, in this post.

That’s a senior developer thing: knowing there are lots of different ways to do things, all with different trade-offs. Depending on what you need to do, you can home in on a solution.
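
One of the classic approaches in that family is yielding to the event loop between chunks (a sketch, not Alex’s exact code):

// Process a big array without blocking rendering: do a chunk, then
// yield back to the browser before continuing.
async function processInChunks(items, processItem, chunkSize = 100) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(processItem);
    // Yield so input handling and painting can happen between chunks.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}

// Usage:
// await processInChunks(hugeArray, (item) => doExpensiveWork(item));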
