[syndicated profile] acoup_feed

Posted by Bret Devereaux

Hey folks, Fireside this week! As I noted a couple of weeks ago, things are probably going to get more than a little fireside-y over the next few weeks, simply because of the start of the semester – and a semester in which I am undertaking a set of entirely new preps (that is, teaching classes I have not taught before), in this case Latin (first and second semester). That demands a bunch of time as you are planning each class meeting and assignment for the first time.1

Percy, gazing upon his domain with what seems to me to be something between ennui and disdain, but of course that’s just how his face is.

In any case, I thought I would muse briefly this week about how we defend history as a field and how that ties into the way we teach and talk about history.

The great disconnect here is that, when asked, the public regularly notes that they think history is important, but when their opinions are processed into political outcomes it is clear they do not think history departments or historians or really even history teachers are important. That disconnect comes to a head with the notion that large language models (LLMs like ChatGPT) will replace historians, because I think both effects speak to the same cause, which is that while the public understands science as a process of discovery, they understand history – incorrectly – as purely a process of transmission.

When it comes to actually engaging with the public, teaching our students and defending our field, this is the rub. There is plenty of public support for history as a teaching field – which AI-boosters imagine LLMs can supplant – but not for history as a research field, which in turn betrays a crucial misunderstanding of what history is.

Put briefly, I think the majority of the public, who have, after all, never gone beyond high school history class or at most a 101-level collegiate history survey, hold a history-as-scripture view of the field. In this view, history is a set of basically known and static information – names, dates and so on – which does not change over time, but is merely transmitted, via textbooks and introductory courses, from one generation to the next, the way a religious text is transmitted. That view is why folks get so upset when historians say some of the history they learned in high school we now know is wrong, because it violates the basic principle they understand historical knowledge to function on. It is also why they see no real connection between historians doing research and history as a body of knowledge: ‘well, we basically know everything about the past, right?’

If you are reading this, I don’t need to explain that we do not basically know everything about the past (right?) but are instead discovering the past continually, both improving our knowledge of the deep past and coming to understand the new past which time, as is its nature, generates at a rate of one minute per minute.

Instead, what I want to muse on is why we are so bad at communicating this to the public.

I think the problem begins with how we teach high school and extends through how we teach it in undergraduate courses and discuss it with the public. I was struck that, when I took science classes in high school, the narrative of early scientists was a key part of the early weeks of the course. Invariably classes began with stories about figures like Copernicus, Galileo, Kepler, Redi and Spallanzani, Newton, Einstein and so on. Those narratives followed a familiar pattern: first there was what people (incorrectly) believed and why they believed it, then the experiment the hero-scientist devised to test that belief and finally the new knowledge that was earned – often with a proviso for figures like Newton that even that model was no longer fully current, having been superseded itself.

In short, science classes pair a description of our best knowledge at the present with a story of discovery of how we came to know what we know now, with the clear implication that this method is how we will continue to discover new things.

By contrast, in history this same story (we call it historiography – the history of the history) doesn’t generally attract sustained attention until graduate school. Students learn the names of rulers and thinkers and key figures, but they rarely learn the names of historians. Likewise, instead of being presented with a process of historical discovery, they are given a narrative of human development – it is not until advanced undergraduate courses that they begin to engage meaningfully with how we know these things. In my own experience the exceptions to this were almost invariably stories about the knowledge-making achievements of other disciplines – archaeology and linguistics, mostly – rather than narratives of historical investigation. So it is not surprising that many students at those introductory levels come away assuming that the narrative is pretty much fixed and has been known and understood effectively forever.

Instead, students of history generally only begin to learn even the basics of how their history came to be – again, that’s historiography – when they get to the graduate level. And that’s simply too late. Sure, you can’t present a mature historiography of Alexander the Great at the 100-level, but you can sprinkle the standard narrative with (accurate) stories of how our understanding of, say, Greek history has changed and improved.2

I think part of the reason for this is that historians are trained to be really skeptical of heroic narratives, because when we meet them in our sources, they’re usually nonsense. We’ve talked already about the flaws of ‘Great Man’ history – no surprise that historians are thus skeptical of ‘Great Historian’ history. And yet it is certainly fair to talk about our understanding of the past as something that has progressed substantially. A cutting-edge textbook on antiquity or the Middle Ages written in the 1950s or the 1900s would be remarkably wrong today (not least because it would likely feature some pretty bald racism). You’d probably have an oversimplified, over-generalized model of ‘feudalism,’ for instance, and have Rome treated as if it had an early modern economy. The Greeks would arrive, in that textbook, in Greece at the end of the bronze age, as ‘Dorian invaders,’ when we know they had already been in Greece for centuries at that point and did not displace the Mycenaeans because they were the Mycenaeans (before the 1880s, those would simply be blank pages).

We do, in fact, know more, indeed a lot more about the past than we did fifty, seventy, a hundred years ago.

As historians looking to justify our field as a research field – not merely a history-as-scripture ‘teaching’ field that transmits the ‘received truth’ about the past – we have to transition not just to telling stories about the past but to telling stories about how the past was discovered. I’ve tried to do that more and more here on the blog, foregrounding methods (like modeling in our recent series on peasants) and also at points progress in historical debates (as with Alexander and the Fall of Rome).

But I am one very small voice in the digital wilderness. I think this problem only begins to change for historians if we change the way we teach introductory level history courses, because that is how we change the way history gets taught at the high school level and thus how the public at large understands history. Not just a story about the past but a story about how we have come to know the past. That means changing our courses but also our teaching materials to better signal the role of historians – and for this to stick with students, specific historians – in making history. After all, no history professor has an ironclad grip on the historiography of every period they teach – especially as we often have to teach very widely – so this material needs to be embedded in things like college textbooks to be available for teaching.

And it means letting ourselves have narratives of ‘hero historians’ to match the ‘hero scientists,’ even if, like Newton, we might caution that the historical vision of those ‘hero historians’ is not above further discovery and revision.

On to Recommendations:

Naturally, with a topic like that leading, there is higher education news to discuss and it is quite bad. Last fireside, we mentioned the combined moral and financial crisis at the University of Chicago. We now have a clearer look at what appears to be a massive pause – perhaps permanent – to a wide range of humanities programs there. Among other things, those cuts will make it nearly impossible to study cuneiform – the dominant writing system in the Near East from c. 3000 BC to at least the fourth century BC (with the latest dated cuneiform inscription dating to the reign of Vespasian in 75 AD) – anywhere in the United States.

UChicago is hardly alone, as university trustees and administrators are using the financial pressure created by grant cuts to the sciences to justify the further cuts to the humanities they already wanted to make. Thus deep cuts falling disproportionately on the humanities at the University of Utah, cuts at the University of Oregon, particularly targeting language programs, cuts at Virginia Tech, including shutting down the Religion and Culture program, a ‘prioritization’ plan that will almost certainly slash humanities to the bone at the University of North Carolina, and on and on. What I think needs to be reiterated here is that this is a finance problem in the sciences, since it is their grant money being disrupted (they’re also disproportionately impacted by the drop in international students). When the humanities have a finance issue, they cut the humanities, but when the sciences have a finance issue, they still cut the humanities. Frankly, I do not think the slide can be arrested, because this ideology – which believes that only the sciences are really important fields – has been baked deep into multiple generations; we will have to rebuild these fields and their public support largely from scratch (and should begin doing so now).

For further reading on the wave of cuts to the humanities, note also Annette Yoshiko Reed’s essay on the topic.

Also worth reading this week is Sarah E. Bond’s discussion in Hyperallergic of the new open-access anthology How Republics Die: Creeping Authoritarianism in Ancient Rome and Beyond (2025). As Bond notes, it is remarkably rare for ‘consolidated’ democracies to de-consolidate, but the Roman Republic (arguably a democratic-ish government) did so and so provides a rare piece of comparative evidence to think about how such deconsolidation happens (and thus might be reversed). How Republics Die is focused on this question and features an impressive list of contributors and contributions and is also well worth your time. The unfortunate thing about the Roman Republic, of course, is that the Late Republic is a story of failure, rather than of success in maintaining democratic norms, but we can still take guidance from understanding those failures better. We must try, for as Bond notes, “apathy will always be to the advantage of the autocrat.” The past is written; the future is not.

Outside of higher education, I suspect most of you are already well aware of Perun’s channel on YouTube, but I thought I would highlight last week’s video on the long-term economic ramifications of the human toll of war. The entire analysis is worth listening to. What equally struck me is how much this is, generally speaking, a change. For reasons we’ve been beginning to observe in our series on peasants, pre-modern populations (with high birthrates offset by high mortality) ‘bounce back’ from losses in war relatively quickly3 and they also have a lot of inefficiently utilized labor. As a result, the tradition of statecraft not just in Europe, but all the world over, often treated peasant manpower as an almost infinitely replaceable resource. But modern industrial societies utilize their labor far more efficiently and have family patterns which don’t ‘bounce back’ as quickly (if they did, we’d have trouble controlling population growth), which means the scars of war on a population last a lot longer and – as Perun notes – have ‘echoes’ because killing a large part of a generation in their childbearing years reduces the number of children in the next. All of which serves as another component in the thesis that war is no longer a profitable business – and yet our state strategies often continue to falsely assume that it is.

A late addition! I want to also recommend this video by Jamelle Bouie (the NYT columnist) asking “How many slaveholders were there, really?” in the American South before the Civil War. It is a fantastic exercise in cutting data different ways to reveal assumptions about slaveholding societies. In particular, Bouie notes – and the evidence backs him on this – that while a small percentage of individuals owned slaves, a very large percentage (upwards of 30%) of free households did, and beyond that many free persons not in slaveowning households were employed ‘in the slavery business,’ as it were, as traffickers, overseers, ‘breakers,’ and so on. The video thus provides a really impressive brief exercise in realizing how thorough the penetration of slavery as an inhuman institution can be in a society. I think it is a useful thing to think about when we think about ancient slavery as well: the percentage of the total population enslaved in ancient Greece was probably marginally higher than in the American South, and for Roman Italy, modestly lower, so we ought to assume similar levels of penetration (and the horror and atrocity that comes with that).

For this week’s book recommendation, I want to recommend a book that keeps showing up in my bibliographies and citations here but that I haven’t recommended yet, which is A. Lintott’s The Constitution of the Roman Republic (1999). However, I want to add some caveats up front, as CRR is a bit unusual in terms of my recommendations. First, it is a bit pricier than usual (the paperback runs around $50, I think), but second and more to the point, this is a more utilitarian, scholarly book than I normally recommend here. It is not, to be clear, badly written (far from it), but it is a bone-dry utilitarian book that is exceedingly clear but not particularly lively or engaging. That is perfect for a reference work – which is fundamentally what CRR is – but I thought the warning would be fair: this is not a page-turner.

What CRR is, however, is the only recent complete overview of the functioning of the Roman Republic (focused on the Middle and Late Republic) in English.4 If you want to understand how the Roman Republic worked in greater detail than what you would get in an introductory textbook (or our own How to Roman Republic series), this is where you have to go. Lintott’s real question is about the nature of the republic, which he illustrates by cataloging its institutions and defining their functions and powers. The book is thus structured as a sort of frame: the first few short chapters introduce the question of the nature of the republic (and the difficulties of Polybius’ schematic of it), before the meat of the book works through the institutions of the republic – assemblies, the senate, magistrates (high and low), the courts, religion – in sequence to understand how all of the wheels and gears fit together. And then finally at the end, Lintott turns to Polybius, the nature of the republic and its later reception. That structure makes the book really handy as a reference volume – the scholar or student afflicted with a sudden question about the senate may easily flip to the senate chapter and typically find an answer.

Of course, as we’ve noted, the Romans had no written constitution: the Roman constitution was, as Lintott notes, mostly just what the Romans did and found traditional and right. And yet the system had rules, some written, many unwritten, by which it functioned. Going beyond the summaries provided by Polybius requires assembling and analyzing a huge body of individual examples of behavior within the political system, drawn from all over our sources for the republic (Livy and Cicero make up the largest chunk, though). One great virtue of Lintott’s is that he is open both about chronological variation and also about uncertainty, as there are certainly cases where we’re not entirely clear on how something functioned.

For a reader looking to learn more about the overall shape of the Roman political system, either out of curiosity or as a means of understanding current historical arguments about it, Lintott is the last stop before one reaches the raw material of the sources and extremely narrow and focused scholarly arguments about their interpretation. It is thus something of an achievement that the book that results is, if quite dry, easy enough to digest for the lay reader or early student of Roman history. As a result, Lintott’s work is one of those essential pieces of the library of basically anyone interested in ancient history.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

So far, we’ve been working on an alternate design for tracking pointers, but we found that it had the unfortunate property of having potentially-throwing constructors and move assignment operations.

We can make these operations non-throwing by removing the need for a trackable object always to have a ready-made tracker. Instead, we can create a tracker on demand the first time somebody asks to track it. The exception doesn’t go away, but it is deferred to the time a tracking pointer is created. This is arguably a good thing because it makes tracking pointers “pay for play”: You don’t allocate a tracker until somebody actually needs it.

template<typename T>
struct trackable_object
{
    trackable_object() noexcept = default;
    ~trackable_object()
    {
        set_target(nullptr);
    }

    // Copy constructor: Separate trackable object
    trackable_object(const trackable_object&) noexcept :
        trackable_object()
    { }

    // Move constructor: Transfers tracker
    trackable_object(trackable_object&& other) noexcept :
        m_tracker(other.transfer_out()) {
        set_target(owner());
    }

    // Copying has no effect on tracking pointers
    trackable_object&
        operator=(trackable_object const&) noexcept
    {
        return *this;
    }

    // Moving abandons current tracking pointers and
    // transfers tracking pointers from the source
    trackable_object&
        operator=(trackable_object&& other) noexcept {
        set_target(nullptr);             
        m_tracker = other.transfer_out();
        set_target(owner());
        return *this;
    }

    tracking_ptr<T> track() /* noexcept */ {
        ensure_tracker();
        return { m_tracker };
    }

    tracking_ptr<const T> track() const /* noexcept */ {
        ensure_tracker();
        return { m_tracker };
    }

    tracking_ptr<const T> ctrack() const /* noexcept */ {
        ensure_tracker();
        return { m_tracker };
    }

private:
    mutable std::shared_ptr<T*> m_tracker;

    T* owner() const noexcept {
        return const_cast<T*>(static_cast<const T*>(this));
    }

    void ensure_tracker() const                       
    {                                                 
        if (!m_tracker)                               
        {                                             
            m_tracker = std::make_shared<T*>(owner());
        }                                             
    }                                                 

    std::shared_ptr<T*> transfer_out()
    {
        return std::move(m_tracker);
    }

    void set_target(T* p)
    {
        if (m_tracker)
        {
            *m_tracker = p;
        }
    }
};

We make the m_tracker mutable because ensure_tracker() might be asked to create a tracker on demand from a const reference.

Creating the tracker on demand removes the exception from the default constructor, the move and copy constructors, and the move and copy assignments. The potentially-throwing behavior moves to the track() and ctrack() methods, but that can be sort of justified on the principle of “pay for play”.
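As a minimal sketch of the on-demand idea (the widget type and this pared-down class are illustrative stand-ins, using a bare std::shared_ptr<widget*> in place of the full tracking_ptr machinery from the series):

```cpp
#include <memory>
#include <utility>

// Illustrative sketch only: the tracker shared_ptr is allocated
// lazily in track(), so the default and move constructors stay
// noexcept. The allocation (and any exception) happens only when
// somebody actually asks to track the object.
struct widget
{
    widget() noexcept = default;
    ~widget() { if (m_tracker) *m_tracker = nullptr; }

    // Move transfers the tracker (no allocation, hence noexcept).
    widget(widget&& other) noexcept
        : m_tracker(std::move(other.m_tracker))
    {
        if (m_tracker) *m_tracker = this;
    }

    // track() allocates on first use; this is where the potential
    // exception moved to ("pay for play").
    std::shared_ptr<widget*> track()
    {
        if (!m_tracker) m_tracker = std::make_shared<widget*>(this);
        return m_tracker;
    }

private:
    std::shared_ptr<widget*> m_tracker;
};
```

An object that is never tracked never allocates; once track() has been called, moves retarget the shared cell so existing trackers follow the object.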

Now, if you look more closely at what we have, you may notice that the shared_ptr is overkill. We don’t use weak pointers, and all of our operations are single-threaded, so the atomic memory barriers inside the shared_ptr operations are not necessary. We’ll create a “limited-use single-threaded” version of the shared_ptr next time.

The post Thoughts on creating a tracking pointer class, part 14: Nonthrowing moves with the shared tracking pointer appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Last time, we created a tracking pointer class based on std::shared_ptr. We found a problem with the move-assignment operator: It didn’t satisfy the strong exception guarantee:

    trackable_object&
        operator=(trackable_object&& other) {
        set_target(nullptr);
        m_tracker = other.transfer_out();
        set_target(owner());
        return *this;
    }

If an exception occurs after we set the target to nullptr we exit with all the tracking pointers expired, which violates the rule that an exception leaves the object state unchanged.

To fix this, we cannot make any irreversible changes until we have passed the point where the last exception could be raised. The exception occurs in the call to new_tracker() inside transfer_out(). When that happens, the exchange into other.m_tracker does not occur, so m_tracker is safely unchanged. So we just need to delay expiring the old tracking pointers until after we have successfully transferred out.

    trackable_object&
        operator=(trackable_object&& other) {
        auto inbound = other.transfer_out();
        set_target(nullptr);
        m_tracker = inbound;
        set_target(owner());
        return *this;
    }

We can code-golf this by using std::exchange to replace the m_tracker while saving the old value, and then updating the target of that tracker manually.
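For reference, std::exchange assigns the new value and hands back the old one, which is what lets the replacement and the save happen in a single expression (a trivial sketch; the helper name is illustrative, not from the post):

```cpp
#include <utility>

// demo_exchange is an illustrative helper: std::exchange(obj, v)
// assigns v to obj and returns obj's previous value.
inline int demo_exchange()
{
    int slot = 1;
    int old = std::exchange(slot, 2); // slot is now 2
    return old * 10 + slot;           // old is the previous value, 1
}
```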

    trackable_object&
        operator=(trackable_object&& other) {
        auto old = std::exchange(m_tracker, other.transfer_out());
        *old = nullptr;                                           
        set_target(owner());
        return *this;
    }

And another iteration of code golfing to inline the result:

    trackable_object&
        operator=(trackable_object&& other) {
        *std::exchange(m_tracker, other.transfer_out()) = nullptr;
        set_target(owner());
        return *this;
    }

We noted last time that the constructors are also potentially-throwing. Many C++ algorithms and classes are significantly more efficient if they know that move operations cannot throw, so making the move constructor and move assignment operator potentially-throwing could end up being quite expensive. And you probably expect operations like vector::insert and std::sort to move elements rather than copy them. Furthermore, many collection operations (such as vector::insert and vector::erase) leave the vector in an “unspecified” state if a move assignment throws an exception.¹
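The cost is easy to observe: std::vector moves elements during reallocation only when the move constructor is noexcept, and falls back to copying otherwise (via std::move_if_noexcept). A sketch with illustrative counter types, not from the series:

```cpp
#include <utility>
#include <vector>

// Two types identical except for noexcept on the move constructor.
// The counters record what reallocation actually did.
static int g_moves = 0;
static int g_copies = 0;

struct safe_move
{
    safe_move() = default;
    safe_move(safe_move&&) noexcept { ++g_moves; }
    safe_move(const safe_move&) { ++g_copies; }
};

struct risky_move
{
    risky_move() = default;
    risky_move(risky_move&&) { ++g_moves; }   // not noexcept
    risky_move(const risky_move&) { ++g_copies; }
};

// Force a reallocation and report (moves, copies).
template<typename T>
inline std::pair<int, int> realloc_cost()
{
    g_moves = g_copies = 0;
    std::vector<T> v(4);
    v.reserve(v.capacity() + 1); // reallocation relocates all 4 elements
    return { g_moves, g_copies };
}
```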

With the throwing move assignment operator, we have to be careful to consider the state of the trackable object if transfer_out() fails. In that case, we have already disconnected the trackers, so a failure to transfer out nevertheless breaks tracking pointers, which violates the strong exception guarantee.

To fix that, we don’t abandon the old tracking pointers until we are sure we can get new ones.

Next time, we’ll make the constructors and move-assignment operations non-throwing, though it comes at a cost.

¹ Another side effect is that it prevents trackable objects from being nothrow-swappable, since swapping is based on move operations. We could add a custom swap method and a custom overload of std::swap, but that also creates the onus on the derived class to provide the same customizations on itself so that it can forward the methods into trackable_object.
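The footnote's claim can be checked directly with the type traits: std::swap is noexcept only when move construction and move assignment both are, so a potentially-throwing move poisons swap as well (the struct name below is illustrative):

```cpp
#include <type_traits>

// A type whose move operations exist but are potentially throwing.
struct throwing_moves
{
    throwing_moves() = default;
    throwing_moves(throwing_moves&&) { }                          // may throw
    throwing_moves& operator=(throwing_moves&&) { return *this; } // may throw
};

static_assert(std::is_swappable_v<throwing_moves>);          // swap compiles...
static_assert(!std::is_nothrow_swappable_v<throwing_moves>); // ...but may throw
```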

The post Thoughts on creating a tracking pointer class, part 13: Restoring the strong exception guarantee appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

The tracking pointer designs we’ve been using so far have had O(n) complexity on move, where n is the number of outstanding tracking pointers. But we can reduce this to O(1) by the classic technique of introducing another level of indirection.

What we can do is give every trackable object a single shared_ptr<T*> (which we call the “tracker”), which is shared with all tracking pointers. That way, when the object is moved, we can update that single shared_ptr<T*>, and that updates the pointer for all the tracking pointers.
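The effect of the extra indirection can be seen with a bare std::shared_ptr<T*>: a single write to the shared cell retargets every outstanding copy at once (a sketch; the names are illustrative):

```cpp
#include <memory>

// Many "tracking pointers" share one cell; retargeting (or expiring)
// is one write, no matter how many copies exist.
inline int indirection_demo()
{
    int a = 1, b = 2;
    auto tracker = std::make_shared<int*>(&a);
    auto view1 = tracker;        // handing out more trackers is O(1)
    auto view2 = tracker;

    *tracker = &b;               // O(1): one write moves the target...
    int sum = **view1 + **view2; // ...and every view sees it (2 + 2)

    *tracker = nullptr;          // expiry works the same way
    return (*view1 == nullptr) ? sum : -1;
}
```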

template<typename T> struct trackable_object;

// struct tracking_node { ... };
template<typename T>                                            
struct tracking_ptr_base                                        
{                                                               
    tracking_ptr_base() noexcept = default;                     
                                                                
private:                                                        
    friend struct trackable_object<T>;                          
    tracking_ptr_base(std::shared_ptr<T*> const& ptr) noexcept :
        m_ptr(ptr) { }                                          
                                                                
protected:                                                      
    std::shared_ptr<T*> m_ptr;                                  
};                                                              

template<typename T>
struct tracking_ptr : tracking_ptr_base<std::remove_cv_t<T>>
{
private:
    using base = tracking_ptr_base<std::remove_cv_t<T>>;
    using Source = std::conditional_t<std::is_const_v<T>,
        base, tracking_ptr<std::remove_cv_t<T>>>;

public:
    T* get() const { return this->m_ptr ? *this->m_ptr : nullptr; }

    using base::base;
    tracking_ptr(Source const& other) : base(other) {}
    tracking_ptr(Source&& other) : base(std::move(other)) {}

    tracking_ptr& operator=(Source const& other) {
        static_cast<base&>(*this) = other;
        return *this;
    }
    tracking_ptr& operator=(Source&& other) {
        static_cast<base&>(*this) = std::move(other);
        return *this;
    }
};

The tracking pointer (via the tracking pointer base) holds a copy of the tracker shared pointer. The small catch here is that the tracker m_ptr might be null if the tracking pointer was default-constructed or has been moved-from, so the get method needs to check for a non-null pointer before dereferencing it.

template<typename T>
struct trackable_object
{
    trackable_object() /* noexcept */ = default;

    ~trackable_object()
    {
        set_target(nullptr);
    }

    // Copy constructor: Separate trackable object
    trackable_object(const trackable_object&) /* noexcept */ :
        trackable_object()
    { }

    // Move constructor: Transfers tracker
    trackable_object(trackable_object&& other) /* noexcept */ :
        m_tracker(other.transfer_out()) {
        set_target(owner());
    }

    // Copying has no effect on tracking pointers
    trackable_object&
        operator=(trackable_object const&) noexcept
    {
        return *this;
    }

    // Moving abandons current tracking pointers and
    // transfers tracking pointers from the source
    trackable_object&
        operator=(trackable_object&& other) /* noexcept */ {
        set_target(nullptr);
        m_tracker = other.transfer_out();
        set_target(owner());
        return *this;
    }

    tracking_ptr<T> track() noexcept {
        return { m_tracker };
    }

    tracking_ptr<const T> track() const noexcept {
        return { m_tracker };
    }

    tracking_ptr<const T> ctrack() const noexcept {
        return { m_tracker };
    }

private:
    T* owner() const noexcept {
        return const_cast<T*>(static_cast<const T*>(this));
    }

    std::shared_ptr<T*> new_tracker()
    {
        return std::make_shared<T*>(owner());
    }

    std::shared_ptr<T*> transfer_out()
    {
        return std::exchange(m_tracker, new_tracker());
    }

    void set_target(T* p) noexcept
    {
        *m_tracker = p;
    }

    std::shared_ptr<T*> m_tracker = new_tracker();
};

The trackable object starts out with a new tracker that points to the newly-constructed object. On destruction, the trackable object nulls out the backpointer in the tracker, which causes any existing tracking pointers to expire.

As with our other trackable object implementations, copying a trackable object has no effect on the tracker, and moving it transfers the tracker to the new object, abandoning any existing tracker. When we move the tracker to the new object, we need to leave a fresh (not-yet-shared-with-anybody) tracker behind so that the moved-from object is still trackable if anybody asks.

The helper method new_tracker() makes a fresh tracker that tracks the current object. The helper method transfer_out() relinquishes the current tracker (presumably so it can be given to the moved-to object) and sets up a fresh new tracker.

Although this improves the complexity of moving a trackable object to constant time, the requirement that m_tracker be non-empty means that the constructors and the move-assignment operators are now throwing, because new_tracker() could fail.

So now we have to look at whether the inability to create a new tracker could cause us to violate our invariants.

An exception in the constructors doesn’t affect our invariants because we simply decided not to exist at all.

An exception in the move assignment operator is more troublesome. If transfer_out() fails, we have already disconnected the trackers, so a failure to transfer out causes existing tracking pointers to the destination to expire. This violates the strong exception guarantee, which says that if an exception occurs, the object remains unchanged.

We’ll fix this next time.

The post Thoughts on creating a tracking pointer class, part 12: A shared tracking pointer appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Last time, we made sure that tracking pointers to const objects couldn’t be converted into tracking pointers to non-const objects, but I noted that fixing this introduced a new problem.

We fixed the problem by introducing two new constructors that allow construction of either a tracking_ptr<const T> or tracking_ptr<T> from tracking_ptr<T>. If the destination is a tracking_ptr<T>, then the copy or move construction from tracking_ptr<T> merely overrides the copy or move construction inherited from the base class, so there is no redeclaration conflict.

The problem is that in the case of tracking_ptr<T>, the new constructors are copy and move constructors since they construct from another instance of the same type. And if you declare a move constructor, then the copy and move assignment operators are implicitly declared as deleted.
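The rule can be verified with the standard type traits: declaring a move constructor suppresses the copy operations and the implicit move assignment (the struct name is illustrative):

```cpp
#include <type_traits>

// A type that declares only a move constructor.
struct move_ctor_only
{
    move_ctor_only() = default;
    move_ctor_only(move_ctor_only&&) noexcept { }
};

static_assert(std::is_move_constructible_v<move_ctor_only>);
// The copy constructor and copy assignment are implicitly deleted,
// and no move assignment is implicitly declared:
static_assert(!std::is_copy_constructible_v<move_ctor_only>);
static_assert(!std::is_copy_assignable_v<move_ctor_only>);
static_assert(!std::is_move_assignable_v<move_ctor_only>);
```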

So we need to bring them back.

template<typename T>
struct tracking_ptr : tracking_ptr_base<std::remove_cv_t<T>>
{
private:
    using base = tracking_ptr_base<std::remove_cv_t<T>>;
    using MP = tracking_ptr<std::remove_cv_t<T>>;

public:
    T* get() const { return this->tracked; }

    using base::base;
    tracking_ptr(MP const& other) : base(other) {}
    tracking_ptr(MP&& other) : base(std::move(other)) {}

    tracking_ptr& operator=(tracking_ptr const&) = default;
    tracking_ptr& operator=(tracking_ptr&&) = default;     
};

But now we have the reverse problem: If you declare a copy or move assignment, then the copy and move constructors are implicitly declared as deleted.

So we have to bring those back too:

template<typename T>
struct tracking_ptr : tracking_ptr_base<std::remove_cv_t<T>>
{
private:
    using base = tracking_ptr_base<std::remove_cv_t<T>>;
    using MP = tracking_ptr<std::remove_cv_t<T>>;

public:
    T* get() const { return this->tracked; }

    using base::base;
    tracking_ptr(tracking_ptr const& other) : base(other) {}      
    tracking_ptr(tracking_ptr&& other) : base(std::move(other)) {}
    tracking_ptr(MP const& other) : base(other) {}
    tracking_ptr(MP&& other) : base(std::move(other)) {}

    tracking_ptr& operator=(tracking_ptr const&) = default;
    tracking_ptr& operator=(tracking_ptr&&) = default;
};

And now we have the double-definition problem we saw last time: In the case of tracking_ptr<T> where T is non-const, we have two declarations for the same copy constructor (and two for the same move constructor), which is not allowed.

There’s another problem: In the case of assigning a tracking_ptr<const T> to a tracking_ptr<T>, we actually perform it in two steps: First we convert the tracking_ptr<const T> to a tracking_ptr<T>, and then we assign the tracking_ptr<T> to its destination. This creates a temporary tracking_ptr<T> that gets linked into the chain, and then unlinked. Can we avoid that inefficiency and just assign it directly?

It turns out the same trick works for both problems.

template<typename T>
struct tracking_ptr : tracking_ptr_base<std::remove_cv_t<T>>
{
private:
    using base = tracking_ptr_base<std::remove_cv_t<T>>;
    using Source = std::conditional_t<std::is_const_v<T>,
        base, tracking_ptr<std::remove_cv_t<T>>>;        

public:
    T* get() const { return this->tracked; }

    using base::base;
    tracking_ptr(Source const& other) : base(other) {}
    tracking_ptr(Source&& other) : base(std::move(other)) {}

    tracking_ptr& operator=(Source const& other) {   
        static_cast<base&>(*this) = other;           
        return *this;                                
    }                                                
    tracking_ptr& operator=(Source&& other) {        
        static_cast<base&>(*this) = std::move(other);
        return *this;                                
    }                                                
};

If creating a tracking_ptr<const T>, then we accept assignment or construction from either tracking_ptr<T> or tracking_ptr<const T>. But if creating a tracking_ptr<T> where T is non-const, then we accept assignment or construction only from another tracking_ptr<T>. This is expressed in the definition of Source, which says that tracking pointers to const things can accept the base type, which means that it will accept any type of tracking pointer to that thing (either to a const or non-const thing). But if it’s a tracking pointer to a non-const thing, then it accepts only tracking pointers to the same non-const thing.

We also have to write out the copy and move assignment operators. We could use = default in the case where the Source is equal to tracking_ptr<T>, but if dealing with a tracking pointer to a const thing, the Source is the base, and the compiler doesn’t know how to default-assign that. So we just write it out explicitly, which works for both cases.

So are we done? I guess.

But wait.

Recall that the complexity of moving a trackable object is linear in the number of tracking pointers because we have to update all the tracking pointers to point to the new location of the moved object. But we can get the cost down to O(1) if we are willing to make some concessions. We’ll look at this alternate design next time.

The post Thoughts on creating a tracking pointer class, part 11: Repairing assignment appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Last time, we added the ability to convert tracking pointers to non-const objects into tracking pointers to const objects, but we noted that there’s a problem.

The problem is that our change accidentally enabled the reverse conversion: From const to non-const.

We want to be able to convert non-const to const, but not vice versa, so let’s require the source to be non-const.

template<typename T>
struct tracking_ptr : tracking_ptr_base<std::remove_cv_t<T>>
{
private:
    using base = tracking_ptr_base<std::remove_cv_t<T>>;
    using MP = tracking_ptr<std::remove_cv_t<T>>;

public:
    T* get() const { return this->tracked; }

    using base::base;
    tracking_ptr(MP const& other) : base(other) {}      
    tracking_ptr(MP&& other) : base(std::move(other)) {}
};

The conversion operators now require a tracking pointer to a non-const object (which to reduce typing we call MP for mutable pointer). The const-to-const version is inherited from the base class.

Inheriting the constructors is particularly convenient because it avoids redefinition conflicts. If we didn’t have inherited constructors, we would have started with

template<typename T>
struct tracking_ptr
{
private:
    using MP = tracking_ptr<std::remove_cv_t<T>>;

public:
    tracking_ptr(tracking_ptr const& other);
    tracking_ptr(MP const& other);

    tracking_ptr(tracking_ptr&& other);
    tracking_ptr(MP&& other);
};

But this doesn’t work with tracking_ptr<Widget> because you now have pairs of identical constructors since the “non-const-to-T” versions are duplicates of the copy and move constructor when T is itself non-const. Substituting T = Widget, we get

template<typename T>
struct tracking_ptr
{
private:
    using MP = tracking_ptr<Widget>;

public:
    tracking_ptr(tracking_ptr<Widget> const& other);
    tracking_ptr(tracking_ptr<Widget> const& other);

    tracking_ptr(tracking_ptr<Widget>&& other);
    tracking_ptr(tracking_ptr<Widget>&& other);
};

And the compiler complains that you declared the same constructor twice. You would have to use SFINAE to remove the second one.

template<typename T>
struct tracking_ptr
{
private:
    using MP = tracking_ptr<std::remove_cv_t<T>>;

public:
    tracking_ptr(tracking_ptr const& other);

    template<typename U = T, typename = std::enable_if_t<std::is_const_v<U>>>
    tracking_ptr(MP const& other);

    tracking_ptr(tracking_ptr&& other);

    template<typename U = T, typename = std::enable_if_t<std::is_const_v<U>>>
    tracking_ptr(MP&& other);
};

On the other hand, redeclaring an inherited constructor overrides it, so we can just declare our constructors and not worry about conflicts.

But wait, our attempt to fix this problem introduced a new problem. We’ll look at that next time.

The post Thoughts on creating a tracking pointer class, part 10: Proper conversion appeared first on The Old New Thing.

[syndicated profile] in_the_pipeline_feed

Our public health and science research agencies have already been ransacked by the gang of vandals that makes up the current administration. But last night the damage became even more alarmingly clear.

It has not been an easy story to follow. CDC Director Susan Monarez was only confirmed for her job in July after the administration’s first pick (Dave Weldon) withdrew under a barrage of criticism for his views on autism and vaccines. But the White House announced yesterday that she was “not aligned with the President’s agenda”, and that she was no longer head of the agency. Later in the day came a statement from her lawyers that she “has neither resigned nor received notification from the White House that she was fired”, the implication being that in her position she could only be removed by a direct order from the President rather than by a press release. Meanwhile a White House spokesman said that she had indeed been fired because she refused to resign. As of this morning, the standoff continues, from what I can see.

There are stories sourced to unnamed administration officials that Monarez had attempted to publish an op-ed piece about the recent shooting incident at the agency but had been blocked from doing so, and that she had refused orders to fire other top CDC officials. Her public statements that vaccines save lives also seem to have gotten on the nerves of higher-ups, and that just by itself is a huge flashing red alarm, because this is a simple demonstrable fact that is accepted by physicians, scientists, and public health officials around the world.

And while this was happening, a whole set of other top officials at the agency were resigning as well. Chief Medical Officer Debra Houry left, saying in a letter to staff that “For the good of the nation and the world, the science at CDC should never be censored or subject to political pauses or interpretations.” The head of the National Center for Emerging and Zoonotic Infectious Diseases (Daniel Jernigan) resigned, as did the director of the Office of Public Health Data, Surveillance, and Technology, Jennifer Layden.

The head of the National Center for Immunization and Respiratory Diseases, Demetre Daskalakis, also resigned, and in the most public and detailed fashion of all. “I am not able to serve in this role any longer because of the ongoing weaponization of public health”, he said in a letter to colleagues. Later in the day he posted a public letter on social media, saying:

I am unable to serve in an environment that treats CDC as a tool to generate policies and materials that do not reflect scientific reality and are designed to hurt rather than improve the public’s health. The data analyses that supported [the recent change in vaccination schedules] have never been shared with CDC despite my respectful requests to HHS and other leadership. This lack of meaningful engagement was further compounded by a “frequently asked questions” document written to support the Secretary’s directive that was circulated by HHS without input from CDC subject matter experts and that cited studies that did not support the conclusions that were attributed to these authors. . .I have never experienced such radical non-transparency, nor have I seen such unskilled manipulation of data to achieve a political end rather than the good of the American people. . .Having to retrofit analyses and policy actions to match inadequately thought-out announcements in poorly scripted videos or page-long X posts should not be how organizations responsible for the health of people should function. . .”

He directly addressed the gunfire incident as well (and I should note that these officials and other CDC staff have been working in offices where they are still fixing the bullet holes):

The recent shooting at CDC is not why I am resigning. My grandfather, who I am named after, stood up to fascist forces in Greece and lost his life doing so. I am resigning to make him and his legacy proud. I am resigning because of the cowardice of a leader that cannot admit that HIS and his minions’ words over decades created an environment where violence like this can occur. I reject his and his colleagues’ thoughts and prayers, and advise that they direct those to people that they have not actively harmed

These officials are all, in my view, completely correct about what’s happening to the CDC under HHS Secretary Kennedy and under President Trump. I have gone off on Kennedy many times over his policies on vaccines, which are godawful, but it’s also important to remember that he is someone who stands up in front of cameras and says things like “I’m looking at kids as I walk through the airports today. . .and I see these kids that are just overburdened with mitochondrial challenges, inflammation - you can tell from their faces, movements, and lack of social connection”, and whose response to a school shooting is that “We’re launching studies on the potential contribution of some of the SSRI drugs and some of the other psychiatric drugs that might be contributing to violence”. He also just said that HHS will “reveal” the causes of autism in September. This man does not know what he is talking about, and he does not give a damn. If I had to work under this ignorant, arrogant liar I’d resign, too.

In any normal political world, Kennedy would be forced to resign or face being fired himself. Senator Patty Murray, to her credit, called for exactly this yesterday, and I would very, very much like to hear national Democratic leaders like Chuck Schumer and Hakeem Jeffries say it as well. Right after they do that they can get back to talking about the price of ground beef like their consultants say they should, really.

We are not living in that normal political world. There’s an even larger and more horrible issue here, and that last paragraph from Daskalakis really lays it out. “Fascist” has been a political swear word for many decades, trotted out whenever anyone feels the need for a nasty insult (especially when pointing from left to right). That makes it easy to forget that it is a word with a real meaning. In the same way, for many decades there have been warnings, editorials, dystopian novels and movies and TV shows about a Fascist takeover in the US, and these have a desensitizing effect as well - yeah sure, right, whatever.

Unfortunately, we now have a President who says things like this (these are all from just the last couple of days, on camera in the White House): “I have the right to do anything I want to do. I’m the president of the United States” and “A lot of people are saying maybe we’d like a dictator” and “The line is that I’m a dictator, but I stop crime. So a lot of people say ‘You know if that’s the case, I’d rather have a dictator.’” This is a president who has relentlessly singled out particular groups as enemies of the country that have to be arrested and repressed, who has brought the Federal government into taking large stakes in major industries, who has called out the US military into major US cities on his whims, and whose very inauguration was lined with billionaire oligarchs who support him in exchange for favors. As I write, soldiers armed with pistols and long rifles are walking along DC streets, some of them under a giant hanging portrait of Trump. The latest fighter jet has been designated the F-47 for him, and there is a report that the training course for ICE agents was specifically shortened to 47 days for the same reason. What the hell else do you call this?

It happened. It’s here. The first key step is realizing it, as awful as that is to take on. At the rate that Trump and his people are pushing down on the accelerator, I don’t even know what this country is going to look like by the midterm elections if he gets his way. We have to do whatever we can to keep it from happening.

Don’t Transition Everything

Aug. 28th, 2025 02:03 pm
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

This is a CSS performance problem I see all too often. The React website has some jank due to their use of transition. Here is how to find what causes it and how to fix it

Wes Bos (@wesbos.com) 2025-08-07T14:07:33.440Z
[syndicated profile] littletinythings_feed

New comic!

Hey folks, I got some announcements!

You may have seen the website change a little, with ads being removed and a new webcomic jumpbar added beneath the comic page (it's Chimera, a new webcomic collective!)... that's because I've been on the move and leaving Hiveworks!

I won't get into details as to why, but know that it was my decision to make my own way forward, with the conviction that my comics will be better off in the long term.

For now, the departure from Hiveworks (who was crucial in my growth as a webcomic artist, and I'll never forget it!) is shaking things up a bit and leaving a dent in my wallet for sure, as their services covered website hosting, online store management and the Hyvor comment subscription.

The ad revenues were a nice bonus to have too, not to be scoffed at. But in the end, I'm very happy to not have those polluting my websites any longer. I don't know about you, but the WAR TANKS (or whatever) ads didn't really fit with the spirit of Little Tiny Things :D

If ever you're able to support me and my comics financially, then here are a few ways you can do so: 

  •  Patreon: it's been my main income for years now, and that's absolutely fantastic and I feel incredibly grateful for this already! It covers costs for basic living, and monthly bills, but it doesn't cover yearly bills or any extra bits. You can also sign up as a free member if you want to follow without paying anything for now.

  • Ko-Fi: a way to support me financially without the engagement of donating every month

  • itch.io: I sell my nsfw Harley/Ivy fancomics there! They're based off of the Harley Quinn: Animated series, but I hope to make more in the years to cum-- come!!

And of course, if you can't donate anything, that's fine too! Please, take care of yourself first if you need it!! And if my comics can help YOU out, one way or another, then I consider my job well done and my soul will recharge its batteries off of that (if you're okay with it).

And it's not like I'm betting  e v e r y t h i n g  on internet donations and sales; I'm dedicating a good chunk of my time and creativity to working with a house publisher that *should* lead to a nice & juicy contract to help me put money to the side for bigger life projects (fingers crossed!)

Which leads to the next big change: Little Tiny Things will slow its updates to 1 page a week! It really isn't something I'd do without some GOOD reasons, and I knew one day it'd happen but I always pushed it back. Now though, life is giving me the nudges and the signs that it's a good time to apply this change, and to finally give myself more time to develop the other projects I have in store.

Mainly: Headless Bliss' new comic version that I've been working on (for months now) with an editor from a well-known Belgian publisher. No contracts have been signed yet, so it's too soon to celebrate! But if I want this deal to be confirmed, I need more time to draw up some RAD demo pages that will make them go "HOLY SHIT YEAH".

And so, the time for Little Tiny Things to stand back has arrived (sob). At least, temporarily so.

Maybe in the future I'll make newsletters or something to make these announcements more digestible, but in the meantime: have a splendid day and thanks for reading!!

clo:ver



[syndicated profile] frontendmasters_feed

Posted by Amit Sheen

There’s a whole layer of CSS that lives just below the surface of most interfaces. It’s not about layout, spacing, or typography. It’s about shape. About cutting through the default boxes and letting your UI move in new directions. This series is all about one such family of features, the kind that doesn’t just style your layout but gives you entirely new ways to shape, animate, and express your interface.

In this first part, we’ll explore clip-path. We’ll start simple, move through the functions and syntax, and work our way up to powerful shape logic that goes way beyond the basic polygons you might be used to. And just when you think things can’t get any more dynamic, part two will kick in with offset-path, where things really start to move.

What is clip-path in CSS?

At its core, clip-path lets us control which parts of an element are visible. It’s like a stencil or cookie cutter for HTML elements. Instead of displaying a rectangular box, you can show just a circle, a triangle, a star, or any complex shape you define. And you can do it with a single line of CSS.

This opens the door to more expressive designs without relying on images, SVG wrappers, or external tools. Want to crop a profile picture into a fancy blob shape? Easy. Want to reveal content through a custom cutout as a hover effect? Done. That’s exactly where clip-path shines. But to use it effectively, we need to understand what it’s made of.

Before the syntax

To really get clip-path, let’s break it into two basic concepts: clip and path. No joke, each one of those carries an important lesson of its own.

This is not the “clip” you know

We’ve all seen clipping in CSS before, usually through the overflow property, set to hidden or clip. By doing so, anything that spills out of the element’s box just vanishes.

But here’s the key difference. While the overflow property clips the content of the element (on the padding box for hidden, and on the overflow clip edge for clip), the clip-path property clips the element itself.

This means that even the simplest clip-path, which visually mimics overflow clipping, will still hide parts of the element itself. That includes things like a box-shadow you were expecting to see, or an outline on a button that suddenly disappears and breaks accessibility.
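For example (a minimal sketch; the class name is made up), clipping exactly to the element's own border box still erases anything painted outside it:

```css
/* Hypothetical .card: the visible shape looks unchanged, but the shadow
   and outline vanish, because clip-path clips the element itself. */
.card {
  box-shadow: 0 4px 12px rgb(0 0 0 / 0.4);
  outline: 2px solid blue;
  clip-path: inset(0); /* clips exactly at the border box */
}
```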

Also worth noting: just like overflow, clip-path lives entirely in two dimensions. No depth, no perspective. It flattens everything. That means transform-style: preserve-3d is ignored, and any 3D motion will stay locked to the element’s plane.

The “path” to success

This one trips people up. Especially when you’re working with functions like polygon(), it’s tempting to think of the shape as just a bunch of points. But it’s not just the points that matter, it’s the order they come in. You’re not dumping coordinates into a bucket, you’re connecting them, one by one, like a game of “connect the dots.”

A connect-the-dots illustration of a dinosaur character, featuring numbered dots in a sequence, set against a black background.

The path is the journey from one point to the next. The way you sequence them defines the outline, the curves, and eventually the clipped shape. If the points are out of order, your shape won’t behave the way you expect.

Values and Coordinates

You can set the coordinates for your shapes in absolute units like pixels, which stay fixed regardless of the element’s size, or in relative units like percentages, which adapt based on the element’s dimensions. Absolute values give you precision, while relative values make your shapes more responsive. In practice, you’ll often mix the two to balance consistency and flexibility.

By default, every shape you define with clip-path is calculated relative to the element’s border-box. This means the point 0 0 sits at the top-left corner of that box, and all coordinates extend from there. Positive X values move to the right, and positive Y values move down.

Note that you’re not limited to the border-box; the clip-path property also accepts an optional <geometry-box> value, which lets you choose the reference box for your shape, giving you more control over how the clip is applied.
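For instance (illustrative selectors), the same circle can be measured against different reference boxes:

```css
/* Default: the shape is resolved against the border-box */
.a { clip-path: circle(50%); }

/* Explicit reference boxes change where 0 0 and percentages resolve */
.b { clip-path: circle(50%) content-box; }
.c { clip-path: circle(50%) padding-box; }
```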

Basic Shapes

Let’s begin with the simplest shape of all. The circle() function creates a circular clipping path that allows you to cut content into a perfect circle shape. This function accepts two main parameters: the radius of the circle and its center position.

The basic syntax follows this pattern:

clip-path: circle(radius at position);

The radius can be specified in various units, like pixels (px), percentages (%), or viewport units (vw, vh). The position defines where the center of the circle should be placed, using coordinates relative to the element’s dimensions.

This demo shows a live preview of the circle() function in action. You can drag the control nodes to adjust both the center position and radius of the circular clip path. As you manipulate these controls, you’ll see the clipped area update in real time, and the corresponding CSS values will be displayed below the preview.

Use the checkbox to toggle between pixel and percentage values to see how the result can be expressed in different units. This is particularly useful when you need responsive clipping that adapts to different screen sizes.

Using Keywords

Beyond specific coordinate values, CSS also supports several convenient keywords for positioning the circle’s center. You can use keywords like center, top, bottom, left, and right, or combine them for more precise placement, such as top left or bottom right. These keywords provide a quick way to achieve common positioning without calculating exact pixel or percentage values.

You can also use special keywords for the radius: closest-side and farthest-side. The closest-side keyword sets the radius to the distance from the center to the closest edge of the element, while farthest-side extends the radius to the farthest edge.

For example:

clip-path: circle(50px at left);
clip-path: circle(30% at top right);
clip-path: circle(closest-side at 50% 25%);
clip-path: circle(farthest-side at center);

Slightly stretched: ellipse()

Now let’s take that circle and give it two radii instead of one. The ellipse() function works similarly to circle(), but instead of creating a perfect circle, it produces an oval shape by accepting two separate radius values. This gives you independent control over both the horizontal and vertical dimensions of the clipping shape.

The syntax extends the circle pattern with an additional radius parameter:

clip-path: ellipse(radiusX radiusY at position);

This demo shows the ellipse() function with three control nodes that allow you to independently adjust the horizontal and vertical radii. Notice how you can create anything from a wide, flat oval to a tall, narrow shape by manipulating these controls separately.
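A couple of examples of the two-radius syntax (class names are made up):

```css
/* Wide, flat oval: large horizontal radius, small vertical radius */
.wide { clip-path: ellipse(60% 30% at center); }

/* Tall, narrow shape: the radii are reversed */
.tall { clip-path: ellipse(30% 60% at 50% 50%); }
```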

Rectangular Shapes

While circle() and ellipse() create curved clipping paths, CSS also provides several functions for creating rectangular clips. These functions offer different approaches to defining the same basic shape: a rectangle with straight edges.

inset(), rect(), and xywh()

These three are all about boxes, but each one approaches it differently.

  • inset() defines distances to clip inward from each edge. It's like padding in reverse: instead of adding space inside the box, you remove it.
  • rect() uses absolute coordinates from the top-left corner to define the rectangle’s edges. A legacy function from the old clip property, but still valid and supported in CSS.
  • xywh() defines a rectangle by position and size. The first two values set the X and Y coordinates for the top-left corner, and the next two define the width and height. Clean and straightforward.

This demo lets you compare all three rectangular functions using the same visual controls. Drag the red control lines to adjust the clipping boundaries, and use the dropdown to switch between the different function syntaxes. Notice how the same visual result produces different coordinate values depending on which function you choose.

The inset() function is generally the most intuitive since it works similarly to CSS padding, while rect() follows the traditional clipping rectangle approach. The newer xywh() function uses a more familiar x, y, width, height pattern commonly found in graphics programming.
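To make the difference concrete, here are the three functions producing the same rectangle on a hypothetical 200×150px element: a 100×80 box whose top-left corner sits 10px from the element's top-left corner.

```css
/* inset(): distances clipped inward from each edge (top right bottom left) */
.a { clip-path: inset(10px 90px 60px 10px); }

/* rect(): edge positions measured from the top and left edges */
.b { clip-path: rect(10px 110px 90px 10px); }

/* xywh(): top-left corner position, then width and height */
.c { clip-path: xywh(10px 10px 100px 80px); }
```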

Now for the fun part: polygon()

Here’s where things get interesting. While circles, ellipses, and rectangles are useful, they’re also predictable. The polygon() function is where you start building custom shapes, point by point, corner by corner.

At its heart, polygon() is wonderfully straightforward. You define a series of coordinate pairs, and CSS connects them in order to create your shape:

clip-path: polygon(x1 y1, x2 y2, x3 y3, ...);

Remember when we talked about the “path” concept earlier? This is where it really shows. Each coordinate pair is a waypoint, and CSS draws straight lines between them in the exact sequence you provide. Here’s a perfect example of why order matters. Take these five points:

/* Pentagon-like shape */
clip-path: polygon(50% 0%, 98% 35%, 79% 91%, 21% 91%, 2% 35%);

/* Same points, different order - creates a star */
clip-path: polygon(50% 0%, 79% 91%, 2% 35%, 98% 35%, 21% 91%);

Same coordinates, completely different shapes. The first creates a neat pentagon-like outline, while the second forms a classic five-pointed star. It’s that simple connection from point to point that builds your final shape.

Polygon Builder

Here’s a demo that lets you create and modify polygons in real time. You can drag the red control nodes to reshape your polygon, add or remove points, and see the resulting CSS code update instantly. Toggle the checkbox to switch between pixel and percentage values for responsive design.

Use the “Add Node” button to introduce new points along your polygon’s edges, or “Remove Node” to simplify the shape. Notice how each modification creates a completely new path—and how the order of your points defines the final appearance.

When Straight Lines Aren’t Enough

Polygons are powerful, but they have one fundamental limitation: they’re made entirely of straight lines. Sometimes your design calls for curves, smooth transitions, or complex shapes that can’t be achieved by connecting points with straight edges. That’s where path() and shape() step in.

path(): Raw Power, Borrowed from SVG

The path() function brings the full power of SVG path syntax directly into CSS. If you’ve ever worked with vector graphics, this will feel familiar. The syntax is identical to SVG’s <path> element:

clip-path: path("M 10,10 L 50,10 L 50,50 Z");

You can use any SVG path command: M for move, L for line, C for cubic curves, Q for quadratic curves, and so on. This gives you incredible precision and the ability to create complex shapes with smooth curves and sharp angles exactly where you want them.

If you’re not comfortable writing path commands by hand, there are plenty of free online SVG path editors like SVG Path Editor or Boxy SVG that can generate the path string for you.

Here’s a simple heart shape as an example:

clip-path: path("M100,178 L87.9,167 C45,128 16.7,102 16.7,71 C16.7,45 37,25 62.5,25 C77,25 90.9,32 100,42 C109.1,32 123,25 137.5,25 C163,25 183.3,45 183.3,71 C183.3,102 155,128 112.1,167 Z");

But here’s the catch: because path() comes from the SVG world, it only works with absolute values. There are no percentages, no responsive units. If your element changes size, your clipping path stays exactly the same. For truly flexible, responsive shapes, we need something more modern.

shape(): The Modern Approach

Enter shape() – CSS’s answer to the limitations of path(). It provides the same curve capabilities as path() but with a more CSS-friendly syntax and support for relative units like percentages.

Here’s the same heart shape, but using shape() with relative coordinates:

clip-path: shape(
  from 50% 89%,
  line to 43.95% 83.5%,
  curve to 8.35% 35.5% with 22.5% 64% / 8.35% 51%,
  curve to 31.25% 12.5% with 8.35% 22.5% / 18.5% 12.5%,
  curve to 50% 21% with 38.5% 12.5% / 45.45% 16%,
  curve to 68.75% 12.5% with 54.55% 16% / 61.5% 12.5%,
  curve to 91.65% 35.5% with 81.5% 12.5% / 91.65% 22.5%,
  curve to 56.05% 83.5% with 91.65% 51% / 77.5% 64%,      
  close);

This demo shows the same heart shape created with both methods. The key difference becomes apparent when you resize the containers. Grab the bottom-right corner of each shape and drag to change its size.

Notice how the path() version maintains its fixed pixel dimensions regardless of the container size, while the shape() version scales proportionally thanks to its percentage-based coordinates. This responsiveness is what makes shape() particularly powerful for modern web design and represents the future of CSS clipping paths.

Syntax Table

If you’re coming from an SVG background, you’ll find the transition to shape() remarkably intuitive. The syntax translates beautifully from SVG path commands, maintaining the same logic while embracing CSS’s flexible unit system.

Just as SVG paths distinguish between absolute (uppercase) and relative (lowercase) commands, shape() uses the keywords to and by. Commands with to are positioned relative to the element’s origin, while commands with by are positioned relative to the previous point in the path.
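For example (illustrative values), here is the same square traced once with absolute to commands and once with relative by commands:

```css
/* Each "to" coordinate is measured from the reference box's origin */
.absolute { clip-path: shape(from 10% 10%, line to 90% 10%, line to 90% 90%, line to 10% 90%, close); }

/* Each "by" coordinate is an offset from the previous point */
.relative { clip-path: shape(from 10% 10%, line by 80% 0%, line by 0% 80%, line by -80% 0%, close); }
```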

| SVG Path | shape() Equivalent | Description |
|---|---|---|
| M/m | from | Set first point |
| M 10 20; m 10 20 | move to 10px 20px; move by 10px 20px | Move point |
| L 30 40; l 30 40 | line to 30px 40px; line by 30px 40px | Draw line |
| H 50; h 50 | hline to 50px; hline by 50px | Horizontal line |
| V 60; v 60 | vline to 60px; vline by 60px | Vertical line |
| C x1 y1 x2 y2 x y; c x1 y1 x2 y2 x y | curve to x y with x1 y1 / x2 y2; curve by x y with x1 y1 / x2 y2 | Cubic curve with two control points |
| Q x1 y1 x y; q x1 y1 x y | curve to x y with x1 y1; curve by x y with x1 y1 | Quadratic curve with one control point |
| S x2 y2 x y; s x2 y2 x y | smooth to x y with x2 y2; smooth by x y with x2 y2 | Smooth cubic curve with one control point |
| T x y; t x y | smooth to x y; smooth by x y | Smooth quadratic curve with no control point |
| A rx ry angle la sw x y; a rx ry angle la sw x y | arc to x y of rx ry sw la angle; arc by x y of rx ry sw la angle | Arc with radii, rotation, and flags |
| Z/z | close | Close the path |

Self-Intersecting Polygons and Fill Rules

Here’s where things get mathematically interesting. When you create shapes where lines cross over each other, CSS has to decide which areas should be visible and which should remain transparent. This is controlled by fill rules, and understanding them unlocks some powerful creative possibilities.

CSS supports two fill rules: evenodd and nonzero. The difference becomes clear when you see them in action. Here’s a simple rounded star with both fill rules:

  • Even-odd rule: (on the left) Think of it as a simple counting game. Draw an imaginary line from any point out past the edge of your element. Every time that line crosses a path edge, count it. If you end up with an odd number, that area gets filled. Even number? It stays transparent. This is why star centers appear hollow: a ray leaving the center crosses an even number of edges.
  • Nonzero rule: (default value, on the right) This one’s about direction and flow. As your path travels around the shape, it creates a “winding” effect. Areas that get wound in one direction stay filled, while areas where clockwise and counter-clockwise paths cancel each other out become transparent. In most simple shapes like our star, everything winds the same way, so everything stays filled.

This gives you precise control over complex self-intersecting shapes, letting you create intricate patterns with internal cutouts or solid fills, all depending on which fill rule you choose.
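To try this yourself, note that polygon() (and shape()) accept an optional fill-rule keyword as their first argument. A minimal sketch, with hypothetical class names, using the classic self-intersecting five-point star:

```css
/* The same self-intersecting star path, filled two ways. */
.star-hollow-center {
  clip-path: polygon(evenodd, 50% 0%, 21% 90%, 98% 35%, 2% 35%, 79% 90%);
}
.star-solid-center {
  clip-path: polygon(nonzero, 50% 0%, 21% 90%, 98% 35%, 2% 35%, 79% 90%);
}
```

The path connects every second vertex of a pentagon, so the edges cross; evenodd leaves the pentagonal center transparent, while nonzero fills it.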

Wrapping up

We’ve covered a lot of ground here. From simple circles to complex self-intersecting stars, clip-path gives you an entirely new vocabulary for shaping your interface. We started with basic geometry, built up to custom polygons, and finally broke free from straight lines with curves and precision.

But here’s the thing: everything we’ve explored so far has been about containment. About cutting away, hiding, cropping. We’ve been thinking inside the box, even when we’re changing its shape.

What if I told you there’s another way to think about paths in CSS? What if, instead of using them to constrain and contain, you could use them to guide and direct? What if your elements could follow curves, travel along custom routes, and move through space in ways that feel natural and intentional?

That’s exactly where we’re heading in part two. We’re going to shift from static shapes to dynamic motion, from clip-path to offset-path. Your elements won’t just be differently shaped—they’ll be dancing along curves you design, following trajectories that bring your interface to life.

The path of least resistance is about to get a whole lot more interesting.

Lithium Orotate Revisited

Aug. 25th, 2025 12:39 pm
[syndicated profile] in_the_pipeline_feed

After that big lithium-and-Alzheimer’s paper recently, I thought a look at the chemistry of the lithium orotate used therein would be worthwhile. So let’s get into ion behavior for a bit:

As the chemists in the crowd know, there are several general behaviors that you see for ionic compounds in solution. If you think of all ionic substances as fully solvent-separated solvated ions once they're in solution, just ions, all the same, the other possibilities are going to sneak up on you. And these vary according to both the anion and cation, naturally, and according to the concentration, and very much so with the nature of the solvent and whatever other species might be floating around in there (overall ionic strength is certainly a factor, for one). Let’s stick with water as the solvent for the three most distinct classifications:

1. A fully solvated ion pair. That’s what you’d see with (for example) a low concentration of sodium chloride in water. The most energetically favorable state has the sodium cation and the chloride anion each surrounded by their own “solvation shells” of water molecules; it’s like they are each in their own bubbles of slightly-more-orderly water. The ions are not really “seeing” each other at all.

2. A solvent-separated ion pair, which can also be known as an “outer-sphere complex”. In this situation the anion and cation are separated by (pretty much) a single layer of water molecules (or indeed a single water molecule itself). In this case there certainly is an electrostatic interaction between the two ions, but the lowest energetic state of the system includes a solvent molecule in there too.

3. A contact ion pair, which can also be known as an “inner-sphere complex”. Here the anion and cation are right next to each other, fully electrostatically paired. Indeed, this situation can usually be described as “partially covalent”; the interaction is that tight. It’s like the far end of the spectrum of polarized covalent bonds, like drawing a sulfoxide as an S-plus connected to an O-minus. The two ions are surrounded by a common solvation shell of water molecules; there’s nothing between them.

There are several factors that go into the thermodynamics of these states. There’s outright Coulombic attraction (positive charges and negative ones), but note that Coulomb’s Law includes a term in the denominator for the dielectric constant of the medium (so water is going to be rather different than less polar solvents and more apt to separate things). And you’ll also have to keep in mind that your ions are going to have a polarizing effect on those nearby solvent molecules, somewhat cancelling out the situation compared to “naked charges” alone. You’ve also got enthalpic contributions from all those solvation interactions with the water molecules, balanced with the entropy changes that come from making more orderly solvation shells out of those waters. And there’s the loss of entropy that comes from having ions associated with each other rather than swimming around randomly.
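For reference, that Coulombic term looks like this, with the relative permittivity (dielectric constant) of the medium sitting in the denominator:

```latex
F = \frac{1}{4\pi\varepsilon_0\varepsilon_r}\,\frac{q_1 q_2}{r^2}
```

Water's relative permittivity is about 80, versus roughly 2 for a nonpolar hydrocarbon, which is why water is so good at prying ion pairs apart.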

OK, now what do we know about lithium orotate’s behavior? I ask because many people (in the comments here and elsewhere) have had a hard time imagining that it can be all that different from any other lithium salt. With lithium chloride or lithium carbonate, you would absolutely expect the two ions to go off on their separately solvated adventures by themselves, so why shouldn’t any lithium whateverate do the same?

It is a question with a surprisingly long and controversial history, which is very well summed up here, and in even more detail in this article. In short, claims were made in 1973 that lithium orotate dosing led to higher CNS concentrations than lithium carbonate dosing. A followup study in 1976 did not confirm this, but another in 1978 apparently did see such differences (up to threefold higher concentrations with the orotate). A 1979 followup, though, suggested that this could be an artifact of impaired renal function after the orotate dosing, and that report seems to have shut down this area of inquiry for some time. More recent toxicological investigations have not seen any such effects, however. In fact, lithium carbonate seems to have more renal toxicity problems itself - it’s possible that lithium orotate is a safer compound, pharmacokinetic and efficacy claims aside.

But what about those pharmacokinetic differences? Are they real, and if so, how does this occur? Well, the PK of lithium salts in general seems to be a battleground (see section 6.1 here). Most lithium dosing in the psychiatric field is lithium carbonate, but that’s due to its easier formulation compared to lithium chloride (it’s non-hygroscopic, i.e. it does not soak up moisture from the air). Lithium chloride itself has some regulatory issues left over from its (over)use in salt substitutes in the 1930s and 40s as well. Lithium citrate is available as a substitute for people who have difficulty swallowing the lithium carbonate caplets, and there are varying reports of whether it has any PK differences compared to the carbonate. Lithium sulfate seems to have no real differences.

Orotate salts, though, may well be a different matter. It’s been observed, for example, that magnesium orotate does not have the laxative effects of common magnesium salts, which suggests that it does not ionize under physiological conditions the way that those do. The lithium/Alzheimer’s paper showed that lithium orotate solutions showed notably lower conductivity than other lithium salt solutions, and that is indeed a measure of their degree of ionization (i.e., more contact ion pairing than for the other salts). It is possible that the lithium-orotate pair is handled as a single substance. At the destination end, there is evidence that orotate is transported via a urate receptor (URAT1) which is found in both the kidney and the choroid plexus (for entry into the brain), and it may be taken up through nucleotide transporters as well. And once in the cell, orotate is already an intermediate in pyrimidine synthesis, which might be a way to finally liberate the lithium counterion.

More needs to be done to shore up all these ideas, but they are not implausible. This paper goes a way towards that, showing that lithium-driven mouse behavioral assays are significantly different with the orotate salt, and that inhibition of anion transport pathways (or of the pentose phosphate pathway for nucleotide synthesis) seem to shut off these effects. So there is reason to think that lithium orotate could indeed be different from other lithium salts, and that these differences are exploitable for its use in lithium supplementation into the CNS. That of course is a separate issue from “Is lithium deficiency the cause of Alzheimer’s” and from “Would lithium supplementation be a useful Alzheimer’s therapy”. But it would behoove us to figure this out in case the answer to either of those latter questions is “yes”.

[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

Say you’ve got a page with a bunch of <details> elements on it.

Your goal is to be able to send someone to that page with one particular details element open.

I was doing just this recently, and my first thought was to do it server-side. If the URL was like website.com/#faq-1 I’d see if faq-1 matches an ID of the details element and I’d put the open attribute on it like <details id="faq-1" open>. But no, you don’t get to have the #hash as part of the URL server side (the browser never sends the fragment to the server 🤷‍♀️).

Then I started writing JavaScript to do it, where you definitely can access the hash (window.location.hash). I’d just querySelector for the hash and if I found a matching details element, I’d open it up.
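That abandoned JavaScript approach would have looked something like this (a sketch; the function name is just for illustration, and the lookup is factored out so the logic is easy to follow):

```javascript
// Open the <details> that contains the element the URL hash points to.
// Takes the document as a parameter so the logic can be exercised outside a browser.
function openDetailsForHash(doc, hash) {
  if (!hash) return false;
  const target = doc.querySelector(hash); // e.g. "#faq-1"
  const details = target && target.closest("details");
  if (details) {
    details.open = true;
    return true;
  }
  return false;
}

// In the browser you would call:
// openDetailsForHash(document, window.location.hash);
```

As the rest of the post shows, though, none of this is actually necessary.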

Then I was reminded you don’t need to do this at all. What you need is (drumroll)… HTML.

The trick is hash-linking to an element inside the <details>. So like:

<details>
  <summary>What was Rosebud in Citizen Kane?</summary>
  <div id="faq-1">A sled.</div>
</details>

Now, if you hash-link to #faq-1, the browser will know that it has to open that <details> in order for it to be seen, so it does. You don’t normally need a <div> wrapper or anything inside the details element, but we’re doing it here as it’s obviously handy.
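So a link from anywhere (the page URL here is hypothetical) just targets that inner id:

```html
<a href="https://website.com/faq#faq-1">What was Rosebud in Citizen Kane?</a>
```

Following that link scrolls to the answer and the browser opens the enclosing <details> for you.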

Here’s a demo of a page that is set up in this way:

It’s probably more interesting to just visit this hash-link URL and see it open right up and work.

This came up for me while working on this documentation page where I wanted to be able to link to specific things that were otherwise “buried” in details elements.

As a bit of an enhancement, you might want to consider CSS like this:

:target {
  background: yellow;
  scroll-margin-block-start: 4rem;
}

That will apply some styling to the element that matches the hash in the URL, as well as push it away from the top edge of the browser window a little bit. In this case, it helps make sure the FAQ question is actually visible, not just the answer.

Go Make Me Fifty Kilos of This Stuff

Aug. 26th, 2025 12:14 pm
[syndicated profile] in_the_pipeline_feed

Here's a nice paper that goes into the details of scale-up of a common reaction, and even if you’re not into organic synthesis it’ll show you the sorts of issues that drug manufacturing has to deal with. The authors (from Takeda in Japan) were looking at the reaction shown at right, which the medicinal chemists in the audience will assure you is a 100%-right-down-the-middle example of a typical drug synthesis step. The Suzuki-Miyaura palladium-catalyzed coupling conditions are the obvious choice for this transformation, but that phrase encompasses hundreds, thousands, who knows how many variations in catalysts, solvents, temperatures, additives and so on.

The team found that a set of very standard conditions worked well (70/30 mix of isopropanol/water, bis(triphenylphosphine) palladium dichloride as catalyst, potassium carbonate as base), and indeed that setup is almost bound to give you some product for most of these reactions. By adjusting the ratios of reagents, the concentrations, and the temperature the reaction was well-optimized for yield and for purity. The amount of boronic acid coupling partner was the biggest variable, as it turned out, and the peskiest side product was dimerization of the starting chloro compound. Another step that was optimized was treatment of the crude product with a solution of L-cysteine as a palladium scavenger (getting the Pd out of there is always a consideration after these reactions).

But then came time to do all of this on a fifty-kilo scale, and that’s when folks like me wave goodbye and wish everyone well, because I have never done a fifty-kilo reaction in my life and never plan to. (I think my experience has maxed out at between one and two kilos, and I did not enjoy those reactions very much, in retrospect). The first interesting wrinkle was that the contract manufacturing organization that was hired to do this work was located at notably higher altitude, so the boiling-solvent-mixture conditions for the original reaction took place at a lower temperature for them. The reaction just could not be driven to completion under those conditions. Just as you do when baking a cake at altitude, adjustments had to be made - in this case, it meant running things in a sealed vessel to get the temperature back up again under a bit of pressure.

Unfortunately, the first runs at 50 kg scale, while going to completion, produced material that had too many impurities compared to the smaller reference runs. Specifically, the leftover palladium levels were “alarmingly high”. And there was a whole list of things that could have changed: there’s that overpressure, to start with. There’s stirring and mixing, which is always going to be different on large scale. There’s heating, very much likewise (for example, the lab-batch reactions were immersed in a 100 °C oil bath, while the 50 kg reactions were brought up to temperature in steam-jacketed reactors whose walls reached higher temperatures).

Further experiments showed that the only one of these that really affected the reaction seemed to be the higher-temperature heating, and that wasn’t hurting the yield, just the Pd-residue purity. Even further work showed that a big factor in those palladium levels was the presence (or absence) of air and oxygen. The lab-scale batches were exposed to ambient air at several points, while the 50 kg reactors were strictly nitrogen-purged. And it turns out that you need some air in there, especially at higher temperatures. Pd species produced at those hot spots in the walls of the reactor were difficult to remove under the standard cysteine workup if they had not seen oxygen, but were much easier to clear out if air was introduced (probably because they were being oxidized to Pd(II)). So a tube bubbling air into the reactors was introduced, although that had to be done as a separate step at the end of the reaction time (which was still effective at oxidizing the Pd species). Furthermore, the nitrogen atmosphere was switched to static as opposed to nitrogen-flow, to keep from stripping residual oxygen out of the reaction.

Normally you feel safer keeping air (and especially oxygen) out of your reaction mixtures, not least because you don’t want anything igniting, but in this case it was crucial. This is a potential blind spot for scale-up, as the paper notes, particularly with the temperature changes producing new Pd species that had to be dealt with. Every little detail counts in this work!

[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

WebKit/Safari has rolled out a preview version of random() in CSS.

Random functions in programming languages are amazing. You can use them to generate variations, to make things feel spontaneous and fresh. Until now there was no way to create a random number in CSS. Now, the random() function is on its way.

Upon first play, it’s great!

This is only in Safari Technical Preview right now. I’ll post videos below so you can see it, and link to live demos.

CSS processors like Sass have been able to do this for ages, but it’s not nearly as nice in that context.

  1. The random() numbers in Sass are only calculated at compile time. So they are only random once: refreshing the page doesn’t give you newly random values.
  2. Random numbers are usually paired with a loop. So if you want 1,000 randomly placed elements, you need 1,000 :nth-child() selectors with a randomly generated value in each, meaning bulky CSS.
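For the record, that old Sass pattern looked roughly like this (a sketch, not the actual demo):

```scss
// One compiled rule per element; the values are frozen at compile time.
@for $i from 1 through 1000 {
  .star:nth-child(#{$i}) {
    // Sass random($limit) returns a whole number from 1 to $limit
    top: random(200) * 1px;
    left: random(400) * 1px;
  }
}
```

A thousand iterations means a thousand compiled :nth-child() rules, which is exactly the bulk described above.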

With random() in vanilla CSS, no such loops are needed, making the code quite simple and satisfying.

I found a 12-year-old Sass demo of mine playing with random() that is like this:

This compiled to over 200 lines of CSS.

But now it’s just like this:

Demo

Much of the magic, to me, is how each matching element gets its own random values. So if you had three things like this:

<div class="thing"></div>
<div class="thing"></div>
<div class="thing"></div>

Then this simple CSS could make them all quite different:

.thing {
  position: absolute;
  background: red;
  width: 100px;
  height: 100px;
  
  top: random(10px, 200px);
  left: random(100px, 400px);
  
  background: rgb(
    random(0, 255),
    random(0, 255),
    random(0, 255)
  );
}
Demo

The blog post doesn’t mention “unitless” numbers like I’ve used above for the color, but they work fine. If you’re using units, they need to be the same across all parameters.

The “starfield” demo in the blog post is pretty darn compelling!

Demo

I found another old demo where I used a bit of randomized animation-delay where in the SCSS syntax I did it like this:

animation-delay: (random(10) / 40) + s;

Notice I had to append the “s” character at the end to get units there. Now in vanilla CSS you just declare the range with the units on it, like:

animation-delay: calc(random(0s, 10s) / 40);

And it works great!


The feature does have a spec, and I’m pleased that it has covered many things that I hadn’t considered before but are clearly good ideas. The blog post covers this nicely, but allow me to reiterate:

.random-rect {
  width: random(100px, 200px);
  height: random(100px, 200px);
}

Both the width and height will be different random values. But if you want them to be the same random value, you can set a custom ident value that will “cache” that value for one element:

.random-square {
  width: random(--foo, 100px, 200px);
  height: random(--foo, 100px, 200px);
}

Nice!

But if you had 20 of these elements, how could you make sure all had the same random values? Well there is a special keyword for that, ensuring all matched elements share the same random values:

.shared-random-rect {
  width: random(element-shared, 100px, 200px);
  height: random(element-shared, 100px, 200px);
}

In that case, all matched elements share the same random values, but the width and height still won’t equal each other. So you combine both to make everything equal:

.shared-random-squares {
  width: random(--foo element-shared, 100px, 200px);
  height: random(--foo element-shared, 100px, 200px);
}

That’s all very nicely considered, I think.


Ranges are also handled with a final parameter:

top: random(10px, 100px, 20px);
transition-delay: random(0s, 5s, 1s);

The top value above can only be: 10px, 30px, 50px, 70px, or 90px.

The transition-delay value above can only be: 0s, 1s, 2s, 3s, 4s, or 5s.

Otherwise you can get arbitrary decimal values, which might be more random than you want. Even a 1px increment for random pixel values seems to be suggested.

(Note the WebKit blog has a code sample with by 20deg in it, which I think is a typo as that doesn’t work for me.)


I didn’t have a chance to try it yet — but doesn’t it make you wanna force a re-render and see if it will work with document.startViewTransition??
