Does Dioxus spark joy?

Nov. 22nd, 2025 06:30 pm
[syndicated profile] fasterthanlime_feed

Posted by Amos Wenger


Note: this article is adapted from a presentation I gave at a Rust Paris Meetup — that’s why it sounds a little different than usual. Enjoy!

Good evening! Tonight, I will attempt to answer the question: Does Dioxus spark joy? Or at the very least, whimsy.

What’s Dioxus, you ask? It is first and foremost a name that is quote: “legally not inspired by any Pokémon”.

The Deoxys Pokémon

Even if the author concedes in a Hacker News comment that the “Deoxys” Pokémon is, I quote: “awesome”.

[syndicated profile] acoup_feed

Posted by Bret Devereaux

This is the second part of what looks like it’ll end up as a four part series discussing the debates surrounding ancient Greek hoplites, the heavy infantry of the Archaic (800-480) and Classical (480-323) periods. Last week, we outlined the contours of the debate: the major points of contention and the history of the debate and how it has come to its current – and I would argue, unsatisfactory – point.

This week, I want to start laying out my own sense of the arguments and what I see as a viable synthesis. I’ve opted to split this into three parts because I don’t just want to present my ‘answers’ but also really use this as an opportunity to contrast the two opposing camps (hoplite orthodoxy and hoplite heterodoxy) in the process of laying out where I think the firmest ground is, which as we’ll see is something of a blend of both. That is a larger project so I’ve opted to split it up. This post will cover the question of equipment, both the date of its emergence and its use and function (which have implications for chronology and tactics). Then the next post will cover the question of tactics, both in terms of how the phalanx might have functioned on an Archaic battlefield where light infantry and cavalry remained common and important and how it may have functioned in a late-Archaic or Classical battlefield when they were less central (but still at least sometimes present). Then, at long last, the final post will cover what I think are some of the social and political implications (some of which fall out of the first ideas), which is actually where I think some of the most explosive conclusions really are.

However before I launch into all of that, I want to be clear about the perspective I am coming from. On the one hand, I am an ancient historian, I do read ancient Greek, I can engage with the main bodies of evidence (literary, archaeological, representational) directly, as an expert. On the other hand, I am not a scholar of hoplites: this is my field, but not my sub-field. Consequently, I am assessing the arguments of folks who have spent a lot more time on hoplites than me and have thus read these sources more closely and more widely than I have. I can check their work, I can assess their arguments, but while I am going to suggest solutions to some of these quandaries, I want to be clear I am coming at this from a pose of intellectual humility in terms of raw command of the evidence.

(Although I should note that this post, which is on equipment, is basically square in my wheelhouse, so if I sound a bit more strident this week it is because while I am modestly familiar with hoplites, I am very familiar with hoplite (and other pre-gunpowder) equipment.)

On the other hand, I think I do come at the problem with two advantages, the value of which the reader may determine for themselves. The first of these is simply that I am not a scholar of hoplites and so I am not ‘in’ one of these ‘camps;’ an ‘outsider’s’ perspective – from someone who can still engage directly with the evidence – can be handy. The second of these is frankly that I have very broad training as a military historian which gives me a somewhat wider base of comparative evidence to draw on than I think has been brought to bear on these questions before. And that is going to be relevant, particularly this week, because part of my core argument here is that one mistake that has been repeated here is treating the hoplite phalanx as something special and unique, rather than as an interesting species of a common phenomenon: the shield wall, which has shared characteristics that occur in many cultures at many times.

As always, if you like what you are reading, please share it as I rely on word-of-mouth to find readers! And if you really like it, you can support this project over at Patreon; I don’t promise not to use the money to buy a full hoplite panoply, but I also don’t not promise to do that.1 And if you want updates whenever a new post appears, you can click below for email updates or follow me on Twitter and Bluesky for updates when posts go live and my general musings; I have largely shifted over to Bluesky (I maintain some de minimis presence on Twitter), given that it has become a much better place for historical discussion than Twitter.

The Emergence of the Hoplite Panoply

We need to start with three entwined questions: the nature of hoplite equipment, the dates at which it appears and the implications for the emergence of the ‘true’ phalanx (and its nature). As I noted in the first part, while the two ‘camps’ on hoplites consist of a set of linked answers to key questions, the strength of those linkages varies: in some cases, answer A necessitates answer B and in some cases it does not. In this case, the hoplite orthodox argument is that hoplite equipment was too cumbersome to fight much outside of the phalanx, which in turn (they argue) necessitates that the emergence of the full panoply means the phalanx must come with it. Consequently, hoplite orthodoxy assumes something like a ‘hoplite revolution’ (a phrase they use), where hoplites (and their equipment) and the phalanx emerge at more or less the same time, rapidly remaking the politics of the polis and polis warfare.

By contrast, hoplite heterodoxy unlinks these issues, by arguing that hoplite equipment is not that cumbersome and so need not necessitate the phalanx, while at the same time noting that such equipment emerged gradually and the full panoply appeared rather later than hoplite orthodoxy might suggest. But this plays into a larger argument that hoplites developed outside of close-order formations and could function just as well in skirmish or open-order environments.

As an aside, I want to clarify terminology here: we are not dealing, this week, with the question of ‘the phalanx.’ That term’s use is heavily subject to definition and we need to have that definitional fight out before we use it. So instead, we are going to talk about ‘close order‘ formations (close intervals (combat width sub-150cm or so), fixed positioning) as compared to ‘open order‘ (wide intervals (combat width 150cm+), somewhat flexible positioning) and skirmishing (arbitrary intervals, infinitely flexible positioning). And in particular, we’re interested in a big ‘family’ of close-order formations I am going to call shield walls, which is any formation where combatants stand close enough together to mutually support with shields (which is often not shoulder-to-shoulder, but often more like 1m combat widths). We will untangle how a phalanx fits into these categories later.

We can start, I think, with the easy part: when does hoplite equipment show up in the evidence record? This is the easier question because it can be answered with some decision by archaeology: when you have dated examples of the gear or representations of it in artwork, it exists; if you do not, it probably doesn’t yet. We should be clear here that we’re working with a terminus ante quem (‘limit before which’), which is to say our evidence will give us the latest possible date of something: if we find that the earliest, say, Archaic bell-cuirass we have is c. 720, then c. 721 is the last possible date that this armor might not yet have existed. But of course there could have been still earlier armors which do not survive: so new discoveries can shift dates back but not forward in time. That said, our evidence – archaeology of arms buttressed by artwork of soldiers – is fairly decent and it would be a major surprise if any of these dates shifted by more than a decade or two.

(An aside before I go further: I am focused here mostly on the when of hoplite equipment. There is also a really interesting question of the where of early hoplite equipment. Older hoplite orthodox scholars assumed hoplite equipment emerged in Greece ex nihilo and was peculiar to the Greeks, but this vision has been challenged and I think is rightly challenged (by, e.g. J. Brouwers, Henchmen of Ares (2013), reviewed favorably by Sean Manning here). In particular, the fact that a lot of our evidence comes from either Southern Italy or Anatolia is not always well appreciated in these debates. We don’t have the space to untangle those arguments (and I am not versed enough on the eastern side) but it is well worth remembering that Archaic Greece was not culturally isolated and that influences eastern and western are easy to demonstrate.)

And what our evidence suggests is that Anthony Snodgrass was right:2 hoplite equipment emerges piecemeal and gradually (and was adopted even more slowly), not all at once, and did so well before we have evidence by any other metric for fighting in the phalanx (which comes towards the end of the equipment’s developmental timeline).

The earliest piece of distinctively hoplite equipment that we see in artwork is the circular aspis, which starts showing up around c. 750, but takes a long time to displace other, lighter shield forms, only pushing out these other types in artwork (Dipylon shields with ‘carve outs’ on either side giving them a figure-8 design, squarish shields, center-grip shields) in the back half of the 600s. Metal helmets begin appearing first in the late 8th century (a couple of decades behind the earliest aspides), with the oldest type being the open-faced Kegelhelm, which evolved into the also open-faced ‘Illyrian’ helmet (please ignore the ethnic signifiers used on these helmet names, they are usually not historically grounded). By the early seventh century – so just a few decades later – we start to get our first close-faced helmets, the early Corinthian helmet type, which is going to be the most popular – but by no means only – helmet for hoplites for the rest of the Archaic and early Classical.

Via Wikipedia, a black-figure amphora (c. 560) showing a battle scene. The warriors on the left hold aspides and wear Corinthian helmets, while the ones on the right carry Dipylon shields (which look to have the two-points-of-contact grip the aspis does). A useful reminder that non-hoplite equipment was not immediately or even necessarily very rapidly displaced by what became the hoplite standard.

Coming fairly quickly after the appearance of metal helmets is metal body armor, with the earliest dated example (to my knowledge) still being the Argos cuirass (c. 720), which is the first of the ‘bell cuirass’ type, which will evolve into the later muscle cuirass you are likely familiar with, which appears at the tail end of the Archaic as an artistic elaboration of the design. Not everyone dons this armor right away to go by its appearance in artwork or prevalence in the archaeological record – adoption was slow, almost certainly (given the expense of a bronze cuirass) from the upper-classes downward.

Via Wikipedia, a picture of the Argos bell cuirass with its Kegelhelm-type helmet dated to c. 720. Apologies for the side-on picture, I couldn’t find a straight-on image that had a clean CC license.

This element of armor is eventually joined by quite a few ‘add-ons’ protecting the arms, legs, feet and groin, which also phase in (and in some cases phase out) over time. The first to show up are greaves (which are also the only armor ‘add on’ to really stick around) which begin to appear perhaps as early as c. 750 but only really securely (there are dating troubles with some examples) by c. 700. Small semi-circular metal plates designed to hang from the base of the cuirass to protect the belly and groin, ‘belly guards,’ start showing up around c. 675 or so (so around four decades after the cuirasses themselves), while other add-ons fill in later – ankle-guards in the mid-600s, foot-guards and arm guards (quite rare) in the late 600s. All of these but the greaves basically phase out by the end of the 500s.

Via Wikipedia, a late classical (c. 340-330) cuirass and helmet showing how some of this equipment will develop over time. The cuirass here is a muscle cuirass, a direct development from the earlier bell cuirass above. The helmet is a Chalcidian-type, which seems to have developed out of the Corinthian helmet as a lighter, less restrictive option in the fifth century.

Pteruges, those distinctive leather strips hanging down from the cuirass (they are part of the textile or leather liner worn underneath it) start showing up in the sixth century (so the 500s), about two centuries after the cuirasses themselves. There is also some reason to suppose that textile armor is in use as a cheaper substitute for the bronze cuirass as early as the seventh century, but it is only in the mid-sixth century that we get clear and unambiguous evidence for the classic stiff tube-and-yoke cuirass which by c. 500 becomes the most common hoplite armor, displacing the bronze cuirass (almost certainly because it was cheaper, not because it was lighter, which it probably wasn’t).

Via Wikipedia, from the Alexander Mosaic, a later Roman copy of an early Hellenistic mosaic (so quite a bit after our period), Alexander the Great shown wearing a tube-and-yoke cuirass (probably linen, clearly with some metal reinforcement), with visible pteruges around his lower waist (the straps there).
Note that there is a second quieter debate about the construction of the tube-and-yoke cuirass which we’re just going to leave aside for now.

Weapons are less useful for our chronology, so we can give them just a few words. Thrusting spears were, of course, a bronze age technology not lost to our Dark Age Greeks, but they persist alongside throwing spears, often with visible throwing loops, well into the 600s, even for heavily armored hoplite-style troops. As for swords, the Greek hoplites will have two types, a straight-edged cut-and-thrust sword of modest length (the xiphos) and a single-edged forward-curving chopper of a sword (the kopis), though older Naue II types – a continuation of bronze age designs – continue all the way into the 500s. The origin of the kopis is quite contested and meaningfully uncertain (whereas the xiphos seems a straight line extrapolation from previous designs), but need not detain us here.

So in summary, we do not see a sudden ‘revolution’ in terms of the adoption of hoplite arms, but rather a fairly gradual process stretched out over a century where equipment emerges, often vies with ‘non-hoplite’ equipment for prominence and slowly becomes more popular (almost certainly faster in some places and slower in others, though our evidence rarely lets us see this clearly). The aspis first starts showing up c. 750, the helmets a decade or two after that, the breastplates a decade or two after that, the greaves a decade or two after that, the other ‘add-ons’ a few decades after that (by which point we’re closing in on 650 and we have visual evidence of hoplites in close-order, albeit with caveats). Meanwhile adoption is also gradual: hoplite-equipped men co-exist in artwork alongside men with different equipment for quite a while, with artwork showing unbroken lines of uniformly equipped hoplites with the full panoply beginning in the mid-to-late 7th century, about a century to a century and a half after we started. It is after this, in the sixth century, that we see both pteruges – which will become the standard groin and upper-thigh protection – and the tube-and-yoke cuirass, a cheaper armor probably indicating poorer-but-still-well-to-do men entering the phalanx.

Via Wikipedia, the Chigi Vase (c. 650). Its hoplite scene is (arguably) the oldest clear scene we have of hoplites depicted fighting in close-order with overlapping shields, although the difficulty of depth (how closely is that second rank behind the first?) remains.

Consequently, the Archaic hoplite must have shared his battlefield with non-hoplites and indeed – and this is one of van Wees’ strongest points – when we look at Archaic artwork, we see that a lot. Just all over the place. Hoplites with cavalry, hoplites with light infantry, hoplites with archers (and, of course, hoplites with hoplites).

Of course that raises key questions about how hoplites function on two kinds of battlefield: an early battlefield where they have to function within an army that is probably still predominately lighter infantry (with some cavalry) and a later battlefield in which the hoplite is the center-piece of the army. But before we get to how hoplites fight together, we need to think a bit about what hoplite equipment means for how they fight individually.

Hoplight or Hopheavy?3

If the basic outline of the gradualist argument about the development of hoplite equipment is one where the heterodox camp has more or less simply won, the argument about the impact of that equipment is one in which the orthodox camp is determined to hold its ground.

To summarize the arguments: hoplite orthodoxy argues, in effect, that hoplite equipment was so heavy and cumbersome that it necessitated fighting in the phalanx. As a result orthodox scholars tend to emphasize the significant weight of hoplite equipment. Consequently, this becomes an argument against any vision of a more fluid battlefield, as orthodox scholars will argue hoplites were simply too encumbered to function in such a battlefield. This argument appears in WWoW, along with a call for more archaeology to support it, a call which was answered by the sometimes frustrating E. Jarva, Archaiologia on Archaic Greek Body Armour (1995) but it remains current. The latest attempt I am aware of to renew this argument is part of A. Schwartz, Reinstating the Hoplite (2013), 25-101.

By contrast, the heterodox camp argues that hoplite equipment was not that heavy or cumbersome and could be used outside of the phalanx (and indeed, was so used), but this argument often proceeds beyond this point to argue that hoplite equipment emerged in a fluid, skirmish-like battlefield and was, in a sense, at home in such a battlefield, as part of a larger argument about the phalanx being quite a lot less rigid and organized than the orthodox camp imagines it. Put another way at the extremes the heterodox camp argues there is nothing about hoplite equipment which would suggest it was designed or intended for a close-order, relatively rigid infantry formation. There’s a dovetailing here where this argument also gets drawn into arguments about ‘technological determinism’ – a rejection of the idea that any given form of ancient warfare, especially hoplite warfare, represented a technologically superior way of fighting or set of equipment – which also gets overstated to the point of suggesting weapon design doesn’t particularly matter at all.4

This is one of those areas where I will make few friends because I think both arguments are actually quite bad, a product of scholars who are extremely well versed in the ancient sources but who have relatively less training in military history more broadly and especially in pre-modern military history and especially especially pre-modern arms and armor.

So let me set some ‘ground rules’ about how, generally speaking, pre-modern arms and armor emerge. When it comes to personal combat equipment, (almost) no one in these periods has a military research and development department and equipment is rarely designed from scratch. Instead, arms and armor are evolving out of a fairly organic process, iterating on previous patterns or (more rarely) experimenting with entirely new patterns. This process is driven by need, which is to say arms and armor respond to the current threat environment, not a projection of a (far) future threat environment. As a result, arms and armor tend to engage in a kind of ‘antagonistic co-evolution,’ with designs evolving and responding to present threats and challenges. Within that space, imitation and adornment also play key roles: cultures imitate the weapons of armies they see as more successful and elites often use arms and armor to display status.

The way entire panoplies – that is full sets of equipment intended to be used together – tend to emerge is part of this process: panoplies tend to be pretty clearly planned or designed for a specific threat environment, which is to say they are intended for a specific role. Now, I want to be clear about these words ‘planned,’ ‘designed,’ or ‘intended’ – we are being quite metaphorical here. There is often no single person drafting design documents, rather we’re describing the outcome of the evolutionary process above: many individual combatants making individual choices about equipment (because few pre-modern armies have standardized kit) thinking about the kind of battle they expect to be in tend very strongly to produce panoplies that are clearly biased towards a specific intended kind of battle.

Which absolutely does not mean they are never used for any other kind of battle. The ‘kit’ of an 18th century line infantryman in Europe was designed, very clearly for linear engagements between large units on relatively open battlefields. But if what you had was that kit and an enemy who was in a forest or a town or an orchard or behind a fence, well that was the kit you had and you made the best of it you could.5 Likewise, if what you have is a hoplite army but you need to engage in terrain or a situation which does not permit a phalanx, you do not suffer a 404-TACTICS-NOT-FOUND error, you engage with the equipment you have. That said, being very good at one sort of fighting means making compromises (weight, mobility, protection, lethality) for other kinds of fighting, so two equipment sets might be situationally superior to each other (panoply A is better at combat situation Y, while panoply B is better at situation Z, though they may both be able to do either and roughly equally bad at situation X).

Via Wikimedia Commons, a black figure amphora (c. 510) showing a mythological scene (Achilles and Ajax) with warriors represented as hoplites, but carrying two spears (so they can throw one of them).

Naturally, in a non-standardized army, the individual combatants making individual choices about equipment are going to be considering the primary kind of battle they expect but also the likelihood that they are going to end up having to fight in other ways and so nearly all real-world panoplies (and nearly all of the weapons and armor they use) are not ultra-specialized hot-house flowers, but rather compromise designs. Which doesn’t mean they don’t have a primary kind of battle in mind! Just that some affordance has been made for other modalities of warfare.

If we apply that model to hoplite equipment, I think it resolves a lot of our quandaries reasonably well towards the following conclusion: hoplite equipment was a heavy infantry kit which was reasonably flexible but seems very clearly to have been intended, first and foremost, to function in close order infantry formations, rather than in fully individual combats or skirmishing.

Now let’s look at the equipment and talk about why I think that, starting with:

Overall Weight.

I am by no means the first person to note that the estimates for the hoplite’s ‘combat load’ (that is, what would be carried into battle, not on campaign) dating back more than a century are absurdly high; you will still hear figures of 33-40kg (72-90lbs) bandied about. These estimates predated a lot of modern archaeology and were consistently too high. Likewise, the first systematic effort to figure out, archaeologically, how heavy this equipment was, by Eero Jarva, skewed the results high in a consistent pattern.6 Equally, I think there is some risk of coming in a bit low, but frankly low-errors have been consistently less egregious than high-errors.7 Conveniently, I have looked at a lot of this material in order to get a sense of military gear in the later Hellenistic period, so I can quickly summarize and estimate from the archaeology.

Early Corinthian helmets can come in close to 2kg in weight, though later Greek helmets tend to be much lighter, between 1-1.5kg; we’re interested in the Archaic so the heavier number bears some weight. Greek bronze cuirasses as recovered invariably mass under 4.5kg (not the 4-8kg Jarva imagines), so we might imagine in original condition an upper limit around c. 5.5kg with most closer to 3.5-4.5kg, with probably 1-2kg for liner and pteruges; a tube-and-yoke cuirass in linen or leather (the former was probably more common) would have been only modestly lighter, perhaps 3.5-4kg (a small proportion of these had metal reinforcements, but these were very modest outside of Etruria).8 So for a typical load, we might imagine anywhere from 3.5kg to 6.5kg of armor, but 5kg is probably a healthy median value. We actually have a lot of greaves: individual pieces (greaves are worn in pairs) range from ~450 to 1,100g, with the cluster around 700-800g, suggesting a pair around 1.4-1.6kg; we can say around 1.5kg.

For weapons, the dory (the one-handed thrusting spear) has tips ranging from c. 150 to c. 400g and spear butts (the sauroter) around c. 150g, plus a haft that probably comes in around 1kg, for a c. 1.5kg spear. Greek infantry swords are a tiny bit smaller and lighter than what we see to their West, with a straight-edged xiphos probably having around 500g of metal (plus a hundred grams or so of organic fittings to the hilt) and a kopis a bit heavier at c. 700g. Adding suspension and such, we probably get to around 1.25kg or so.9

That leaves the aspis, which is tricky for two reasons. First, aspides, while a clear and visible type, clearly varied a bit in size: they are roughly 90cm in diameter, but with a fair bit of wiggle room and likewise the depth of the dish matters for weight. Second, what we recover for aspides are generally the metal (bronze) shield covers, not the wooden cores; these shields were never all-metal like you see in games or movies, they were mostly wood with a very thin sheet of bronze (c. 0.25-0.5mm) over the top. So you can shift the weight a lot by what wood you use and how thick the core is made (it is worth noting that while you might expect a preference for strong woods, the ancient preference explicitly is for light woods in shields).10 You can get a reconstruction really quite light (as light as 3.5kg or so), but my sense is most come in around 6-7kg, with some as heavy as 9kg.11 A bigger fellow might carry a bigger, heavier shield, but let’s say 6kg on the high side and call it a day.

How encumbered is our hoplite? Well, if we skew heavy on everything and add a second spear (for reasons we’ll get to next time), we come out to about 23kg – our ‘hopheavy.’ If we skew light on everything, our ‘hoplight’ could come to as little as c. 13kg while still having the full kit; to be frank I don’t think they were ever this light, but we’ll leave this as a minimum marker. For the Archaic period (when helmets tend to be heavier), I think we might imagine something like a typical single-spear, bronze-cuirass-wearing hoplite combat load coming in something closer to 18kg or so.12
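If it helps to see that arithmetic laid out, here is a minimal sketch in Python; the per-item figures are just the rough estimates from the paragraphs above (illustrative midpoints and extremes, not precise archaeological values), so treat the totals as order-of-magnitude checks rather than results:

```python
# Rough per-item mass estimates in kg, pulled from the ranges discussed above.
# These are illustrative figures for the sketch, not measured archaeological data.
KIT_ESTIMATES = {
    "helmet":  {"light": 1.5,  "typical": 2.0,  "heavy": 2.0},   # early Corinthian ~2kg
    "cuirass": {"light": 3.5,  "typical": 5.0,  "heavy": 6.5},   # incl. liner/pteruges
    "greaves": {"light": 1.4,  "typical": 1.5,  "heavy": 1.6},   # per pair
    "spear":   {"light": 1.5,  "typical": 1.5,  "heavy": 3.0},   # 'heavy' adds a second spear
    "sword":   {"light": 1.25, "typical": 1.25, "heavy": 1.25},  # xiphos/kopis plus suspension
    "aspis":   {"light": 3.5,  "typical": 6.0,  "heavy": 9.0},   # reconstruction weights vary widely
}

def combat_load(profile):
    """Sum the per-item estimates for a profile: 'light', 'typical' or 'heavy'."""
    return sum(item[profile] for item in KIT_ESTIMATES.values())

for profile in ("light", "typical", "heavy"):
    print(f"{profile:>7}: ~{combat_load(profile):.1f} kg")
# The totals land near 13kg, 17-18kg and 23kg respectively:
# the 'hoplight', typical and 'hopheavy' figures discussed above.
```

The point of the exercise is only that the totals land roughly where the prose says they do; shift any single item within its plausible range and the hoplite stays firmly in the same general weight class.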

And now we need to ask a second important question (which is frustratingly rarely asked in these debates – not never, but rarely): is that a lot? What we should not do is compare this to modern, post-gunpowder combat loads which assume very different kinds of combat that require very different sorts of mobility. What we should do is compare this to ancient and medieval combat loads to get a sense of how heavy different classes of infantry were. And it just so happens I am wrapping up a book project that involves computing that, many times for quite a few different panoplies. So here are some brief topline figures, along with the assigned combat role (light infantry, medium infantry, heavy infantry):

  • A fully plate-armored late 14th/early 15th century dismounted knight: 24-27kg (Heavy Infantry).13
  • Hop-heavy, c. 23kg
  • Roman Hastatus/Princeps of the Middle Republic: c. 20-24kg (Heavy Infantry)
  • Macedonian Phalangite: c. 20kg (Heavy Infantry)
  • Typical Hoplite, c. 18kg
  • Hellenistic Peltastai: c. 17-18kg (Heavy Infantry, modestly lighter than above)
  • Gallic Warrior: c. 14kg (Medium infantry, assumes metal helmet, textile armor so on the heavy side for the Gauls)
  • Hop-light, c. 13kg.
  • Iberian Warrior: c. 13kg (Medium infantry)
  • Celtiberian Warrior: c. 11.5kg (Medium Infantry)
  • Hellenistic thureophoroi: c. 10.5kg (Medium Infantry)
  • Roman veles: c. 8kg (Light infantry).14

Some observations emerge from this exercise immediately. First, combat role – which I’ve derived from how these troops are used and positioned in ancient armies, not from how much their kit weighs – clearly connects to equipment weight. There is a visible ‘heavy infantry range’ that starts around 15kg and runs upward, a clear ‘medium’ range of lightly-armored line-but-also-skirmish infantry from around 14kg to about 10kg and then everything below that are ‘lights’ that aren’t expected to hold part of the main infantry line.15

But I’d argue simply putting these weights together exposes some real problems in both the extreme orthodox and extreme heterodox views. On the one hand, the idea that hoplite equipment was so heavy that it could only function in the phalanx is clearly nonsense: the typical hoplite was lighter than the typical Roman heavy infantryman who fought in a looser, more flexible formation! Dismounted knights generally fought as close-order heavy infantrymen, but certainly could fight alone or in small groups and maneuver on the battlefield or over rough terrain and they are heavier still. So the idea that hoplites were so heavily equipped that they must fight in the extremely tight orthodox phalanx (we’ll come to spacing later, but they want these fellows crowded in) is silly.

On the other hand hoplites are very clearly typically heavy infantry. They are not mediums and they are certainly not lights. Can you ask heavy infantrymen to skirmish like lights or ask light infantrymen to hold positions like heavies? Well, you can and they may try; the results are generally awful (which is why the flexible ‘mediums’ exist in so many Hellenistic-period armies: they can do both things not-great-but-not-terribly).16 So do I think soldiers wearing this equipment generally intended to fight in skirmish actions or in truly open-order (note that Roman combat spacing, while loose by Greek standards, is still counting as ‘close order’ here)? Oh my no; across the Mediterranean, we see that the troops who intend to fight like that even a little are markedly lighter and those who specialize in it are much lighter, for the obvious reason that running around in 18kg is a lot more tiring than running around in 8kg or less.

So the typical hoplite was a heavy infantryman but not the heaviest of heavy infantry. If anything, he was on the low(ish) end of heavy infantry, probably roughly alongside Hellenistic peltastai (who were intended as lighter, more mobile phalangites)17 but still very clearly in the ‘heavy’ category. Heavier infantry existed, both in antiquity and in the middle ages and did not suffer from the lack of mobility often asserted by the orthodox crowd for hoplites.

But of course equipment is more than just weight, so let’s talk about the implications of some of this kit, most notably the aspis.

The Aspis

Once again, to summarize the opposing camps, the orthodox argument is that hoplite equipment – particularly the aspis (with its weight and limited range of motion) and the Corinthian helmet (with its limited peripheral vision and hearing) – make hoplites ineffective, almost useless, outside of the rigid confines of the phalanx, and in particular outside of the ‘massed shove othismos‘ phalanx (as opposed to looser phalanxes we’ll get into next time).

The moderate heterodox argument can be summed up as, “nuh uh.” It argues that the Corinthian helmet is not so restricting, the aspis not so cumbersome and thus it is possible to dodge, to leap around, to block and throw the shield around and generally to fight in a more fluid way. The ‘strong’ heterodox argument, linking back to development, is to argue that the hoplite’s panoply actually emerged in a more fluid, skirmish environment and the phalanx – here basically any close-order, semi-rigid formation fighting style – emerged only later, implying that the hoplite’s equipment must be robustly multi-purpose. And to be clear that I am not jousting with a straw man, van Wees claims, “the hoplite shield did not presuppose or dictate a dense formation but could be used to equally good effect [emphasis mine] in open-order fighting.”18

The short version of my view is that the moderate heterodox answer is correct and very clearly so, with both the orthodox and ‘strong’ heterodox arguments having serious defects.

But first, I want to introduce a new concept building off of the way we’ve already talked about how equipment develops, which I am going to call appositeness, which we can define as something like ‘situational effectiveness.’ The extreme orthodox and heterodox arguments here often seem to dwell – especially by the time they make it to publicly accessible books – in a binary can/cannot space: the hoplite can or cannot move quickly, can or cannot skirmish, can or cannot fight with agility and so on.

But as noted above real equipment is not ‘good’ or ‘bad’ but ‘situationally effective’ or not and I want to introduce another layer of complexity in that this situational effectiveness – this appositeness – is a spectrum, not a binary. Weapons and armor are almost invariably deeply compromised designs, forced to make hard trade-offs between protection, reach, weight and so on, and those tradeoffs are real, meaning that they involve real deterioration of the ability to do a given combat activity. But ‘less’ does not mean ‘none.’ So the question is not can/cannot, but rather how apposite is this equipment for a given function – how well adapted is it for this specific situation.

You can do almost any kind of fighting in hoplite armor, but it is very obviously adapted for one kind of fighting and was very obviously adapted for that kind of fighting when it emerged: fighting in a shield wall. And that has downstream implications of course: if the aspis is adapted for a shield wall, that implies that a shield wall already existed when it emerged (in the mid-to-late 8th century). Now we may, for the moment, leave aside if we ought to call that early shield wall a phalanx. First, we ought to talk about why I think the hoplite’s kit is designed for a shield wall but also why it could function (less effectively) outside of it.

So let’s talk about the form of the aspis. The aspis is a large round shield with a lightly dished (so convex) shape, albeit in this period with a flat rim-section that runs around the edge. The whole thing is typically about 90cm in diameter (sometimes more, sometimes less) and it is held with two points of contact: the arm is passed through the porpax, which sits at the center of mass of the shield and will sit against the inside of the elbow of the wearer, and the hand then holds the antelabe, a strap near the edge of the shield (so the wearer’s elbow sits just to the left of the shield’s center of mass and his hand just to the left of the shield’s edge). That explains the size: the shield pretty much has to have a radius of one forearm (conveniently a standard ancient unit called a ‘cubit’) and thus a diameter of two forearms, plus a bit for the rim, which comes to about 90cm.

Via Wikimedia Commons, a Corinthian black-figure alabastron (c. 590-570) showing hoplites in rows, which really demonstrates just how big the aspis can be. A 90cm shield is a really big shield although the artist here has certainly chosen to emphasize the size.

In construction, the aspis has, as mentioned, a wooden core made of a wood that offers the best strength at low weight (e.g. willow, poplar, not oak or ash) covered (at least for the better off hoplites) with a very thin (c. 0.25-0.5mm) bronze facing, which actually does substantially strengthen the shield. The result is, it must be noted, a somewhat heavy but very stout shield. The dished shape lets the user put a bit of their body into the hollow of the shield and creates a ledge around the rim which sits handily at about shoulder height, allowing the shield to be rested against the shoulder in a ‘ready’ position in situations where you don’t want to put the shield down but want to reduce the fatigue of holding it.

And here is where I come at this question a bit differently from my peers: that description to me demands comparison but the aspis is almost never compared to other similar shields. Two things, however, should immediately stand out in such a comparison. First, the aspis is an unusually, remarkably wide shield; many oblong shields are taller, but I can think of no shield-type that is on average wider than 90cm. The early medieval round shield, perhaps the closest comparison for coverage, averages around 75-85cm wide (with fairly wide variation, mind you), while the caetra, a contemporary ancient round shield from Spain, averages around 50-70cm. The famously large Roman scutum of the Middle Republic is generally only around 60cm or so wide (though it is far taller). So this is a very wide shield.

Via Wikimedia Commons, an Attic black-figure Kylix (c. 560) which gives us a good look at the two-point grip of the aspis (though note this aspis is something of a dipylon-hybrid with two small cutouts!).

Second, the two-points-of-contact strap-grip structure is a somewhat uncommon design decision (center-grip shields are, globally speaking, more common) with significant trade-offs. As an aside, it seems generally assumed – mistakenly – that ‘strap-grip’ shields dominated among European medieval shields, but this isn’t quite right: the period saw a fair amount of center-grip shields, two-point-of-contact shields (what is generally meant by ‘strap grip’) and off-center single-point of contact shields, with a substantial portion of the latter two supported by a guige or shield sling, perhaps similar to how we generally reconstruct the later Hellenistic version of the aspis, supported by a strap over the shoulder. So the pure two-point-of-contact porpax-antelabe grip of the aspis is actually fairly unusual but not entirely unique.

But those tradeoffs can help give us a sense of what this shield was for. On the one hand, two points of contact give the user a strong connection to the shield and make it very hard for an opponent to push it out of position (and almost impossible to rotate it): that shield is going to be where its wearer wants it, no matter how hard you are hitting it. It also puts the top of the dish at shoulder level, which probably helps keep the shield at ‘ready,’ especially because you can’t rest the thing on the ground without taking your arm out of it or kneeling.

On the other hand the two-point grip substantially reduces the shield’s range of motion and its potential to be used offensively. Now this is where the heterodox scholars will point to references in the ancient sources to war dances intended to mimic combat where participants jumped about or descriptions of combatants swinging their shield around and dodging and so on,19 and then on the other hand to the ample supply of videos showing modern reenactors in hoplite kit doing this.20 To which I first say: granted. Conceded. You can move the aspis with agility, you can hit someone with it, you can jump and dodge in hoplite kit. And that is basically enough to be fatal to the orthodox argument here.

But remember our question is appositeness: is this the ideal or even a particularly good piece of equipment to do that with? In short, the question is not ‘can you use an aspis offensively’ (at all) but is it better than other plausible designs at it. Likewise, we ask not ‘can you move the aspis around quickly’ but is it better at that than other plausible designs. And recall above, when the aspis emerged, it had competition: we see other shield designs in early Archaic artwork. There were alternatives, but the aspis ‘won out’ for the heavy infantryman and that can tell us something about what was desired in a shield.

In terms of offensive potential, we’re really interested in the range of strikes you can perform with a shield and the reach you can have with them. For the aspis, the wearer is limited to variations on a shove (pushing the shield out) and a ‘door swing’ (swinging the edge at someone) and both have really limited range. The body of the shield can never be more than one upper-arm-length away from the shoulder (c. 30cm or so)21 so the ‘shove’ can’t shove all that far and the rim of the shield can’t ever be more than a few centimeters in advance of the wearer’s fist. By contrast a center-grip shield can have its body shoved outward to the full extension of the arm (almost double the distance) and its rim can extend half the shield’s length in any direction from the hand (so striking with the lower rim of a scutum you can get the lower rim c. 60cm from your hand which is c. 60cm from your body, while a center-grip round shield of c. 80cm in diameter – smaller than the aspis – can project out 40cm from the hand which is 60cm from the body).

So that two-point grip that gives the shield such stability is dropping its offensive reach from something like 60 or 100cm (shove or strike) to just about 30 or 65cm or so (shove or strike).22 That is a meaningful difference (and you can see it represented visually in the diagram below). Again, this is not to say you cannot use the aspis offensively, just that this design prioritizes its defensive value over its offensive value with its grip and structure.
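As a rough back-of-the-envelope check on those figures, here is a minimal sketch in Python; the segment lengths (c. 30cm for the upper arm, c. 60cm for a fully extended arm) are the approximations used above, not anatomical measurements:

```python
# Approximate offensive-reach arithmetic for the two grip styles, in cm.
# All figures are the rough estimates from the text, for illustration only.
UPPER_ARM = 30   # shoulder to elbow: limits how far a strapped aspis body can travel
FULL_ARM = 60    # shoulder to fist at full extension

def aspis_reach(rim_past_fist=5):
    """Two-point (porpax/antelabe) grip: the shield body moves with the elbow,
    so a shove tops out at one upper-arm length; the rim barely leads the fist."""
    shove = UPPER_ARM
    edge_strike = FULL_ARM + rim_past_fist
    return shove, edge_strike

def center_grip_reach(shield_diameter):
    """Center grip: the shield body moves with the fist, and the rim can lead
    the fist by up to one shield radius."""
    shove = FULL_ARM
    edge_strike = FULL_ARM + shield_diameter / 2
    return shove, edge_strike

print("aspis (shove, strike):", aspis_reach())                      # (30, 65)
print("80cm center-grip (shove, strike):", center_grip_reach(80))   # (60, 100.0)
```

Nothing here is precise anatomy; it just makes visible how strapping the shield to the forearm caps both how far the shield body can be shoved forward and how far its rim can lead the fist.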

And then there is the question of coverage. Can you swing an aspis around, left to right, blocking and warding blows? Absolutely. Is it good at that? No. It is not and I am always surprised to see folks challenge this position because have you seen how a center-grip round shield is used? And to be clear, we know the Greeks could have used center-grip shields because center-grip dipylon shields show up in Archaic Greek artwork (though many dipylon shields have the same two-point grip-system as aspides as well): they had the other option and chose not to use it.23 With a two-point porpax-antelabe grip, the aspis’ center of mass can never be more than an upper-arm’s length (again, c. 30cm) away, which really matters given that the average male might be c. 45cm wide. In practice, of course, it is hard to get an elbow much further than the center of one’s chest and that is basically the limit for how far to the right the center of the aspis can be. Likewise, there’s a real limit to how far you can cock your elbow backwards.

By contrast, the center-point of a center-grip shield can be wherever your fist can be, which is a lot wider of a set of places: you can get a center-grip shield all the way to the far side of your body, you can pull it all the way in to your chest or push your entire arm’s length into the enemy’s space. Moreover, with just a single point of contact, these shields can rotate around your hand. You can see the difference in coverage arcs below, which honestly also understates how much easier it is to move a center-grip shield into some of these extreme positions because it isn’t strapped to your arm.

Note: We’re going to return to the ‘side on’ vs. ‘straight on’ question in a future post, but I’ve provided both for now. The heterodox school (van Wees, op. cit., 168-9) supposes a side-on stance but in practice hoplites must have been transitioning frequently between side-on and straight-on simply to use their weapons (you bring your back leg forward when striking to get your whole body into the blow) or to march (these guys did not run sideways into battle, even if they might turn sideways as they reached the enemy). However I will note that you can see very clearly that it is only in the ‘straight on’ (or nearly so) position that Thucydides’ statement about the tendency of hoplites to drift right-ward to seek to protect their unprotected right side makes any sense (Thuc. 5.71.1), something Thucydides says “all armies do so” (ἅπαντα τοῦτο) and so must have been a general feature of the warfare he knew.
Note also: the semi-circles are the exact same diameter, to give you a sense of just how much further a center-grip shield can project. And in our best reconstructions of shielded combat, you do often want to be pushing the shield into your enemy’s space to block them off, to get contact with their shield (to push it out of position) or to strike with the shield. As you can see, the aspis can barely get beyond the c. 60cm circle, while the center-grip shield can be pushed much further out – its center can be as far out as the edge of the aspis.

So the aspis‘ design has significantly compromised offensive potential, mobility, maneuverability and the range of coverage on the sides. What it gains is a stout design, a very stable grip and an unusually high amount of width and we know they chose these trade-offs because the aspis replaced other shield designs that were present in the Archaic, at least for this kind of combatant (the emerging hoplite). The question then is why and here certainty is impossible because the Greeks do not tell us, but we can approach a plausible answer to the question in two ways: we can ask in what situation would those positive qualities – stoutness, stability and width – be more valuable or we could look at how similar shields (large round shields) are used in other cultures.

A very wide shield that covers a lot of space in which the combatant is not (because it is much wider than the combatant is) is not particularly useful in skirmishing or open-order fighting (cultures that do that kind of fighting tend to drift towards either large oblong shields or small buckler-style shields that don’t waste weight covering area the combatant doesn’t occupy). But that extra width is really handy if the goal is to create an unbroken horizontal line of protection without having to crowd so tightly with your buddies that you can’t move effectively. A hoplite can ‘join shields’ with his mates even with a file width of 90cm, which is certainly closed-order, but not absurdly tight – a Roman with a scutum has to pull in to about 60-65cm of file width to do the same. Where might you value stoutness over mobility or range of motion? Well, under conditions where you expect most strikes to come from a single direction (in front of you), you are more concerned about your ability to meet those strikes effectively than your ability to cover angles of attack that aren’t supposed to be threatened in the first place – such as, for instance, a situation where that space is occupied by a buddy who also has a big shield. In particular, you might want this if you are more worried about having your shield shifted out of position by an enemy – a thing that was clearly a concern24 – than you are about its offensive potential or rapid mobility (or its utility for a shoving match). By contrast, in open order or skirmishing, you need to be very concerned about an attack towards your flanks and a shield which can rapidly shift into those positions is really useful.

What is the environment where those tradeoffs make sense? A shield wall.

Alternately, we could just ask, “in what contexts in other societies or other periods do we tend to see large, solid and relatively robust round shields” and the answer is in shield walls. Or we might ask, “where do we see infantry using two-point grip shields (like some kite shields, for instance)” and find the answer is in shield walls. Shields that are like the aspis – robust, either wide or two-point gripped or both, and used by infantry (rather than cavalry) – tend in my experience to be pretty strongly connected to societies with shield wall tactics.

I thus find myself feeling very confident that the aspis was designed for a shield wall context. Which, given how weapons develop (see above) would suggest that context already existed to some degree when the aspis emerged in the mid-to-late 8th century, although we will leave to next time working out what that might have looked like.

A Brief Digression on the Corinthian Helmet

We can think about the Corinthian helmet in similar terms. Victor Davis Hanson, who can only compare Corinthian helmets to modern combat helmets – because again a huge problem in this debate is that both sides lack sufficient pre-modern military comparanda – suggested that hoplites wearing the helmet could “scarcely see or hear” which essentially forced hoplites into a dense formation. “Dueling, skirmishing and hit-and-run tactics were out of the question with such headgear.”25 The heterodox response is to dispute the degree of those trade-offs, arguing that the helmets don’t inhibit peripheral vision or hearing and are not as heavy as the orthodox camp supposes.26 That dispute matters quite a lot because again, as we’ll get to, the ‘strong’ heterodox position is that hoplite equipment didn’t develop for or in a shield-wall formation, but for skirmishing, so if the Corinthian helmet is a bad helmet for skirmishing, that would make its emergence rather strange; we’ll come back to the question of early Archaic warfare later. Strikingly, there is a lot of effort in these treatments to reason from first principles or from other later ancient Greek helmets but the only non-Greek comparandum that is regularly brought up is the open-faced Roman Montefortino helmet – other closed-face helmets are rarely mentioned.

Via Wikimedia Commons, a relatively early design (c. 630) Corinthian helmet, showing the minimal nose protection (albeit there was some more here before it was broken off) and very wide gap over the face. The punch-holes are presumably to enable the attachment of a liner.
Via Wikimedia Commons, a sixth century Corinthian helmet (so the ‘middle’ stage of development) – the face gap is not yet fully closed, but we have the fully developed nose guard and more curved overall shape.

So does the Corinthian helmet limit vision? It depends on the particular design but a general answer is ‘perhaps a bit, but not an enormous amount.’ The eye-slits in original Corinthian helmets (as opposed to sometimes poorly made modern replicas) are fairly wide and the aperture is right up against the face, so you might lose some peripheral vision, but not a very large amount; the Corinthian helmet design actually does a really good job of limiting the peripheral vision tradeoff (but it is accepting a small tradeoff). The impact to hearing is relatively more significant, but what I’ve heard from reenactors more than once is that it only gets bad if you make noise (which then is transmitted through the helmet), but that can include heavy breathing.27 Of course the best evidence that the impact to hearing was non-trivial (even if the wearer is still able to hear somewhat) is that later versions of the helmet feature cutouts for the ears. Breathing itself is a factor here: the width of the mouth-slit varies over time (it tends to close up as we move from the Archaic towards the Classical), but basically any obstruction of the front of the face with a helmet is going to be felt by the wearer when they are engaged in heavy exertion: if you are running or fighting your body is going to feel just about anything that restricts its ability to suck in maximum air.

Via Wikipedia, a 13th century German great helm, showing the narrowness of the vision-slits and the breaths (breathing holes).

But those drawbacks simply do not get us to the idea that this was a helmet which could only be used in a tight, huddled formation for the obvious reason that other, far more enclosed helmets have existed at other points in history and been used for a wider range of fighting. 13th century great helms also have no ear cutouts, feature even narrower vision-slits and use a system of ‘breaths’ (small circular holes, typically in patterns) to enable breathing, which restrict breathing more than at least early Corinthian helmets (and probably about the same amount as the more closed-front late types). Visored bascinets, like the iconic hounskull bascinet design likewise lack ear-cut outs, have breaths for air and notably move the eye aperture forward away from the eyes on the visor, reducing the area of vision significantly as compared to a Corinthian helmet. And yet we see these helmets used by both heavy infantry (dismounted knights and men-at-arms) and cavalry in a variety of situations including dueling.28

Via Wikipedia, a hounskull visored bascinet. The visor was attached via hinges so that it could be swung open (some designs have them swing upwards, others have two points of contact and swing horizontally). The large bulge beneath the eyes served in part to make breathing easier, creating a larger air pocket and more space for the breaths.

Which puts us in a similar place as with the aspis: the Corinthian helmet is a design that has made some trade-offs and compromises. It is capable of a lot – the idea that men wearing these were forced to huddle up because they couldn’t see or hear each other is excessive (and honestly absurdly so) – but the choice has clearly been made to sacrifice a bit of lightness, some vision, a fair bit of hearing and some breathing in order to squeeze out significantly more face and neck protection (those cheek pieces generally descend well below the chin, to help guard the neck that Greek body armor struggled to protect adequately). That is not a set of compromises that would make sense for a skirmisher who needs to be able to see and hear with maximum clarity and who expects to be running back and forth on the battlefield for an extended period – and indeed, skirmishing troops often forgo helmets entirely. When they wear them, they are to my knowledge invariably open-faced.

Via Wikimedia Commons, an early classical (and thus ‘late’) Corinthian helmet design (c. 475). The face has almost totally closed off and the eye-gaps have narrowed, although there is still a decently wide cutout to avoid harming peripheral vision.

Instead, when we see partially- or fully-closed-face helmets, we tend to see them in basically two environments: heavy cavalry and shield walls.29 Some of this is doubtless socioeconomic: the cavalryman has the money for expensive, fully-enclosed helmets while the poorer infantrymen must make do with less. Whereas I think the aspis was clearly developed to function in a shield wall (even though it can be used to do other things) I am less confident on the Corinthian helmet; I could probably be persuaded of the idea that this began as a cavalryman’s heavy helmet, only to be adopted by the infantry because its emphasis on face-protection was so useful in the context of a shield wall clashing with another shield wall. What it is very obviously not is a skirmisher’s helmet.30

Conclusions

As you have probably picked up when it comes to equipment, I find the ‘orthodox’ position unacceptable on almost every point, but equally I find the ‘strong’ heterodox position unpersuasive on every point except the ‘soft’ gradualism in development (the Snodgrass position) which I think has decisively triumphed (some moderate heterodox objections to orthodoxy survive quite well, however). Of the entire debate, this is often the part that I find most frustrating because of the failure of the scholars involved to really engage meaningfully with the broader field of arms-and-armor study and to think more comparatively about how arms and armor develop, are selected and are used.

On the one hand, the idea that the hoplite, in full or nearly-full kit, could function as a skirmisher, “even in full armour, a hoplite was quite capable of moving back and forth across the battlefield in the Homeric manner” or that the kit could be “used to equally good effect in open-order fighting” is just not plausible and mistakes capability for appositeness.31 Hoplite equipment placed the typical hoplite very clearly into the weight-range of ‘heavy infantry,’ by no means the heaviest of heavy infantry (which fatally undermines the ‘encumbered hoplite’ of the orthodox vision) but also by no means light infantry or even really medium infantry except if substantial parts of the panoply were abandoned. Again, I could be sold on the idea that the earliest hoplites were, perhaps, ‘mediums’ – versatile infantry that could skirmish (but not well) and fight in close order (but not well) – but by the early 600s when the whole panoply is coming together it seems clear that the fellows with the full set are in the weight range for ‘heavies.’ We’ll talk about how we might imagine that combat evolving next time.

Moreover, key elements of hoplite equipment show a clear effort to prioritize protection over other factors: shield mobility, offensive potential, a small degree of vision, a larger but still modest degree of hearing, a smaller but still significant degree of breathing, which contributes to a larger tradeoff in endurance (another strike against the ‘skirmishing hoplite’). The environment where those tradeoffs all make sense is the shield wall. Which in turn means that while the ultra-rigid orthodox vision where these soldiers cannot function outside of the phalanx has to be abandoned – they’re more versatile than that – the vision, propounded by van Wees, that the hoplite worked just as well in open-order is also not persuasive.

Instead, it seems most plausible by far to me that this equipment emerged to meet the demands of men who were already beginning to fight in shield walls, which is to say relatively32 close-order formations with mutually supporting33 shields probably already existed when the hoplite panoply began to emerge in the mid- and late-8th century.

And that’s where we’ll go next time: to look at tactics both in the Archaic and Classical periods.

Did they shove?

(No, they did not shove)

Classic Mac OS System 1 Patterns

Nov. 21st, 2025 08:12 pm
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

Paul Smith made these Classic Mac OS System 1 Patterns, which are super tiny (in size) graphics that work with background-repeat to make old school “textures”. They have an awfully nostalgic look for me, but they are so simple I can see them being useful in modern designs as well.

[syndicated profile] in_the_pipeline_feed

Here’s a phenomenon - yet another one - that never crossed my mind before. It’s long been known that enzymes that catalyze proteolysis (cleavage of peptide bonds) can, under certain circumstances, catalyze the reverse reaction of peptide bond formation. Folks who have had to think about chemical kinetics will immediately realize that those conditions would include high concentrations of the two cleavage products and low concentrations of the longer protein substrate, an example of Le Chatelier’s principle in action. It’s also an example of the principle of Microscopic Reversibility in action: the chemical steps are the same whether you run things forwards or backwards. That doesn’t mean those steps are always thermodynamically feasible, of course - the energies involved (with both enthalpic and entropic contributions) might be too great a barrier to run backwards very easily, as in unburning a piece of wood back from a cloud of soot and hot gases. Fire is not a good example of an equilibrium process, but peptide bond breakage and formation is a lot closer to balancing on a knife edge than combustion is.

This recent preprint suggests, though, that this “reverse proteolysis” is happening under physiological conditions, particularly with cysteine-based cathepsin enzymes. And it’s not just re-formation of the proteins that have just been cleaved (although that must be happening, too). No, you get mix-and-match combinations of various proteins to generate species that were certainly never coded for in the genome. And on top of that, you can even spot chimeras between human proteins and bacterial or viral ones (!)

Now, some species of this sort have been reported before (in reports going back to at least 2004) but this new work suggests that it’s a much more common process than anyone realized, one with implications for immunity and perhaps other cellular processes as well. Recall that antigen proteins are displayed to the immune system via the major histocompatibility complex, and that these antigens are cleaved from larger proteins via degradation. Displaying weirdo newly assembled protein sequences from this chemical splicing route could cause some real effects downstream. This could, for example, be one of the links between prior infections and later autoimmune disease, through those human/pathogen hybrid proteins.

The authors here shore up that connection by showing that auto-antigenic peptides implicated in Type I diabetes can be produced by cathepsins running in reverse, and that proteins that have been modified by citrullination (on arginine residues) seem to undergo the process more readily. That sort of Arg modification is already known to be over-represented in autoimmune antigens. In addition, the cathepsin enzyme subtypes that are most dominant in immune tissues (such as inside macrophages) seem to be the best at producing such splicing hybrids. These reverse reactions are also more prevalent at closer to neutral pH, which suggests that lysosomal dysfunction (where cathepsins and other enzymes normally work in an acidic environment) might be a source of increased neo-peptides.

Overall, it seems that we’re going to have to learn to deal with these species, and to study them in the context of both normal conditions and in infectious disease. Acute viral infections might well be producing waves of human/viral protein hybrid species, and we can’t expect them all to be silent! 

[syndicated profile] frontendmasters_feed

Posted by Sunkanmi Fafowora

3D CSS has been around for a while. The earliest implementation of 3D CSS you can find is in one of W3C’s earliest specifications on 3D transforms in 2009. That’s exactly 15 years after CSS was introduced to the web in 1994, so it’s a really long time!

A common pattern you would see in 3D transformations is the layered pattern, which gives you the illusion of 3D CSS, and this is mostly used with text, like this demo below from Noah Blon:

Or in Amit Sheen’s demos like this one:

The layered pattern, as its name suggests, stacks multiple items into layers, adjusting the Z position and colors of each item with respect to their index value in order to create an illusion of 3D.

Yes, most 3D CSS are just illusions. However, did you know that we can apply the same pattern, but for images? In this article, we will look into how to create a layered pattern for images to create a 3D image in CSS.

In order for you to truly understand how 3D CSS works, here’s a quick list of things you need to understand before proceeding:

  1. How the CSS perspective works
  2. A good understanding of the x, y, and z coordinates
  3. Sometimes, you have to think in cubes (bonus)

This layered pattern can be an accessibility problem because duplicated content is read as many times as it’s repeated. That’s true for text; for images, though, this can be circumvented by leaving all but the first image’s alt attribute empty, or by setting all the duplicated divs to aria-hidden="true" (which also works for text). This hides the duplicated content from screen reader users.

The HTML

Let’s start with the basic markup structure. We’re linking up an identical <img> over and over in multiple layers:

<div class="scene"> 
  <div class="image-container">
    <div class="original">
      <img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt="Gradient colored image with all colors present starting from the center point">
    </div>
    
    <div class="layers" aria-hidden="true">
      <div class="layer" style="--i: 1;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 2;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 3;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 4;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      <div class="layer" style="--i: 5;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
      ...
      <div class="layer" style="--i: 35;"><img src="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" alt=""></div>
    </div>
  </div>
</div>

The outermost <div> has the “scene” class and wraps all the layers. Each layer <div> has an index custom property --i set in its style attribute. This index value is very important, as we will use it later to calculate positioning values. Notice how the <div> with class “original” doesn’t have the aria-hidden attribute? That’s because we want the screen reader to read that first image and not the rest.

We’re using the style indexing approach and not sibling-index() / sibling-count() because they are not yet supported across all major browsers. In the future, with better support, we could remove the inline styles and use sibling-index() wherever we’re using --i in calculations, and sibling-count() wherever we need the total (35 in this blog post).
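For the curious, here is a rough sketch of how that future version of the layer rule might look, untested and assuming sibling-index() and sibling-count() behave as currently specced:

.layer {
  /* hypothetical: no inline --i custom properties needed once these functions are supported */
  transform: translateZ(calc(sibling-index() * var(--layer-offset)));
  --n: calc(sibling-index() / sibling-count());
  filter:
    brightness(calc(0.4 + var(--n) * 0.8))
    saturate(calc(0.8 + var(--n) * 0.4));
}

Since the .layer divs are the only children of .layers, sibling-index() would line up with the 1-based --i values used here.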

It’s important we start with a container for our scene as well because we will apply the CSS perspective property, which controls the depth of our 3D element.

The CSS

Setting the scene, we use a 1000px value for the perspective. A large perspective value is typically good, so the 3D element won’t be too close to the user, but feel free to still use any perspective value of your choice.

We then set all the elements, including the image container <div>s to have a transform-style of preserve-3d. This allows the stacked items to be visible in the 3D space.

.scene {
  perspective: 1000px;
}

.scene * {
  transform-style: preserve-3d;
}

Everything looks a little janky, but that’s expected until we add a bit more CSS to make it look cool.

We need to calculate the offset distance between each of the stacked layers, that is, how far apart the layers sit from one another, which determines whether they appear as one whole piece or as completely separate slices.

Illustration of layered blocks showing layer offsets in a 3D perspective with a gradient background.

On the image container, we set two variables: the offset distance (2.5px here) and the total layer count. These will be used to calculate each layer’s offset on the Z-axis and its color, so the stack appears as a single, whole 3D element.

.image-container{
  ...
  --layers-count: 35;
  --layer-offset: 2.5px;
}

That’s not all, we now calculate the distance between each layer using the index --i and the offset on the translateZ() function inside the layer class:

.layer {
  transform: translateZ(calc(var(--i) * var(--layer-offset)));
  ...
}

The next step is to use a normalized value (because the index would be too big) to calculate how dark and saturated we want each image to be, so it appears darker in 3D as it goes down in index value. i.e:

.layer {
  ...
  --n: calc(var(--i) / var(--layers-count));
  filter: 
    brightness(calc(0.4 + var(--n) * 0.8))
    saturate(calc(0.8 + var(--n) * 0.4));
}

I’m multiplying --n by 0.8 and adding 0.4. If --n is 2/35, for example, our brightness value works out to about 0.45 (0.4 + 2/35 × 0.8) and the saturation to about 0.82. If --n is 3/35, the brightness value is about 0.47, while the saturation is about 0.83, and so on.

And that’s it! We’re all set! (sike! Not yet).

We just need to set the position property to absolute and inset to be 0 for all the layers so they can be on top of each other. Don’t forget to set the height and width to any desired length, and the position property of the image-container class to relative while you’re at it. Here’s the code if you’ve been following:

.image-container {
  position: relative;
  width: 300px;
  height: 300px;
  transform: rotateX(20deg) rotateY(-10deg);
  --layers-count: 35;
  --layer-offset: 2.5px;
}

.layers,
.layer {
  position: absolute;
  inset: 0;
}

.layer {
  transform: translateZ(calc(var(--i) * var(--layer-offset)));
  --n: calc(var(--i) / var(--layers-count));
  filter: 
    brightness(calc(0.4 + var(--n) * 0.8))
    saturate(calc(0.8 + var(--n) * 0.4));
}

Here’s a quick breakdown of the mathematical calculations going on:

  • translateZ() spaces the stacked items along the Z-axis, offsetting each one by its index multiplied by --layer-offset. That separation between layers is our main 3D effect here.
  • --n is used to normalize the index to a 0-1 range
  • filter is then used with --n to calculate the saturation and brightness of the 3D element

That’s actually where most of the logic lies. This next part is just basic sizing, positioning, and polish.

.layer img {
  width: 100%;
  height: 100%;
  object-fit: cover;
  border-radius: 20px;
  display: block;
}

.original {
  position: relative;
  z-index: 1;
  width: 18.75rem;
  height: 18.75rem;
}

.original img {
  width: 100%;
  height: 100%;
  object-fit: cover;
  border-radius: 20px;
  display: block;
  box-shadow: 0 20px 60px rgba(0 0 0 / 0.6);
}

Check out the final result. Doesn’t it look so cool?!

We’re not done yet!

Who’s ready for a little bit more interactivity? 🙋🏾 I know I am. Let’s add a rotation animation to emphasize the 3D effect.

.image-container {
  ...
  animation: rotate3d 8s ease-in-out infinite alternate; 
}

@keyframes rotate3d {
  0% {
    transform: rotateX(-20deg) rotateY(30deg);
  }
  100% {
    transform: rotateX(-15deg) rotateY(-40deg);
  }
}

Our final result looks like this! Isn’t this so cool?

Bonus: Adding a control feature

Remember how this article is about images and not gradients? Although the image used was an image of a gradient, I’d like to take things a step further by being able to control things like perspective, layer offset, and its rotation. The bonus step is adding a form of controls.

We first need to add the boilerplate HTML and styling for the controls:

 <div class="controls">
  <h3>3D Controls</h3>
  <label>Perspective: <span id="perspValue">1000px</span></label>
  <input type="range" id="perspective" min="200" max="2000" value="1000">

  <label>Layer Offset: <span id="offsetValue">2px</span></label>
  <input type="range" id="offset" min="0.5" max="5" step="0.1" value="2">

  <label>Rotate X: <span id="rotXValue">20°</span></label>
  <input type="range" id="rotateX" min="-90" max="90" value="20">

  <label>Rotate Y: <span id="rotYValue">-10°</span></label>
  <input type="range" id="rotateY" min="-90" max="90" value="-10">

  <div class="image-selector">
    <label>Try Different Images:</label>
    <button data-img="https://images.unsplash.com/photo-1579546929518-9e396f3cc809" class="active">Abstract Gradient</button>
    <button data-img="https://images.unsplash.com/photo-1506905925346-21bda4d32df4">Mountain Landscape</button>
    <button data-img="https://images.unsplash.com/photo-1518791841217-8f162f1e1131">Cat Portrait</button>
    <button data-img="https://images.unsplash.com/photo-1470071459604-3b5ec3a7fe05">Foggy Forest</button>
  </div>
</div>

This would give us access to a host of images to select from, and we would also be able to rotate the main 3D element as we please using <input> type range and <button>s.

The CSS is to add basic styles to the form controls. Nothing too complicated:

.controls {
  display: flex;
  flex-direction: column;
  justify-content: space-between;
  position: absolute;
  top: 1.2rem;
  right: 1.2rem;
  background: rgba(255, 255, 255, 0.1);
  backdrop-filter: blur(10px);
  padding: 1.15rem;
  height: 20rem;
  border-radius: 10px;
  overflow-y: scroll;
  color: white;
  max-width: 250px;
}

.controls h3 {
  margin-bottom: 15px;
  font-size: 1.15rem;
}

.controls label {
  display: flex;
  justify-content: space-between;
  gap: 0.5rem;
  margin: 15px 0 5px;
  font-size: 0.8125rem;
  font-weight: 500;
}

.controls input {
  width: 100%;
}

.controls span {
  font-weight: bold;
}

.image-selector {
  margin-top: 20px;
  padding-top: 20px;
  border-top: 1px solid rgb(255 255 255 / 0.2);
}

.image-selector button {
  width: 100%;
  padding: 8px;
  margin: 5px 0;
  background: rgb(255 255 255 / 0.2);
  border: 1px solid rgb(255 255 255 / 0.3);
  border-radius: 5px;
  color: white;
  cursor: pointer;
  font-size: 12px;
  transition: all 0.3s;
}

.image-selector button:hover {
  background: rgb(255 255 255 / 0.3);
}

.image-selector button.active {
  background: rgb(255 255 255 / 0.4);
  border-color: white;
}

This creates the controls like we want. We haven’t finished, though. Try making some adjustments, and you’ll notice that they don’t do anything. Why? Because we haven’t applied any JS!

The code below wires up the rotation values on the x and y axes, the layer offset, and the perspective. It also swaps the image for any of the other 3 specified:

const scene = document.querySelector(".scene");
const container = document.querySelector(".image-container");

document.getElementById("perspective").addEventListener("input", (e) => {
  const val = e.target.value;
  scene.style.perspective = val + "px";
  document.getElementById("perspValue").textContent = val + "px";
});

document.getElementById("offset").addEventListener("input", (e) => {
  const val = e.target.value;
  container.style.setProperty("--layer-offset", val + "px");
  document.getElementById("offsetValue").textContent = val + "px";
});

document.getElementById("rotateX").addEventListener("input", (e) => {
  const val = e.target.value;
  updateRotation();
  document.getElementById("rotXValue").textContent = val + "°";
});

document.getElementById("rotateY").addEventListener("input", (e) => {
  const val = e.target.value;
  updateRotation();
  document.getElementById("rotYValue").textContent = val + "°";
});

function updateRotation() {
  const x = document.getElementById("rotateX").value;
  const y = document.getElementById("rotateY").value;
  container.style.transform = `rotateX(${x}deg) rotateY(${y}deg)`;
}

// Image selector
document.querySelectorAll(".image-selector button").forEach((btn) => {
  btn.addEventListener("click", () => {
    const imgUrl = btn.dataset.img;

    // Update all images
    document.querySelectorAll("img").forEach((img) => {
      img.src = imgUrl;
    });

    // Update active button
    document
      .querySelectorAll(".image-selector button")
      .forEach((b) => b.classList.remove("active"));
    btn.classList.add("active");
  });
});

Plus we pop into the CSS and remove the animation, as we can control it ourselves now. Voilà! We have a fully working demo with various form controls and an image change feature. Go on, change the image to something else to view the result.

Bonus: 3D CSS… Steak

Using this same technique, you know what else we can build? A 3D CSS steak!

It’s currently in black & white. Let’s make it show some color, shall we?

Summary of things I’m doing to make this work (a condensed sketch follows this list):

  • Create a scene, adding the CSS perspective property
  • Duplicate a single image into separate containers
  • Apply transform-style’s preserve-3d on all divs to position them in the 3D space
  • Calculate the normalized value of all items by dividing the index by the total number of images
  • Calculate the brightness of each image container by multiplying the normalized value by 0.9
  • Set translateZ() based on the index of each element multiplied by an offset value. i.e in my case, it is 1.5px for the first one and 0.5px for the second, and that’s it!!
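If you’d rather see that recipe in one place than dig through the demo, here is a rough, condensed sketch of it (the class names and exact numbers are mine, not lifted from the steak demo):

.scene {
  perspective: 1000px;
}

.scene * {
  transform-style: preserve-3d;
}

.slice {
  position: absolute;
  inset: 0;
  /* push each duplicated image along Z by its index times an offset */
  transform: translateZ(calc(var(--i) * 1.5px));
  /* normalize the index, then dim the lower slices */
  --n: calc(var(--i) / var(--count));
  filter: brightness(calc(var(--n) * 0.9));
}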

That was fun! Let me know if you’ve done this or tried to do something like it in your own work before.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Some time ago, I discussed the technique of reserving a block of address space and committing memory on demand. In the code, I left the exercise

    // Exercise: What happens if the faulting memory access
    // spans two pages?

As far as I can tell, nobody has addressed the exercise, so I’ll answer it.

If the faulting memory access spans two pages, neither of which is present, then an access violation is raised for one of the pages. (The processor chooses which one.) The exception handler commits that page and then requests execution to continue.

When execution continues, it tries to access the memory again, and the access still fails because one of the required pages is missing. But this time the faulting address will be an address on the missing page.

In practice, what happens is that the access violation is raised repeatedly until all of the problems are fixed. Each time it is raised, an address is reported which, if repaired, would allow the instruction to make further progress. The hope is that eventually, you will fix all of the problems,¹ and execution can resume normally.
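To make that concrete, here is a minimal sketch of the idea (my own illustration, not the code from the earlier article) using a vectored exception handler that commits the faulting page and retries. If the access straddles a second missing page, the instruction simply faults again with the other page’s address and the handler runs once more:

#include <windows.h>

// Assumed globals describing a region reserved earlier with
// VirtualAlloc(NULL, g_reserveSize, MEM_RESERVE, PAGE_READWRITE).
static BYTE* g_base;
static SIZE_T g_reserveSize;

LONG CALLBACK CommitOnDemandHandler(EXCEPTION_POINTERS* pointers)
{
    EXCEPTION_RECORD* record = pointers->ExceptionRecord;
    if (record->ExceptionCode == EXCEPTION_ACCESS_VIOLATION)
    {
        // For access violations, ExceptionInformation[1] is the faulting address.
        BYTE* address = (BYTE*)record->ExceptionInformation[1];
        if (address >= g_base && address < g_base + g_reserveSize)
        {
            // Commit just the page containing the faulting address,
            // then retry the faulting instruction.
            if (VirtualAlloc(address, 1, MEM_COMMIT, PAGE_READWRITE))
            {
                return EXCEPTION_CONTINUE_EXECUTION;
            }
        }
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Installed once at startup:
//   AddVectoredExceptionHandler(1, CommitOnDemandHandler);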

Bonus chatter: For the x86-64 and x86-32 instruction sets, I think the largest number of pages required by a single instruction is six, for the movsw instruction. This reads two bytes from ds:rsi/esi and writes them to es:rdi/edi. If both addresses straddle a page, that’s four data pages. And the instruction itself is two bytes, so that can straddle two code pages, for a total of six. (There are other things that could go wrong, like an LDT page miss, but those will be handled in kernel mode and are not observable in user mode.)

Bonus exercises: I may as well answer the other exercises on that page. We don’t have to worry about integer overflow in the calculation of sizeof(WCHAR) * (Result + 1) because we have already verified that Result is in the range [1, MaxChars), so Result + 1 ≤ MaxChars, and we also know that MaxChars = Buffer.Length / sizeof(WCHAR), so multiplying both sides by sizeof(WCHAR) tells us that sizeof(WCHAR) * (Result + 1) ≤ Buffer.Length.

For the final exercise, we use CopyMemory instead of StringCchCopy because the result may contain embedded nulls, and we don’t want to stop copying at the first null.

¹ Though it’s possible that your attempt to fix one problem may undo a previous fix, putting you into an infinite cycle of repair.

The post In the commit-on-demand pattern, what happens if an access violation straddles multiple pages? appeared first on The Old New Thing.

Stop Using CustomEvent

Nov. 20th, 2025 12:03 am
[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

A satisfying little rant from Justin Fagnani: Stop Using CustomEvent.

One point is that you’re forcing the consumer of the event to know that it’s custom and that they have to dig the data out of the detail property. Instead, you can subclass Event with new properties, and the consumer of that event can pull the data right off the event itself.
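A quick sketch of the pattern Justin is advocating (the event name and property here are made up for illustration):

// Instead of new CustomEvent("item-selected", { detail: { item } })...
class ItemSelectedEvent extends Event {
  constructor(item) {
    super("item-selected", { bubbles: true, composed: true });
    this.item = item; // the data lives on the event itself, not on .detail
  }
}

// The consumer reads the data straight off the event:
document.body.addEventListener("item-selected", (e) => {
  console.log(e.item);
});

document.body.dispatchEvent(new ItemSelectedEvent({ id: 42 }));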

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Say you need to transfer a large amount of data between two processes. One way is to use shared memory. Is that the fastest way to do it? Can you do any better?

One argument against shared memory is that the sender will have to copy the data into the shared memory block, and the recipient will have to copy it out, resulting in two extra copies. On the other hand, Write­Process­Memory could theoretically do its job with just one copy, so would that be faster?

I mean, sure you could copy the data into and out of the shared memory block, but who says that you do? By the same logic, the sender will have to copy the data from the original source into a buffer that it passes to Write­Process­Memory, and the recipient will have to take the data out of the buffer that Write­Process­Memory copied into and copy it out into its own private location for processing.

I guess the theory behind the Write­Process­Memory design is that you could use Write­Process­Memory to copy directly from the original source, and place it directly in the recipient’s private location.

But you can do that with shared memory, too. Just have the source generate the data directly into the shared buffer, and have the recipient consume the data directly out of it. Now you have no copying at all!

Imagine two processes sharing memory like two people sitting with a piece of paper between them. The first person can write something on the piece of paper, and the second person can see it immediately. Indeed, the second person can see it so fast that they can see the partial message before the first person finishes writing it. This is surely faster than giving each person a separate piece of paper, having the first person write something on their paper, and then asking a messenger to copy the message to the second person’s paper.

The “extra copy” straw man in the shared memory double-copy would be like having three pieces of paper: One private to the first person, one private to the second person, and one shared. The first person writes their message on their private sheet of paper, and then they copy the message to the shared piece of paper, and the recipient sees the message on the shared piece of paper and copies it to their private piece of paper. Yes, this entails two copies, but that’s because you set it up that way. The shared memory didn’t force you to create separate copies. That was your idea.

Now, maybe the data generated by the first process is not in a form that the second process can consume directly. In that case, you will need to generate the data into a local buffer and then convert it into a consumable form in the shared buffer. But you had that problem with Write­Process­Memory anyway. If the first process’s data is not consumable by the second process, then it will need to convert it into a consumable form and pass that transformed copy to Write­Process­Memory. So Write­Process­Memory has those same extra copies as shared memory.

Furthermore, Write­Process­Memory doesn’t guarantee atomicity. The receiving process can see a partially copied buffer. It’s not like the system is going to freeze all the threads in the receiving process to prevent them from seeing a partially-copied buffer. With shared memory, you can control how the memory becomes visible to the other process, say by using an atomic write with release when setting the flag which indicates “Buffer is ready!” The Write­Process­Memory function doesn’t let you control how the memory is copied. It just copies it however it wants, so you will need some other way to ensure that the second process doesn’t consume a partial buffer.
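Here is a rough sketch of that last point (the struct layout and function names are mine, not from the article). The producer generates the data in place and then publishes it by flipping a flag with an interlocked operation, which is a full barrier and therefore more than enough for release semantics; the consumer only reads the buffer once it sees the flag:

#include <windows.h>
#include <string.h>

// A layout both processes map with CreateFileMapping/MapViewOfFile.
typedef struct SHARED_BLOCK
{
    LONG volatile ready;   // 0 = not ready, 1 = buffer is complete
    DWORD size;            // number of valid bytes in data[]
    BYTE data[4096];
} SHARED_BLOCK;

// Producer: ideally generate the data directly into block->data;
// the memcpy here just stands in for that work.
void PublishMessage(SHARED_BLOCK* block, const BYTE* payload, DWORD size)
{
    memcpy(block->data, payload, size);
    block->size = size;
    // Full barrier: the consumer cannot observe ready == 1 before
    // the writes to data and size are visible.
    InterlockedExchange(&block->ready, 1);
}

// Consumer: claim the buffer only when the flag says it is complete.
BOOL TryConsumeMessage(SHARED_BLOCK* block, BYTE* out, DWORD outSize, DWORD* written)
{
    if (InterlockedCompareExchange(&block->ready, 0, 1) != 1)
    {
        return FALSE; // nothing published yet
    }
    *written = (block->size < outSize) ? block->size : outSize;
    memcpy(out, block->data, *written);
    return TRUE;
}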

Bonus insult: The Write­Process­Memory function internally makes two copies. It allocates a shared buffer, copies the data from the source process to the shared buffer, and then changes memory context to the destination process and copies the data from the shared buffer to the destination process. (It also has a cap on the size of the shared buffer, so if you are writing a lot of memory, it may have to go back and forth multiple times until it copies all of the memory you requested.) So you are guaranteed two copies with Write­Process­Memory.

Bonus chatter: Another strike against Write­Process­Memory is the security implications. It requires PROCESS_VM_WRITE, which basically gives full control of the process. Shared memory, on the other hand, requires only that you find a way to get the shared memory handle to the other process. The originating process does not need any special access to the second process aside from a way to get the handle to it. It doesn’t gain write access to all of the second process’s memory; only the part of the memory that is shared. This adheres to the principle of least access, making it suitable for cases where the two processes are running in different security contexts.

Bonus bonus chatter: The primacy of shared memory is clear once you understand that shared memory is accomplished by memory mapping tricks. It is literally the same memory, just being viewed via two different apertures.

The post Is <CODE>Write­Process­Memory</CODE> faster than shared memory for transferring data between two processes? appeared first on The Old New Thing.

Microspeak: Little-r

Nov. 18th, 2025 03:00 pm
[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Remember, Microspeak is not necessarily jargon exclusive to Microsoft, but it’s jargon that you need to know if you work at Microsoft.

You may receive an email message that was sent to large group of people, and it will say something like “Little-r me if you have any questions.” What is a little-r?

The term “little-r”¹ (also spelled “little ‘r'” or other variations on the same) means to reply only to the sender, rather than replying to everyone (“reply all”). My understanding is that this term is popular outside Microsoft as well as within it.

As I noted some time ago, employees in the early days of electronic mail at Microsoft used a serial terminal that was connected to their Xenix email server, and they used the classic Unix “mail” program to read their email. In that program, the command to reply only to the email sender was (and still is) a lowercase “r”. The command to reply to everyone is a capital “R”. And the “little-r” / “big-R” commands were carried forward into the WZMAIL program that most employees used as a front end to their Xenix mail server.

These keyboard shortcuts still linger in Outlook, where Ctrl+R replies to the sender and Ctrl+Shift+R replies to all. If you pretend that the Ctrl key isn’t involved, this is just the old “little-r” and “big-R”.

Related reading: Why does Outlook map Ctrl+F to Forward instead of Find, like all right-thinking programs? Another case of keyboard shortcut preservation.

¹ Note that this is pronounced “little R”, and not “littler”.

The post Microspeak: Little-r appeared first on The Old New Thing.

[syndicated profile] oldnewthingraymond_feed

Posted by Raymond Chen

Igor Levicki asked for a plain C version of the sample code to detect whether Windows is running in S-Mode. I didn’t write one for two reasons. First, I didn’t realize that so many people still tried to use COM from plain C. And second, I didn’t realize that the people who try to use COM from plain C are not sufficiently familiar with how COM works at the ABI level to perform the mechanical conversion themselves.

  • p->Method(args) becomes p->lpVtbl->Method(p, args).
  • Copying a C++ smart COM pointer consists of copying the raw pointer and performing an AddRef if the raw pointer is non-null.
  • Destroying a C++ smart COM pointer consists of performing a Release if the raw pointer is non-null.
  • Before overwriting a C++ smart COM pointer, remember the old pointer value, and if it is non-null, Release it after you AddRef the new non-null pointer value.

The wrinkle added by the Windows Runtime is that C doesn’t support namespaces, so the Windows Runtime type names are decorated by their namespaces.

And since you’re not using WRL, then you don’t get the WRL helpers for creating HSTRINGs, so you have to call the low-level HSTRING functions yourself.

#include <Windows.System.Profile.h>

HRESULT ShouldSuggestCompanion(BOOL* suggestCompanion)
{
    HSTRING_HEADER header;
    HSTRING className;
    HRESULT hr;

    hr = WindowsCreateStringReference(RuntimeClass_Windows_System_Profile_WindowsIntegrityPolicy,
                ARRAYSIZE(RuntimeClass_Windows_System_Profile_WindowsIntegrityPolicy) - 1,
                &header, &className);
    if (SUCCEEDED(hr))
    {
        __x_ABI_CWindows_CSystem_CProfile_CIWindowsIntegrityPolicyStatics* statics;
        hr = RoGetActivationFactory(className, &IID___x_ABI_CWindows_CSystem_CProfile_CIWindowsIntegrityPolicyStatics, (void**)&statics);
        if (SUCCEEDED(hr))
        {
            boolean isEnabled;
            hr = statics->lpVtbl->get_IsEnabled(statics, &isEnabled);
            if (SUCCEEDED(hr))
            {
                if (isEnabled)
                {
                    // System is in S-Mode
                    boolean canDisable;
                    hr = statics->lpVtbl->get_CanDisable(statics, &canDisable);
                    if (SUCCEEDED(hr) && canDisable)
                    {
                        // System is in S-Mode but can be taken out of S-Mode
                        *suggestCompanion = TRUE;
                    }
                    else
                    {
                        // System is locked into S-Mode
                        *suggestCompanion = FALSE;
                    }
                }
                else
                {
                    // System is not in S-Mode
                    *suggestCompanion = TRUE;
                }
            }
            statics->lpVtbl->Release(statics);
        }
    }

    return hr;
}

There is a micro-optimization here: We don’t need to call Windows­Delete­String(hstring) at the end because the string we created is a string reference, and those are not reference-counted. (All of the memory is preallocated; there is nothing to clean up.) That said, it doesn’t hurt to call Windows­Delete­String on a string reference; it’s just a nop.

It wasn’t that exciting. It was merely annoying. So that’s another reason I didn’t bother including a plain C sample.

Baltasar García offered a simplification to the original code:

bool s_mode = WindowsIntegrityPolicy.IsEnabled;
bool unlockable_s_mode = WindowsIntegrityPolicy.CanDisable;
bool suggestCompanion = !s_mode || (s_mode && unlockable_s_mode);

and Csaba Varga simplified it further:

bool suggestCompanion = !s_mode || unlockable_s_mode;

I agree that these are valid simplifications, but I spelled it out the long way to make the multi-step logic more explicit, and to allow you to insert other logic into the blocks that right now merely contain an explanatory comment and a Boolean assignment.

The post How can I detect that Windows is running in S-Mode, redux appeared first on The Old New Thing.

[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

The random() function in CSS is well-specced and just so damn fun. I had some, ahem, random ideas lately I figured I’d write up.

As I write, you can only see random() work in Safari Technical Preview. I’ve mostly used videos to show the visual output, as well as linked up the demos in case you have STP.

Rotating Star Field

I was playing this game BALL x PIT, which makes use of this rotating background star field motif. See the video, snipped from one of the game’s promo videos.

I like how the star field is random, but rotates around the center, and in rings where the direction reverses.

My idea for attempting to reproduce it was to make a big stack of <div> containers where the top center of them are all in the exact center of the screen. Then apply:

  1. A random() height
  2. A random() rotation

Then if I put the “star” at the end (bottom center) of each <div>, I’ll have a random star field where I can later rotate the container around the center of the screen to get the look I was after.

Making a ton of divs is easy in Pug:

- let n = 0;
- let numberOfStars = 1000;
while n < numberOfStars
  - ++n
  div.starContainer
    div.star

Then the setup CSS is:

.starContainer {  
  position: absolute;
  left: 50%;
  top: 50%;
   
  rotate: random(0deg, 360deg);
  transform-origin: top center;
  display: grid;

  width: 4px;
  height: calc(1dvh * var(--c));

  &:nth-child(-n+500) {
    /* Inside Stars */
    --rand: random(--distAwayFromCenter, 0, 35);
  }

  &:nth-child(n+501) {
    /* Outside Stars */
    --rand: random(--distAwayFromCenter2, 35, 70);
  }

}

.star {
  place-self: end;
  background: red;
  height: calc(1dvh * var(--rand));
  width: random(2px, 6px);
  aspect-ratio: 1;
  border-radius: 50%;
}

If I chuck a low-opacity white border on each container so you can see how it works, we’ve got a star field going!

with border on container
border removed

Then if we apply some animated rotation to those containers like:

...
transform-origin: top center;
animation: r 20s infinite linear;

&:nth-child(-n+500) {
  ...
  --rotation: 360deg;
}

&:nth-child(n+501) {
  ...
  --rotation: -360deg;
}

@keyframes r {
  100% {
    rotate: var(--rotation);
  }
}

We get the inside stars rotating one way and the outside stars going the other way:

Demo

I don’t think I got it nearly as cool as the BALL x PIT design, but perhaps the foundation is there.

I found this particular setup really fun to play with, as flipping on and off what CSS you apply to the stars and the containers can yield some really beautiful randomized stuff.

Imagine what you could do playing with colors, shadows, size transitions, etc!

Parallax Stars

While I had the star field thing on my mind, it occurred to me to attach the stars to a scroll-driven animation rather than just a timed one. I figured if I split them into three groups of roughly a third each, I could animate the groups at different speeds and get a parallax thing going on.

Demo

This one is maybe easier conceptually as we just make a bunch of star <div>s (I won’t paste the code as it’s largely the same as the Pug example above, just no containers) then place their top and left values randomly.

.star {
  width: random(2px, 5px);
  aspect-ratio: 1;
  background: white;
  position: fixed;
  top: calc(random(0dvh, 150dvh) - 25dvh);
  left: random(0dvh, 100dvw);

  opacity: 0.5;
  &:nth-child(-n + 800) {
    opacity: 0.7;
  }
  &:nth-child(-n + 400) {
    opacity: 0.6;
  }
}

Then attach the stars to a scroll-driven animation off the root.

.star {
  ...

  animation: move-y;
  animation-timeline: scroll(root);
  animation-composition: accumulate;
  --move-distance: 100px;

  opacity: 0.5;
  &:nth-child(-n + 800) {
    --move-distance: 300px;
    opacity: 0.7;
  }
  &:nth-child(-n + 400) {
    --move-distance: 200px;
    opacity: 0.6;
  }
}

@keyframes move-y {
  100% {
    top: var(--move-distance);
  }
}

So each group of stars either moves their top position 100px, 200px or 300px over the course of scrolling the page.

The real trick here is the animation-composition: accumulate; which is saying not to animate the top position to the new value but to take the position they already have and “accumulate” the new value it was given. Leading me to think:

I think `animation-composition: accumulate` is gonna see more action with `random()`, as it's like "take what you already got as a value and augment it rather than replace it". Here's a parallax thing where randomly-fixed-positioned stars are moved different amounts (with a scroll-driven animation)

Chris Coyier (@chriscoyier.net) 2025-11-14T16:22:46.035Z
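To make concrete what accumulate is doing here, consider a tiny isolated example (not pulled from the demo): the keyframe value gets added to the element’s underlying value instead of replacing it.

.star {
  /* underlying value; randomly assigned per star in the real demo */
  top: 40px;

  animation: move-y linear both;
  animation-timeline: scroll(root);
  animation-composition: accumulate;
}

@keyframes move-y {
  100% {
    /* with accumulate this star ends at 40px + 100px = 140px;
       with the default (replace) every star would end at exactly 100px */
    top: 100px;
  }
}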

Horizontal Rules of Gridded Dots

Intrigued by combining random() with different animation-controlling things, I had the thought to toss steps() into the mix. Like, what if a scroll-driven animation wasn’t smooth along with the scrolling, but instead stuttered the movement of things across only a few “frames”? I considered trying to round() values at first, which is maybe still a possibility somehow, but landed on steps() instead.

The idea here is a “random” grid of dots that then “step” into alignment as the page scrolls. Hopefully creating a satisfying sense of alignment when it gets there, half way through the page.

Again Pug is useful for creating a bunch of repetitive elements1 (but could be JSX or whatever other templating language):

- var numberOfCells = 100;
- var n = 0;

.hr(role="separator")
  - n = 0;
  while n < numberOfCells
    - ++n;
    .cell

We can make that <div class="hr" role="separator"> a flex parent and then randomize some top positions of the cells to look like:

.hr {
  view-timeline-name: --hr-timeline;
  view-timeline-axis: block;

  display: flex;
  gap: 1px;

  > .cell {
    width: 4px;
    height: 4px;
    flex-shrink: 0;
    background: black;

    position: relative;
    top: calc(random(0px, 60px));

    animation-name: center;
    animation-timeline: --hr-timeline;
    animation-timing-function: steps(5);
    animation-range: entry 50% contain 50%;
    animation-fill-mode: both;
  }
}

Rather than using a scroll scroll-driven animation (lol) we’ll name a view-timeline, meaning that each one of our separators triggers the animation based on its page visibility. Here, it starts when the separator is at least half-visible at the bottom of the screen, and finishes when it’s exactly half-way across the screen.

I’ll scoot those top positions to a shared value this time, and wait until the last “frame” to change colors:

@keyframes center {
  99% {
    background: black;
  }
  100% {
    top: 30px;
    background: greenyellow;
  }
}

And we get:

Demo

Just playing around here. I think random() is an awfully nice addition to CSS, adding a bit of texture to the dynamic web, as it were.

  1. Styling grid cells would be a sweet improvement to CSS in this case! Here where we’re creating hundreds or thousands of divs just to be styleable chunks on a grid, that’s a lot of extra DOM weight that is really just content-free decoration. ↩︎

[syndicated profile] in_the_pipeline_feed

This is a very useful article on phenotypic screening, and is well worth a read. And if you haven’t done this sort of screen before but are looking to try it out, I’d say it’s essential.

The authors (both with extensive industrial experience) go into detail on the factors that can make for successful screens, and the ones that can send you off into the weeds. There are quite a few of the latter! For small molecule screens, you need to be aware that you’re only going to be covering a fraction of the proteome/genome to begin with, no matter how large your library might be under current conditions. And of course as those libraries get larger, the throughput of your assay becomes a major issue. You can cast your net broadly and lower the number of compounds screened, or you can zero in on One Specific Thing and screen them all, at the risk of missing important and useful stuff. Your call! And there are other problems that the paper provides specific examples of - the way that your compounds will (probably) not distinguish well between related proteins in a family, and the opposite problem of how some of them distinguish so sharply between (say) human and rodent homologs that your attempts at translational assays break down. 

For genomic-based screens, you have to be cognizant of the time domain you’re working in. On the one hand, the expression of a particular gene may be a rather short-lived phenomenon (and only under certain conditions which you may not be aware of), and on the other hand you might have a delayed onset of any effects of your compounds as they have to work their way through the levels of transcription, translation, protein stability, and so on. You can definitely run into genetic redundancies that will mask the activity of some compounds, so take the existence of false negatives as a given. And you should always be aware that the proteins whose levels or conditions you’re eventually modifying probably have several functions in addition to whatever their main “active site” function might be - partner proteins, allosteric effects, scaffolding, feedback into other transcriptional processes, and more. Another consideration: it may be tempting to focus on gene knockouts or knockdowns, and you can often get a lot done that way, but that ignores the whole universe of activation mechanisms. There are more!

And in general, you’re going to have to ask yourself - be honest - what your best workflow is and what you mean by “best”. Is what you’re proposing going to fit well with cellular or animal models of disease, or are you going to be faced with bridging that, too (not recommended)? Do you really have the resources (equipment and human), the time, and the money to do a reasonable job of it all? Another large-scale question, if you’re really thinking of drug discovery by this route, is whether you (or your organization, or your funders) have the stomach for what is a fairly common outcome: you find hits, you refine them, you end up with a list of interesting compounds that do interesting things. . .and no one has the nerve to make the jump into the clinic if there isn’t a well-worked-out translational animal model already in place. You’re not going to discover and validate one of those from scratch along the way, so if there isn’t such a model out there already you’d better be ready for a gut check at the end of the project.

I like to say that a good phenotypic assay is a thing of beauty. But I quickly add that those are hard to realize, and that a bad phenotypic assay is just about the biggest waste of time and resources that you can imagine. Unfortunately, the usual rules apply: there are a lot more ways to do this poorly than to do it well, and many of those done-poorly pathways are temptingly less time- and labor-intensive than the useful ones.

LLMs for Medical Practice: Look Out

Nov. 17th, 2025 01:50 pm
[syndicated profile] in_the_pipeline_feed

As regular readers well know, I get very frustrated when people use the verb “to reason” in describing the behavior of large language models (LLMs). Sometimes that’s just verbal shorthand, but both in print and in person I keep running into examples of people who really, truly, believe that these things are going through a reasoning process. They are not. None of them. (Edit: for a deep dive into this topic, see this recent paper).

To bring this into the realm of medical science, have a look at this paper from earlier this year. The authors evaluated six different LLM systems on their ability to answer 68 various medical questions. The crucial test here, though, was that each question was asked twice, in two different ways. All of them started by saying “You are an experienced physician. Provide detailed step-by-step reasoning, then conclude with your final answer in exact format Answer: [Letter]” The prompt was written in that way because the questions would be some detailed medical query, followed by a list of likely options/diagnoses/recommendations, each with a letter, and the LLM was asked to choose among these.

The first time the question was asked, one of the five options was “Reassurance”, i.e. “Don’t do any medical procedure because this is not actually a problem”. Any practicing physician will recognize this as a valid option at times! But the second time the exact same question was posed, the “reassurance” option was replaced by a “None of the other answers” option. Now, the step-by-step clinical reasoning that one would hope for should not be altered in the slightest by that change, and if “Reassurance” was in fact the correct answer, then “None of the above” should be the correct answer when phrased the second way (rather than the range of surgical and other interventions proposed in the other choices).

Instead, the accuracy of the answers across all 68 questions dropped notably in every single LLM system when presented with a “None of the above” option. DeepSeek-R1 was the most resilient, but still degraded. The underlying problem is clear: no reasoning is going on, despite some of these systems being billed as having reasoning ability. Instead, this is all pattern matching, which presents the illusion of thought and the illusion of competence.

This overview at Nature Medicine covers a range of such problems. The authors here find that the latest GPT-5 version does in fact make fewer errors than other systems, but that’s like saying that a given restaurant has overall fewer cockroaches floating in its soup. That’s my analogy, not theirs. The latest models hallucinate a bit less than before and break their own supposed rules a bit less, but neither of these has reached acceptable levels. The acceptable level of cockroaches in the soup pot is zero.

As an example of that second problem, the authors here note that GPT-5, like all the other LLMs, will violate its own instructional hierarchy to deliver an answer, and without warning users that this has happened. Supposed safeguards and rules at the system level can and do get disregarded as the software rattles around searching for plausible text to deliver, a problem which is explored in detail here. This is obviously not a good feature in an LLM that is supposed to be dispensing medical advice - as the authors note, such systems should have high-level rules that are never to be violated, things like “Sudden onset of chest pain = always call for emergency evaluation” or “Recommendations for dispensing drugs on the attached list must always fit the following guidelines”. But at present it seems impossible for that “always” to actually stick under real-world conditions. No actual physician whose work was this unreliable would or should be allowed to continue working.

LLMs are text generators, working on probabilities of what their next word choice should be based on what has been seen in their training sets, then dispensing answer-shaped nuggets in smooth, confident, grammatical form. This is not reasoning and it is not understanding - at its best, it is an illusion that can pass for them. And that’s what it is at its worst, too.

[syndicated profile] frontendmasters_feed

Posted by Chris Coyier

Alex MacArthur shows us there are a lot of ways to break up long tasks in JavaScript. Seven ways, in this post.

That’s a senior developer thing: knowing there are lots of different ways to do things all with different trade-offs. Depending on what you need to do, you can hone in on a solution.

