Emergence

What happens when all the parts start working together to become something new? Issue #002 of Galaxy Q.

Hey!

This is Galaxy Q #002.

This issue is about emergence: the idea that some properties of the universe only appear at the level of the whole, not the parts. Simple components following simple rules — with no central control — can produce behavior that is complex, surprising, and impossible to predict in advance, from ant colonies and brains to markets, cities, and consciousness. Emergence is the moment when the whole becomes more than the sum of its parts.

In the A block, Justin McLachlan explains how Ian Malcolm was actually right about everything (and how it’s less obvious than it seems), and how his mass comm professor really felt about his (obvious) writing. In the B block we look at what can emerge from just a few simple rules — and how it all usually goes wrong.

In the Arcade the editors have picked a few of their favorite emergent things — including the quintessential guide to complexity, and a game where you get to draw a map! Also: ant farms for adults. You’re welcome.

Finally, we wrap it all up with a famous experiment that offers a solution for one of life’s most pressing problems. Should you go out tonight, or not?

So, with that, please — get ready for some chaos ... theory.

Your friends,
the Editors.

Ian Malcolm Was Right

by Justin McLachlan

The first time I ever wrote about Jurassic Park was in college. I, a first-year mass comm major, argued that the film contains an ironic layer of meaning: it satirizes tightly controlled corporate branding while functioning as a commercial blockbuster built on the same signs and symbols. The logos on the custom-painted jeeps and the gift shop merchandise are the same emblems printed on t-shirts, posters, and the lunchboxes sold in stores. By the end, the banner that falls in the rotunda during the T. rex attack literalizes brand failure: the logo remains intact while the system it advertises disintegrates. It’s a blockbuster movie satirizing the commercialization machine that supports the blockbuster as an economic phenomenon.

Yes, I was a budding semiotician. My professor, who always gave notes and a chance at revision before a final pass-or-fail grade, did not appreciate semiotics because she only valued gender studies, her area of expertise. She suggested perhaps I was tired when I first wrote the paper, given the “level of obviousness” in my premise. Ian Malcolm, she pointed out, states it outright: “…before you even KNEW what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and NOW you’re selling it, you wanna sell it.”

(She also wondered, years later, if I’d perhaps been on a plane or something when I wrote my senior seminar thesis, a semiotic structural analysis of M. Night Shyamalan’s first three movies, given the paper’s “turbulent style.” She also made a point of telling me she didn’t remember me from the first-year seminar. She was also right that I had written the paper on a plane.)

In my revision, I argued that the obviousness was the point. Ian Malcolm’s direct, clear warnings were best understood narratively, in contrast to how they were directly and clearly ignored by the powers that be. The hubris of John Hammond, park owner, was so powerful that it could override even the plainly obvious. Laura Dern’s Dr. Ellie Sattler put it succinctly, after several of the park’s inhabitants tried to eat her, her boyfriend, and Hammond’s grandchildren: “You never had control! That’s the illusion.”

A few decades later, if I had the paper to write all over again, I’d take a different tack, one that acknowledges the point my professor was trying to get through to me all along. The superficial signs of commercialization were symptoms of much larger systemic pathologies at play, and those were the signs the film really wanted to communicate. And oh, also that Ian Malcolm was right. About everything.

The lunchboxes were just the surface; the deeper pathology is Hammond’s belief that a complex system with emergent behavior can be made predictable through design, oversight, and will—and that kind of confidence is exactly what turns a contained risk into a cascade and, finally, a catastrophe. Complexity, and our inability to understand it, is why we face intractable, global problems that put our very survival at risk.

As a self-styled “chaotician,” Malcolm points again and again to a specific problem: even deterministic systems can be effectively unpredictable, and once you scale that unpredictability into a complex, tightly coupled environment, like a park filled with long-extinct dinosaurs or a planet with 7 billion individuals, the surprises don’t stay small—they compound.

What Malcolm keeps naming is the gap between engineering and reality. You can build the parts, you can create a plan, you can rehearse a pitch—but when the components of a system start interacting with one another, it becomes something else, and that “something else” is where emergence happens: a new property that can’t be predicted by studying the individual parts of the system. That gap is also where disaster can live.

> MALCOLM: You see? The tyrannosaur doesn’t obey set patterns or park schedules. The essence of Chaos.
> ELLIE: I’m still not clear on Chaos.
> MALCOLM: It simply deals with unpredictability in complex systems. The shorthand is the Butterfly Effect. A butterfly can flap its wings in Peking and in Central Park you get rain instead of sunshine.
> Ellie gestures with her hand to show that this has gone right over her head.
> MALCOLM: Are you saying I’m going too fast? I go too fast, I did a fly-by.
> Looking out of the opposite window, Grant spots something. He sits upright, trying to get a better look.
> MALCOLM: Give me that glass of water. We are going to conduct an experiment. It should be still, the car is bouncing up and down, but that’s ok, it’s just an example.
> He dips his hand into the glass of water and takes Ellie’s hand in his own.
> MALCOLM (cont’d): Now, put your hand flat like a hieroglyphic. Now, let’s say a drop of water falls on your hand. Which way is the drop going to roll off.
> He flicks his fingers and a drop falls on the back of Ellie’s hand.
> MALCOLM (cont’d): Off which finger or the thumb, what would you say?
> ELLIE: Thumb, I’d say.
> MALCOLM: Aha, ok. Now freeze your hand, freeze your hand, don’t move. I’m going to do the same thing, start with the same place again. Which way is it going to roll off?
> ELLIE: Let’s say, back the same way.
> MALCOLM: It changed. Why? Because tiny variations, the orientation of hairs on your hand- -
> ELLIE: Alan, listen to this.
> MALCOLM: - - the amount of blood distending your vessels, imperfections in the skin - -
> ELLIE: Imperfections in the skin?
> MALCOLM: Microscopic microscopic - - and never repeat, and vastly affect the outcome. That’s what?
> ELLIE: Unpredictability....
> Grant throws the door open and bolts out of the moving car.
> MALCOLM (cont’d): Look at this, see, see?! I’m right again! No one could predict that Dr. Grant would suddenly jump out of a moving vehicle.
> ELLIE: Alan?
> She jumps out too and follows him into the field.
> MALCOLM: There’s another example! See, here I am now, by myself, talking to myself. That’s Chaos Theory!

For decades, centuries even, we’ve made sense of our universe through the kind of reductionist approach Ian’s arguing against. It’s been spectacularly successful — our knowledge of chemistry and physics has irrevocably altered the world. It’s catapulted our technology to the moon and to Mars, to the depths of the ocean and into the smallest parts of our very cells. We’ve taken matter apart down to its smallest constituents and looked out to the edges of our observable universe. But the reductionist approach starts to falter when we try to reassemble all the pieces.

Ironically, biologists were some of the first to notice. It makes sense given that biology is where the reductionist sciences start to cohere into genuinely complex systems. Cells become tissues, tissues become organs, organs become bodies, and bodies become ecologies. What look like simple rules can, in aggregate, produce unwieldy complexity we still don’t fully grasp.

One of the best illustrations is a classic wolves-and-sheep thought experiment formalized a century ago by Alfred Lotka and Vito Volterra. Imagine a meadow with sheep that reproduce quickly when left alone, and wolves that slowly starve when there’s nothing to hunt. Put them together, though, and a pattern emerges: when sheep are plentiful, wolves thrive; but as wolf numbers rise, sheep are eaten down; with fewer sheep, wolves begin to die off; and with fewer wolves, sheep rebound — starting the cycle again. The striking point is that you don’t need complicated motives or a mastermind to get dramatic, repeating booms and busts, oscillations that coalesce around Ian’s “strange attractors.” You just need interaction, feedback, and time. Once you have those, the system’s behavior becomes bigger than any one part of it.
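
If you want to watch the boom and bust for yourself, here’s a minimal sketch of the Lotka-Volterra dynamics in Python. The parameter values are our own illustrative picks, not anything from the original papers:

```python
# Minimal Euler-stepped Lotka-Volterra sketch. Parameters are illustrative.
birth, predation, efficiency, starvation = 1.0, 0.1, 0.02, 0.5
sheep, wolves, dt = 40.0, 9.0, 0.01

for step in range(6000):
    # Sheep multiply on their own; wolves eat them on contact.
    d_sheep = (birth * sheep - predation * sheep * wolves) * dt
    # Eating lets wolves reproduce; without prey they starve.
    d_wolves = (efficiency * sheep * wolves - starvation * wolves) * dt
    sheep, wolves = sheep + d_sheep, wolves + d_wolves
    if step % 600 == 0:
        print(f"t={step * dt:4.0f}  sheep={sheep:6.1f}  wolves={wolves:5.1f}")
```

Run it and the two populations chase each other in repeating waves, even though nothing in the rules says “oscillate.”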

Traffic is a real-world example. Each driver on the road follows a handful of simple rules: keep a safe distance, match speed, tap the brakes when the car ahead slows, and so on. These decisions are rational (also lawful and required). But on a crowded freeway, they produce stop-and-go waves that appear without any discernible cause. A single small brake tap gets amplified as it travels backward through the line of cars, forcing complete stops miles away, over and over again. Instead of a high-speed thoroughfare, we get a parking lot. Nothing causes the traffic jam in the usual, local sense. Instead, it emerges from feedback loops, where causes become effects and effects become causes.
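
You can conjure a phantom jam in a few lines of code. Here’s a minimal sketch loosely based on the classic Nagel-Schreckenberg traffic model (our choice of illustration; nothing above names a specific model). Cars on a circular road speed up toward a limit, never tailgate, and occasionally tap the brakes at random:

```python
import random

ROAD, CARS, VMAX, DAWDLE = 100, 35, 5, 0.3  # illustrative values

positions = sorted(random.sample(range(ROAD), CARS))
speeds = [0] * CARS

for step in range(30):
    for i in range(CARS):
        # Gap to the car ahead (the road wraps around).
        gap = (positions[(i + 1) % CARS] - positions[i] - 1) % ROAD
        speeds[i] = min(speeds[i] + 1, VMAX, gap)        # accelerate, keep distance
        if speeds[i] > 0 and random.random() < DAWDLE:   # the random brake tap
            speeds[i] -= 1
    positions = [(p + v) % ROAD for p, v in zip(positions, speeds)]
    occupied = set(positions)
    print("".join("#" if cell in occupied else "." for cell in range(ROAD)))
```

Clusters of # form, drift backward against the direction of travel, and dissolve: a jam that no single driver caused.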

Ant colonies are another example. For as long as we’ve studied ants, their behavior has mystified us — until we started looking at them through the lenses of complexity and emergence. Just like a driver in a car, each ant in a colony follows simple rules: wander, lay a chemical trail when you find food, follow stronger trails more often than weaker ones, reinforce the trail if the food is still there, and abandon it if it fades. Out of that, the colony reliably produces large-scale order. There’s no central controller and no master map. Coordination emerges from feedback loops (chemical trail reinforcement), thresholds (a trail becomes “real” only after enough ants commit), and continual adaptation to changing conditions. Once again, we see simple behaviors and interactions producing real, useful, and hard-to-predict system-level intelligence.
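
To see that feedback loop in miniature, here’s a toy two-path model in Python. The paths, deposit rates, and evaporation rate are our own assumptions, not field data:

```python
import random

trail = {"short": 1.0, "long": 1.0}     # pheromone strength per path
deposit = {"short": 0.5, "long": 0.33}  # shorter round trips pay off sooner

for _ in range(200):
    for _ in range(10):                 # ten ants choose a path each tick
        total = trail["short"] + trail["long"]
        path = "short" if random.random() < trail["short"] / total else "long"
        trail[path] += deposit[path]    # reinforce the chosen trail
    for p in trail:
        trail[p] *= 0.95                # evaporation: unused trails fade

print(trail)  # the short path dominates, with no one in charge
```

The short path gets reinforced slightly faster, attracts more ants, and so gets reinforced faster still; past a threshold, the colony has “decided” without a controller.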

Jurassic Park is the same story, except the meadow is a theme park and the animals are patented assets. Hammond and his team treat the park like a machine. They identify variables, set constraints, write procedures, add redundancy, and then assume the whole will behave as predictably as the parts they’re trying to control. Malcolm’s warning is that they’ve built a complex and adaptive system—biology, weather, software, human incentives, understaffed labor, security protocols, and executive pressure—where feedback loops and edge cases are the operating environment. The park collapses because people keep pulling levers that make sense to them from their limited view, while the system responds globally. In Malcolm’s own words: “Life finds a way.”

And, as in Jurassic Park, failing to grasp complexity and what can emerge from it makes our real-world problems worse. We reach for the exact wrong lever because we anticipate local — or even just linear — effects, without stopping to consider how those effects can themselves become causes that loop back into the system.

Take the often-told “cobra bounty” story from British India — its documentation is murky, but the mechanism it describes is real enough. Colonial officials, trying to reduce snake deaths, offered a cash reward for every dead cobra turned in. It seemed like a good solution: deputizing farmers to kill snakes would surely decrease the number of snakes and, logically, the number of snake deaths. Instead, farmers, acting rationally under a new rule with a strong financial incentive, began breeding cobras to kill them and collect a reliable, government-sponsored income. A lever pulled to shrink the problem instead changed the system’s incentives, and the system adapted — producing more of the very thing it was supposed to eliminate.

A similar story with better provenance is the Great Hanoi Rat Massacre of 1902. French colonizers paid for rat tails, and the citizens of Hanoi responded by cutting off tails and releasing rats to breed, so they would have more rats, more tails to cut off, and more payments to collect. Like the British and the cobras, a logical policy meant to reduce a population instead created a market for its continued survival, because the incentive targeted an easy-to-measure token of success rather than the system producing the harm.

Today, we keep repeating the same mistake at larger scales. When it comes to existential threats, a park full of dinosaurs notwithstanding, the stakes are a lot higher than rats and cobras. The pattern, however, is the same: we apply tidy, local incentives to sprawling, adaptive systems—climate, housing, labor, wealth—and then watch the feedback loops turn our good intentions into accelerants. Our modern world keeps producing technologies and institutions built with enormous confidence, released into environments too complex to predict, governed by incentives that reward speed and easy metrics over understanding.

Malcolm’s real warning — the real lesson of Jurassic Park — is, again, not about dinosaurs but about hubris. John Hammond ignored the plain, obvious warning signs because he thought all he needed to do was crack the genetic code and build some electric fences around what he created. He thought he just needed to control the variables. Today, nearly all of us — our institutions, our leaders, our employers, our very selves — operate under the same belief: that because we can assemble or even invent the components, we can foresee the whole. Reductionist science has been enormously successful, so it’s no wonder our scientific communities are organized around a method designed to strip anything and everything down to its components. But studying the parts in isolation can only teach us about the parts. Understanding what happens when those parts interact demands new approaches — a challenge we’ve never really risen to, at great cost.

In Jurassic Park, Hammond denigrated Malcolm as a “rock star” next to the true scientists he’d brought into the mix. Hammond thought listening was the same as understanding. He heard the warning and treated it as something to manage, to contain, to insure against. That’s the temptation of every dinosaur park: to confuse technical mastery over parts with moral and political mastery over consequences. In the end, his own experts, understatedly, “decline” to endorse his park after barely escaping with their lives.

“So have I,” Hammond says, finally.

There is beauty in unexpected complexity, and I think that’s what animated Malcolm’s passion for chaos, for the idea that one simple change can propagate through a system like an earthquake. He warned us of the danger of building systems whose consequences we only discover once they’re out of our hands and already alive. We never had control; that was just the illusion. The question then is why we keep building dinosaur parks anyway.

The question now is what catastrophes will wake us up to the same conclusion, and whether there will be enough of us left to change course when they do. Twenty years ago, I thought I was writing about lunchboxes. My professor thought I was missing the point because Malcolm says the quiet part out loud. She wasn’t wrong. The warning in Jurassic Park isn’t subtle, and that’s what makes it horrifying.

Complex systems are often stable right up until the moment they aren’t, and then they fail fast. Our refusal to understand that doesn’t make it untrue; it just makes failure more likely. We have better instruments to study complexity now than we did even a few decades ago — better models, more data, more visibility into the feedback loops we’re creating — but instruments don’t matter if we keep treating complexity as safe to ignore. If we do, we’re still in a dinosaur park, and the fences are down. The alarms might sound, but an alarm is just noise if no one listens.

Ian Malcolm was right — about everything. Hopefully, that’s a lesson we learn faster than John Hammond, because no helicopters are coming to save us.

Simple Rules, Endless Worlds

The complex systems that make up the world don’t always collapse when dysfunction takes hold — many rumble on in a pathological state, worsening over time. Complexity scientists have identified a number of recurring failure modes that drive these outcomes. Sometimes they stem from our own choices; sometimes they’re embedded in the structure that’s taken hold. Here’s a look at some of the most common.

* Red Queen effect. People keep working harder just to avoid falling behind, with no lasting "win." This is the classic arms race: for example, if I have one nuclear weapon, you want two, so I want three, and so on. Hustle culture is similar. People keep taking more courses, working longer, and networking just to maintain their place.
* Vicious cycle. A problem creates conditions that worsen it; the problem feeds on itself. Poverty is an example—it limits access to education, which in turn deepens poverty. Anxiety can cause avoidance, which increases fear, which feeds the anxiety. Even a small miscommunication in a relationship can snowball into ever greater communication breakdowns.
* Fixes that backfire. A quick fix relieves symptoms now but worsens problems later. See above: cobras and rat tails. Widening highways to ease traffic often backfires when increased lanes spur more demand, causing more congestion.
* Shifting the burden. Short-term patches replace real fixes, and the system gets worse at solving the root issue. Borrowing money on high-interest credit cards is one example of shifting the burden, but so is using caffeine to boost wakefulness instead of getting more sleep.
* Goodhart’s law. When a number becomes the goal, people optimize the number instead of what it was meant to represent. Take click-through rates on online content. Optimizing for clicks leads to clickbait and lower content quality.
* Rich-get-richer. Small advantages compound over time, creating widening gaps. If your money goes to survival, you have little to invest. But even starting early with a small amount can produce an outsized effect: just $50 a month at 6 percent interest can grow to roughly $100,000 by age 60 if you start at 20, while waiting until 30 nets you only about $50,000 (see the sketch after this list).
* Coordination failure (collective action gridlock). Everyone would benefit from aligning, but mistrust, friction, or misaligned incentives keep people stuck. For example, nobody wants to be the first to reduce workloads, stop replying after hours, or pay more for the shared upkeep unless they believe others will too.
* Lock-in, or path dependence. An early choice becomes hard to change, even if better options appear. We’re all familiar, for example, with the QWERTY keyboard — even though faster, more ergonomic layouts exist. Early investments in fossil fuel infrastructure, such as pipelines and gas stations, make it harder to adopt renewables, even when they’re less costly and more efficient.
* Tragedy of the commons. Shared resources get worn down because each person’s small overuse adds up. This one is so common, we could write a whole piece on it: traffic jams, worsening air pollution, disappearing forests, rising antibiotic resistance, collapsing fisheries — each example shows individuals, acting rationally for themselves, rapidly depleting our collective resources.
* Boom–bust cycle. Overreaction and delays create repeating swings between “too much” and “too little.” Oscillations are common emergent behaviors of complex systems — like when a company can’t keep its products in stock. The Great Recession of 2008–09 is a stark example: a housing boom propelled by cheap credit and subprime mortgages led to a financial crisis.
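
Here’s a quick check of the rich-get-richer arithmetic above, assuming $50 deposited at the end of every month, 6 percent annual interest compounded monthly, and retirement at 60 (the retirement age is our assumption):

```python
def future_value(monthly=50.0, annual_rate=0.06, start_age=20, end_age=60):
    """Balance after depositing `monthly` at the end of every month."""
    balance = 0.0
    for _ in range((end_age - start_age) * 12):
        balance = balance * (1 + annual_rate / 12) + monthly
    return balance

print(f"Start at 20: ${future_value(start_age=20):,.0f}")  # roughly $100,000
print(f"Start at 30: ${future_value(start_age=30):,.0f}")  # roughly $50,000
```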

SPONSORED

The EOS10: Our Lost Time Script is online and, best of all, the standard edition is free to download! Get your copy today.

The Galaxy Q Arcade

Some of our favorite things!

Complexity: A Guided Tour

A book by Melanie Mitchell. It covers the territory between order and randomness, where simple interacting parts can produce adaptive behavior without any central control. Mitchell walks us through the core ideas and tools of complexity science: chaos and the limits of prediction, self-organization and emergence, models like cellular automata, information and computation as ways to measure structure, and finally applications to evolution, brains, immune systems, markets, and networks. A quintessential guide.

The Quiet Year

The Quiet Year is “part roleplaying game, part cartographic poetry,” according to its designer, Avery Alder. 52 playing cards, one for every week of your quiet year after the collapse of civilization, trigger problems and changes in luck as you attempt to rebuild “something good” before the Frost Shepherds arrive. The game — which involves drawing your own map — is for two to four players, ages 12 and up, and takes two to four hours to play. The box set comes in a nifty little craft box with everything you need, but you can also get started right away with a downloadable version.

Ant Shacks

The toy ant farms marketed to kids aren’t exactly the most natural and realistic environments for observing an ant colony’s true behavior. For that, you need what the scientists use: a soil formicarium. These are naturalistic ant nests that use a special sand mix to mimic an ant colony’s underground environment, giving the ants a place to dig and build tunnels and allowing a truly emergent colony to form. If this is your first time raising ants, we recommend a starter kit from Ant Shack. They’ve got everything you need to get started as a new ant keeper.

The El Farol Bar Problem

The El Farol Bar problem (inspired by a real bar in Santa Fe) is a classic thought experiment in complexity science by economist W. Brian Arthur: 100 people — let’s say you’re one of them — decide each week whether to go to a bar that’s only enjoyable if fewer than 60 people show up.

Every Thursday night, the bar has live music. You, and everyone else, want to go and enjoy the show if it won’t be too crowded (but also not too empty), and you, and everyone else, want to stay home if it’s gonna be packed.

The outcome, the total number of people at the bar on any given Thursday, depends entirely on what everyone else decides. If you predict the bar won’t be crowded, you go. If you predict it will be, you stay home. Everyone else follows the same two simple rules.

For the experiment, there is no way to know in advance how many people will show up, and your prediction one week doesn’t constrain your prediction the next. The only thing you know for sure, the only information you have access to, is how many people went out to El Farol in the previous weeks.

“At this stage,” writes Arthur, “I invite the reader to pause and ponder how attendance might behave dynamically over time. Will it converge, and if so to what? Will it become chaotic? How might predictions be arrived at?”
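
If pondering isn’t enough, here’s a minimal sketch of the setup in Python you can run and tinker with. The forecasting rule is our own simplification (in Arthur’s paper, each person carries a whole bag of competing predictors and switches among them based on performance):

```python
import random

PEOPLE, CAPACITY, WEEKS, MEMORY = 100, 60, 20, 3
history = [44, 78, 56]  # made-up attendance figures for the first weeks

# Everyone forecasts differently: each person predicts next week's crowd
# as a personal weighted average of the last MEMORY weeks.
weights = [[random.random() for _ in range(MEMORY)] for _ in range(PEOPLE)]

for week in range(WEEKS):
    recent = history[-MEMORY:]
    attendance = 0
    for w in weights:
        forecast = sum(a * b for a, b in zip(recent, w)) / sum(w)
        if forecast < CAPACITY:  # "it won't be crowded," so go out
            attendance += 1
    history.append(attendance)
    print(f"Week {week + 1:2d}: {attendance} people at El Farol")
```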

You can read Arthur’s paper for the answer yourself, or wait for the next edition of Galaxy Q.

Houdini’s Telegram Solution

Last time we told you the story of what happened when Harry Houdini was stuck waiting in a telegraph office and an encoded message came through. Did you crack the code?

“I managed, after some worry, to solve the message,” he writes in Houdini on Magic. The answer:

“'Your ma dying; please return; ask her to forgive. Father.'”

And the reply:

“'Caught express; arrive noon. Your little Alice.'”

Houdini explains: “It’s a very simple cipher, and all there is to it is to alter the alphabet, and instead of writing the letter required, simply write the letter in front of it. For instance, if writing the word ‘yes’, according to your code, you have to write 'XDR.'”
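
In code, Houdini’s scheme is just a one-letter Caesar shift. Here’s a minimal sketch of his description:

```python
import string

ALPHABET = string.ascii_lowercase

def shift(text, offset):
    """Shift each letter by `offset` places, wrapping around the alphabet."""
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + offset) % 26] if ch in ALPHABET else ch
        for ch in text.lower()
    )

print(shift("yes", -1))  # Houdini's cipher: "write the letter in front" gives xdr
print(shift("xdr", +1))  # decoding shifts each letter back the other way: yes
```

The same trick works with bigger shifts, too, which may help with the sign-off below.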

Wpvkn pgzv vkog!