
Let’s build and optimize a Rust extension for Python


If your Python code isn’t fast enough, you have many options for compiled languages to write a faster extension. In this article we’ll focus on Rust, which benefits from:

  • Modern tooling, including a package repository (crates.io) and a built-in build tool and package manager (cargo).
  • Excellent Python integration and tooling. The Rust package (Rust packages are known as “crates”) for Python support is PyO3. For packaging, you can use setuptools-rust to integrate with existing setuptools projects, or Maturin for standalone extensions.
  • Memory- and thread-safety, which make it much less prone to crashes or memory corruption than C and C++.

In particular, we’ll:

  • Implement a small algorithm in Python.
  • Re-implement it as a Rust extension.
  • Optimize the Rust version so it runs faster.

Counting unique values, the approximate way

As a motivating example, we’re going to look at the task of counting how many unique values there are in a list. If we want to get an exact answer, this is very easy to implement:

def count_exact(items):
    return len(set(items))

A set() can only contain an item once, so this de-duplicates all items and then counts them.
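A tiny check of that behavior:

```python
# Five items, three unique values; the set drops the duplicates:
items = ["a", "b", "a", "c", "b"]
print(len(set(items)))  # 3
```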

The problem with this solution is memory usage. If we have 10,000,000 unique values, we’ll create a set with 10,000,000 entries in it, which will use quite a lot of memory.
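We can get a rough sense of that cost with sys.getsizeof(). Note this is only a sketch: sys.getsizeof() on a set measures the set's internal hash table, not the stored objects themselves, so it understates the true total.

```python
import sys

# The set's internal table grows with the number of unique entries;
# the stored strings add yet more memory on top of this:
small = set(str(i) for i in range(1_000))
large = set(str(i) for i in range(1_000_000))
print(sys.getsizeof(small), sys.getsizeof(large))
```

Each stored string adds roughly another 50+ bytes on top of the table, so the real footprint is substantially larger than these numbers.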

If we’re worried about memory usage, we can instead use a probabilistic algorithm that gives an approximate answer. In many situations this is good enough, and it can use a lot less memory. We’ll be using a very simple algorithm by Chakraborty, Vinodchandran, and Meel:

import random
import math

# Implementation of https://arxiv.org/abs/2301.10191
def count_approx_python(items, epsilon=0.5, delta=0.001):
    # Will be used to scale tracked_items upwards:
    p = 1
    # Items we're currently tracking, limited in length:
    tracked_items = set()
    # Max length of tracked_items:
    max_tracked = round(
        (12 / (epsilon ** 2)) *
        math.log2(8 * len(items) / delta)
    )
    for item in items:
        tracked_items.discard(item)
        if random.random() < p:
            tracked_items.add(item)
        if len(tracked_items) == max_tracked:
            # Drop tracked values with 50% probability.
            # Every item in tracked_items now implies the
            # final length is twice as large.
            tracked_items = {
                item for item in tracked_items
                if random.random() < 0.5
            }
            p /= 2
            if len(tracked_items) == 0:
                raise RuntimeError(
                    "we got unlucky, no answer"
                )
    return int(round(len(tracked_items) / p))

Running an example

Let’s look at an example set of words:

WORDS = [str(i) for i in range(100_000)] * 100
random.shuffle(WORDS)

We have 100,000 distinct words, which means count_exact() will create a set of size 100,000. Meanwhile, count_approx_python() will have a set of at most 1739 entries. Occasionally it briefly holds two sets at once, but it will still use only about 3% as much memory as the exact algorithm.
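That 1739 figure comes straight from the max_tracked formula in count_approx_python(), with 10,000,000 total items:

```python
import math

# max_tracked for the default epsilon/delta and 100,000 words * 100:
epsilon, delta, n = 0.5, 0.001, 10_000_000
max_tracked = round(
    (12 / epsilon**2) * math.log2(8 * n / delta)
)
print(max_tracked)  # 1739
```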

We can compare the results:

print("EXACT", count_exact(WORDS))
print("APPROX", count_approx_python(WORDS))

I ran this three times in a row, and got:

EXACT 100000
APPROX 99712

EXACT 100000
APPROX 99072

EXACT 100000
APPROX 100864

The approximate version varies, but it’s pretty close—while using much less memory.
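Those runs are far inside the guarantee the parameters allow: the algorithm is designed so that, with probability at least 1 − delta, the estimate is within a factor of (1 ± epsilon) of the true count. A quick check of the observed relative error:

```python
# Relative error of the three runs above, versus the epsilon=0.5
# worst case the default parameters permit:
exact = 100_000
errors = [abs(approx - exact) / exact
          for approx in (99712, 99072, 100864)]
print(errors)
```

All three errors are under 1%, comfortably below the epsilon=0.5 bound.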

A speed comparison

We can compare the speed of the two implementations:

import time

def timeit(f, *args, **kwargs):
    start = time.time()
    for _ in range(10):
        f(*args, **kwargs)
    print(f.__name__, (time.time() - start) / 10, "seconds")

timeit(count_exact, WORDS)
timeit(count_approx_python, WORDS)

This gives us:

count_exact 0.14 seconds
count_approx_python 0.78 seconds

Our new function uses less memory, but it’s 5× slower. Let’s see if we can speed up count_approx_python() by rewriting it in Rust.

Going faster: Creating our Rust project

Using the Maturin Python packaging tool, we can create a new Rust Python extension very quickly, and using PyO3 we can easily interact with Python objects.

1. Initializing the project with Maturin

You can install Maturin with pip (or pipx), and then use it to initialize a whole new PyO3-based project:

$ maturin new rust_count_approx
✔ 🤷 Which kind of bindings to use?
  📖 Documentation: https://maturin.rs/bindings.html · pyo3
  ✨ Done! New project created rust_count_approx
$ cd rust_count_approx/

This creates all the files we need for a basic Rust-based Python package:

$ tree
.
├── Cargo.lock
├── Cargo.toml
├── pyproject.toml
└── src
    └── lib.rs

We can pip install the package right from the start, no further setup needed:

$ pip install .
...
Successfully installed rust_count_approx-0.1.0

2. Adding dependencies

Rust doesn’t include a built-in random number generation library, so we will add a third-party crate (Rust jargon for an installable package) using Rust’s build/package manager cargo:

$ cargo add rand
    Updating crates.io index
      Adding rand v0.8.5 to dependencies
             Features:
             + alloc
             + getrandom
             + libc
             + rand_chacha
             + std
             + std_rng
             - log
             - min_const_gen
             - nightly
             - packed_simd
             - serde
             - serde1
             - simd_support
             - small_rng

This updates the Cargo.toml file that among other things lists the Rust dependencies required to build our code. The relevant section now looks like this:

[dependencies]
pyo3 = "0.22.0"
rand = "0.8.5"

The pyo3 dependency was added automatically by Maturin when we initialized the project template.

The features listed in the command output are flags that can be enabled at compile time to add more functionality; we’ll be enabling one later on.

3. A first Rust version

Now, we need to update the Rust code in src/lib.rs to implement our function:

use pyo3::exceptions::PyRuntimeError;
use pyo3::prelude::*;
use pyo3::types::{PySequence, PySet};
use rand::random;

#[pyfunction]
#[pyo3(signature = (items, epsilon=0.5, delta=0.001))]
fn count_approx_rust(
    py: Python<'_>,
    items: &Bound<PySequence>,
    epsilon: f64,
    delta: f64,
) -> PyResult<u64> {
    let mut p = 1.0;
    let mut tracked_items = PySet::empty_bound(py)?;
    let max_tracked =
        ((12.0 / epsilon.powi(2)) *
         (8.0 * (items.len()? as f64) / delta).log2()
        ).round() as usize;
    // In future versions of PyO3 iter() will be
    // renamed to try_iter():
    for item in items.iter()? {
        let item = item?;
        tracked_items.discard(item.clone())?;
        if random::<f64>() < p {
            tracked_items.add(item)?;
        }
        if tracked_items.len() == max_tracked {
            let temp_tracked_items =
                PySet::empty_bound(py)?;
            for subitem in tracked_items.iter() {
                if random::<f64>() < 0.5 {
                    temp_tracked_items.add(subitem)?;
                }
            }
            tracked_items = temp_tracked_items;
            p /= 2.0;
            if tracked_items.len() == 0 {
                return Err(PyRuntimeError::new_err(
                    "we got unlucky, no answer"
                ));
            }
        }
    }
    Ok((tracked_items.len() as f64 / p).round() as u64)
}

// Expose the function above via an importable Python
// extension.
#[pymodule]
fn rust_count_approx(
    m: &Bound<'_, PyModule>
) -> PyResult<()> {
    m.add_function(
        wrap_pyfunction!(count_approx_rust, m)?
    )?;
    Ok(())
}

4. Measuring performance

How does our new version compare as far as speed goes?

First, we install our new package:

$ pip install .

Now we can import our new function from Python code and measure its speed:

from rust_count_approx import count_approx_rust

# See above for definition of WORDS and timeit():
timeit(count_approx_rust, WORDS)

Here’s how the new version compares:

Version        Elapsed seconds
Python         0.78
Rust (naive)   0.37

It’s twice as fast. Why isn’t it faster?

This is logically the exact same code as the Python implementation, just implemented in Rust and a little more verbosely. We’re still interacting with Python objects in the same way, iterating over a Python list, and extensively interacting with a Python set. So that part of the code isn’t going to run any differently.

Let’s go faster, part 2: Optimizing the Rust code

Next we’re going to optimize our code in four different ways, all of which I measured separately as improving the performance.

Optimization 1: Link-time optimization

First, we’re going to enable link-time optimization, where the Rust compiler optimizes the code much later in the compilation process, during linking. This means slower compilation, but typically results in faster-running code. We do this by adding the following to Cargo.toml:

[profile.release]
lto = true

Optimizations 2 and 3: Faster random number generation

We also switch from using rand::random(), which uses a thread-local random number generator (RNG), to an RNG we manage ourselves, removing the overhead of thread-local lookups.

At the same time, we switch to a faster RNG than the default: the “small” RNG. It’s not cryptographically secure, but for our purposes that’s probably fine. To do this we add the small_rng feature to the rand crate in Cargo.toml:

[dependencies]
pyo3 = "0.22.0"
rand = {version = "0.8.5", features = ["small_rng"]}

And we’ll need to modify the code, which we’ll see below.

Optimization 4: Store hashes only

Finally, we switch from storing the Python objects in a Python set to storing just the hash of each Python object, as calculated by obj.__hash__(). What happens if two objects hash to the same value? The count will be off by one.

We’re already using a probabilistic algorithm, so we’re already OK with slightly wrong results. Collisions should be rare, and an occasional off-by-one doesn’t matter when we have many unique values; 99712 isn’t particularly more wrong than 99713.
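We can sanity-check that intuition in Python: count distinct hashes instead of distinct values and compare.

```python
# With 64-bit hashes and only 100,000 distinct values, collisions
# are vanishingly rare, so the two counts almost always agree:
values = [str(i) for i in range(100_000)] * 100
exact = len(set(values))
hashed = len({hash(v) for v in values})
print(exact, hashed)
```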

Since we’re no longer storing Python objects, we can switch to using Rust’s std::collections::HashSet, which has some nicer APIs and may be a bit faster. However, the Rust HashSet will want to hash the values, and we don’t want to hash them again, since they’re already hashed.

To avoid double-hashing, we also add the Rust crate nohash_hasher:

$ cargo add nohash_hasher

This will allow us to use a Rust-based HashSet that assumes the values it stores are already hashes and can be used in the HashSet as is, without further hashing. The Cargo.toml dependencies section now looks like this:

[dependencies]
nohash-hasher = "0.2.0"
pyo3 = "0.22.0"
rand = {version = "0.8.5", features = ["small_rng"]}
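As an aside, CPython relies on a similar trick: the hash of a small non-negative int is the int itself, so a Python set of precomputed hashes doesn’t really pay a second meaningful hashing step either. nohash-hasher gives Rust’s HashSet that property explicitly.

```python
# In CPython, hash() is the identity on small non-negative ints:
assert hash(12345) == 12345

# So a set of precomputed hashes is effectively pre-hashed already:
hashes = {hash(s) for s in ("a", "b", "c")}
print(len(hashes))  # 3
```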

We will also need to update our code to use this, as we’ll see next.

Our new code, with all 4 optimizations

Here’s our updated code:

use pyo3::exceptions::PyRuntimeError;
use pyo3::prelude::*;
use pyo3::types::PySequence;
use rand::{rngs::SmallRng, SeedableRng, Rng};
use nohash_hasher::IntSet;

#[pyfunction]
#[pyo3(signature = (items, epsilon=0.5, delta=0.001))]
fn count_approx_rust(
    py: Python<'_>,
    items: &Bound<PySequence>,
    epsilon: f64,
    delta: f64,
) -> PyResult<u64> {
    let mut p = 1.0;
    // Use a set that stores integer values that are assumed
    // to be hashes themselves:
    let mut tracked_items = IntSet::default();
    // Use an RNG we manage ourselves, specifically SmallRng
    // which is faster than the default rng that the rand
    // crate uses:
    let mut rng = SmallRng::from_entropy();
    // Create a closure, similar to a Python lambda:
    let mut random = || rng.gen::<f64>();
    let max_tracked =
        ((12.0 / epsilon.powi(2)) *
         (8.0 * (items.len()? as f64) / delta).log2()
        ).round() as usize;
    for item in items.iter()? {
        // Instead of storing the item, we store its
        // Python-calculated hash (the output of __hash__):
        let hash = item?.hash()?;
        tracked_items.remove(&hash);
        if random() < p {
            tracked_items.insert(hash);
        }
        if tracked_items.len() == max_tracked {
            tracked_items.retain(|_| random() < 0.5);
            p /= 2.0;
            if tracked_items.len() == 0 {
                return Err(PyRuntimeError::new_err(
                    "we got unlucky, no answer"
                ));
            }
        }
    }
    Ok((tracked_items.len() as f64 / p).round() as u64)
}

Again, we can install our updated version:

$ pip install .

Here’s how long our optimized version takes to run in comparison to previous versions:

Version           Elapsed seconds
Python            0.78
Rust (naive)      0.37
Rust (optimized)  0.21

Why this isn’t faster, and additional ideas

Why isn’t the Rust code even faster? Our latest optimized version still interacts with a Python list (items) and uses Python’s __hash__ API to hash every object, which means those two interactions are still limited to the speed of Python’s APIs. If we passed in items as an Arrow column, or a NumPy array of integers, we could probably go much faster.

More broadly, if we wanted to go even faster, we should also consider using a different algorithm. There are many approximate counting algorithms, and I only picked this one because it was simple and easy to understand, not because it was necessarily particularly fast.

The big picture: Why Rust?

Rust allows us to speed up our code by giving us access to a compiled language. But that’s true of C or C++ or Cython too.

Unlike those languages, however, Rust has modern tooling, with a built-in package manager and build system:

  • Adding a new dependency is as easy as cargo add <cratename>. A good site to browse available crates is lib.rs.
  • Less visible, but still important: this code will compile on macOS and Windows with no additional work. In contrast, managing C or C++ dependencies across platforms can be very painful.

Rust also has excellent Python integration, with high-level access to Python APIs and easy-to-use packaging tools like Maturin.

Rust scales both to small projects—as in this article—and to much larger projects, thanks to memory- and thread-safety and to its powerful type system. The Polars project has a generic Rust core of 330K lines, which is then wrapped with more Rust and Python to make a Python extension.

Next time you’re considering speeding up some Python code, give Rust a try. It’s not a simple language, but it’s well worth your time to learn.


Neither Elon Musk Nor Anybody Else Will Ever Colonize Mars | Defector


Mars does not have a magnetosphere. Any discussion of humans ever settling the red planet can stop right there, but of course it never does. Do you have a low-cost plan for, uh, creating a gigantic active dynamo at Mars's dead core? No? Well. It's fine. I'm sure you have some other workable, sustainable plan for shielding live Mars inhabitants from deadly solar and cosmic radiation, forever. No? Huh. Well then let's discuss something else equally realistic, like your plan to build a condo complex in Middle Earth.

OK, so you still want to talk about Mars. Fine. Let's imagine that Mars's lack of a magnetic field somehow is not an issue. Would you like to try to simulate what life on Mars would be like? Step one is to clear out your freezer. Step two is to lock yourself inside of it. (You can bring your phone, if you like!) When you get desperately hungry, your loved ones on the outside may deliver some food to you no sooner than nine months after you ask for it. This nine-month wait will also apply when you start banging on the inside of the freezer, begging to be let out.

Congratulations: You have now simulated—you have now died, horribly, within a day or two, while simulating—what life on Mars might be like, once you solve the problem of it not having even one gasp worth of breathable air, anywhere on the entire planet. We will never live on Mars.


Let's discuss the breathable-air problem. Earth's atmosphere is rich with oxygen due in large part to all of the green plants photosynthesizing here. We got green plants out the ass. Some people have the idea that making Mars's atmosphere breathable is as simple as introducing some green plants to it: They will eat up sunlight and produce oxygen, and then people can breathe it. That is uhhhhh the circle of life (?) or whatever. They call this idea "terraforming."

At this point in our discussion I must acquaint you with two dear friends of mine. Their names are The South Pole, and The Summit Of Mount Everest.

The South Pole is around 2,800 meters above sea level, and like everywhere else on Earth around 44 million miles closer to the sun than any point on Mars. It sits deep down inside the nutritious atmosphere of a planet teeming with native life. Compared to the very most hospitable place on Mars it is an unimaginably fertile Eden. Here is a list of the plant-life that grows there: Nothing. Here is a list of all the animals that reproduce there: None.

Even with all the profound advantages the South Pole enjoys compared to Mars, even on a planet where living things have spent billions and billions of years figuring out how to adapt to and thrive within an incredibly diverse array of biomes—on a planet where giant tubeworms the size of NBA basket stanchions have colonized lightless ocean depths at which a human would be crushed like a grape under a piano—the South Pole simply cannot support complex life. It is too cold, and its relationship with sunlight too erratic, for living things to sustain themselves there. On astronomical scales it is for all practical purposes in the exact same spot as some of the most life-rich and biodiverse places in the known universe, and yet no species has established a permanent self-sustaining population there. Ever.

The summit of Mount Everest is around 8,800 meters above sea level, squarely within those balmy Earth latitudes that get nice long sunlit days all year round. Compared to anyplace on Mars, it is the very womb of God. No plant life grows there. No animals live there.

Even with steady year-round subtropical sunlight, even with conditions infinitely more nurturing than those found anywhere on Mars, the summit of Mount Everest cannot support complex life. It's too cold; the air is too thin; there is no liquid water for plants and animals to drink. Standing on the top of Mount Everest, a person can literally look at places where plants and animals happily grow and live and reproduce, yet no species has established a permanent self-sustaining population on the upper slopes of Everest. Even microbes avoid it.

Life on earth writ large, the grand network of life, is a greater and more dynamic terraforming engine than any person could ever conceive. It has been operating ceaselessly for several billions of years. It has not yet terraformed the South Pole or the summit of Mount Everest. On what type of timeframe were you imagining that the shoebox of lichen you send to Mars was going to transform Frozen Airless Radioactive Desert Hell into a place where people could grow wheat?

People have this idea that life is like some kind of magical force; that the reason Mars does not have life is that life has not yet gone there; that once life goes to a place, then it just figures out how to go on living there. This, I think, is a consequence of more people having gotten their science education from the movie character Ian Malcolm than from actual science classes. More generously, it is a testament to humans having formulated nearly all of their ideas about the nature of life from the absolute easiest (and only known) place to have life.

In any case Malcolm was exactly, precisely wrong when he said "Life ... [Jeff Goldblum stammering] ... finds a way." Sure, yes, when "life" is "bacteria" and the challenge before it is how to propagate inside of my house, yes: In that case, life finds a way. In the bigger picture, no, life does not find a way. It has not found a way, even at the prokaryotic level, anywhere else humans have figured out how to look, except here on Earth.

Finding ourselves on this lush, beautiful, abundant planet is not some testament to the ingenuity and resourcefulness of life. Nor is it a coincidence. This is where life could happen; we are here because this is where we could be. Even here, even where things were as comfortably laid out as our brightest minds could ever imagine, it took billions of years, reproductions beyond counting, before any individual life got advanced enough to think something as silly as "Hey, let's go live on Mars."


Humankind will never establish a permanent human settlement on Mars. Ever. Moreover there is no need to try to come up with some way to build one there.

The doomsday scenarios that science-fiction writers—and their contemptible counterparts, futurists—have imagined would necessitate an escape from Earth can be broken down into two categories. First there are the ones that would not come close to making Earth as hellish and inhospitable as Mars. These include global nuclear wars, food-chain collapses, extermination-level pandemics, and eugenic boogeymen like "overpopulation." None of these present a scenario in which Earth all at once completely ceases having breathable oxygen, for example, or suddenly no longer enjoys a magnetosphere. In the aftermath of even the worst of these scenarios, if you were picking one of the two planets to engineer into habitability, the Earth would remain the infinitely superior option. For planning purposes, the planet to prepare for use as a base of survival in an apocalyptic event is the one where you're reading this blog.

Second are the scenarios that are not even worth considering. These are your planet-destroying asteroid strikes. Let's be optimistic and generous and say that, over the course of 500,000 years of species-wide concerted effort that would more than exhaust the resources of the planet where we already live, Mars could be "terraformed" into a place where a permanent human settlement could eke out a horrible nightmare of a sustainable existence for a while, pointlessly, telling each other sad stories of what it was like to live in the endless biodiversity and beauty of the world Mars's loser inhabitants ruined for the cause of abandoning it. OK great. Truly a beautiful dream you got there. Unfortunately it only makes sense if you can anticipate a planet-destroying asteroid strike 500,001 years ahead of time, but also cannot avert or mitigate it in any other way. Otherwise you are simply rolling the dice that the planet-destroying asteroid strike will not happen at any time in the interim, while you busy yourself rendering the Earth uninhabitable for the sake of leaving it for someplace even worse.

But more importantly: There is no scenario in which humans can try to colonize Mars and also survive on Earth long enough to go live in that colony! I am sorry to be the Bad News Guy here. But there you have it: The effort to colonize Mars will help ensure nobody survives long enough to live in that colony. That makes the idea of trying to build that colony morally reprehensible.

In these latter days everybody is familiar with concepts like the carbon footprint, sustainability, and the like. Measures of the ecological cost of the things we do. One of the most irksome problems bedeviling Earth's biosphere at present is the outrageous cost of many aspects of many human lifestyles. Society is gradually and too late awakening to, for example, the reality that there is an inexcusable, untenable cost to shipping coffee beans all around the world from the relatively narrow belt in which they grow so that everybody can have a hot cup o' joe every morning. Or that the planet is being heated and poisoned by people's expectation of cheap steaks and year-round tomatoes and a new iPhone every year, and that as a consequence its water-cycle and weather systems are unraveling. Smearing the natural world flat and pouring toxic waste across it so that every American can drive a huge car from their too-large air-conditioned freestanding single-family home to every single other place they might choose to go turns out to be incompatible with the needs of basically all the other life we've ever detected in the observable universe. Whoops!

All of what makes, say, the lifestyle of your average McMansion owner in Ashburn, Va. anathema to life, writ large, applies a billionfold to each person in a theoretical Mars colony. Their carbon footprints would be the size of entire nations, by the time they even pressed the first normal human-sized actual footprint into the red planet's sterile frozen regolith. Shipping a pound of coffee from the Bean Belt to Connecticut is nothing at all compared to shipping flour to goddamn Mars.

This is only part of why that other spooky doomsday scenario, the sun's inevitable expansion and consumption of the Earth, is also not worth considering as a reason to plan a Martian relocation. That is not even going to appreciably begin happening for something like four billion more years. That is such an incredibly long time from now, buddy! The human race has only existed for something like 300,000 years. Four billion years is 13 thousand times as long as that. Four billion years ago, the Earth was a largely molten volcanic blob with no life more complex than microorganisms on it. Another 3.75 billion years elapsed before the first dinosaurs showed up. You could fit the entire lifespan of humanity (so far) 216 times over—just into the gap between when the dinosaurs all died out and when humanity first shows up in the fossil record.

You see where I am going with this. Spoiler alert! There will not be any human beings around when Sun Get Large even begins to become a problem. Planning around this issue is like some primordial amoeba trying to score some choice oceanfront Pangaean real estate against the possibility that humans would gentrify it in the 1990s. Even in the most optimistic plausible daydream, in which some descendants of humanity still exist four billion years from now to concern themselves with the ballooning sun, they will not be anything like us; they might even be all fucked-up and gross; they can go to hell. In any case you can unpack the canned goods.


None of what's in the preceding 38 or however many exhausting paragraphs is unknown to Elon Musk, the mega-rich clod and dullard famous for buying things for more than they're worth and then making them worse, who tweeted over the weekend some silly shit about his Martian colony, ah—what even is the word here? Plan? Vision? Intention? Anyway this is a thing that he thinks must and can and will happen. He sees his SpaceX company's work as part of the endeavor to colonize Mars someday.

This doofus's birdbrained space-colony takes are important to know; that alone is a very awful and embarrassing true thing to say about the state of things. Capitalist society permits such profound inequalities of wealth and power, and the U.S. has allowed its public sector to lapse into such abysmal decay, that a guy like Musk exerts a terrible gravity on the world around him: What he is interested in seeing done, some number of other people will work on doing, because that work pays better than nearly all others. Whatever pit he wants to throw his money into, some appalling volume of the world's resources and human labors will follow it down.

Those labors will be, for the people doing it under Musk, basically suicidal. A revealing and chilling phrase in his tweets about this stuff is "the probable lifespan of consciousness"; increasing this is what Musk views as the essential bleak and hideous goal of interplanetary colonization. What percentage of the human race—or any of the non-sentient life forms—need survive to ensure the mere continuity of consciousness?

This ordering of priorities, in which the sacrosanct goal is to extend "the probable lifespan of consciousness" and space colonization the means, is above all else a monstrous permission structure for this outspoken bigot's vile social ideas, a kind of reductio ad absurdum for what's been doing business as "effective altruism" for a while now. The fantasy—and it is a fantasy—isn't one of space travel and exploration and some bright Star Trek future for humanity, but one of winnowing and eugenics, of cold actuarial lifeboat logic, of ever greater reallocation from the dwindling many to the thriving few. That's the world as Elon Musk and his cohort want it; Mars colonization is just a pretext.

In a saner society, a rich guy with Musk's well-known and unapologetically expounded views would sooner find himself under a guillotine than atop a space agency with the power to dragoon the world's resources into his k-hole John Galt cosplay. The certainty that he will never make another planet habitable is no comfort to the rest of us, when in the act of trying he may do the opposite to this one. The doomsday scenario is coming from inside the house. I hope he dies on Mars.


The Great Luncheon Meat Disaster of '24


This is one of the greatest things I have ever read in a press statement:

First and foremost, our investigation has identified the root cause of the contamination as a specific production process that only existed at the Jarratt facility and was used only for liverwurst. With this discovery, we have decided to permanently discontinue liverwurst.

The emphasis is in the original and it sells the paragraph. 

This comms masterpiece is from a statement released yesterday by the food company Boar’s Head, which is dealing with a crisis regarding listeria contamination of liverwurst manufactured at a facility in Virginia.

One general rule in crisis communications is that you should clearly explain the actions you will take to remediate the problem. “We have decided to permanently discontinue liverwurst.” That’s an action! This step was so important that it was the first of three remedial steps covered in that press release. The second was that they are permanently closing the facility and apparently laying off several hundred people. Naturally, that’s the headline the press went with. Still, if you’re going to bury the lede, bury it under liverwurst! 

The third step is an expert committee. Might as well play the hits.

Look, crisis PR is hard, and food crisis PR is really hard because food is about trust and what you put in your body and your children’s bodies. One of the roughest professional weeks of my life was working (in a relatively junior role) for a major food client impacted by the contamination of the Chinese dairy supply chain with melamine. It was some of the highest ambient stress I have ever encountered.

When it’s leafy greens being recalled for contamination people are often surprised because, you know, greens! But when it’s luncheon meat I guess it tracks. There are a lot of meat products in the FDA recall database. Meat processing seems hard. What is the “specific production process that only existed at the Jarratt facility and was used only for liverwurst?” The imagination runs wild!

I’m obsessing about this not because of any deep interest in the PR craft of this situation, but because I love liverwurst.

No, really! When I was in college I would make myself sandwiches of liverwurst and alfalfa sprouts with a generous spread of yellow mustard. Don’t judge it until you’ve tried it! I ate these sandwiches on the beach at Año Nuevo when I was surveying elephant seal pup mortality. To be honest, they don’t really hold up well to being stuffed in a backpack all day. Too much moisture. Better to go with the salami for days in the field. But the mustard helps to cover up the stench of elephant seal.

I don’t eat much liverwurst anymore. This is partly because as I got older I couldn’t make the same dietary choices. There is a point in your life when you can have (totally random example) a Slurpee, or a Charleston Chew bar, or a Slurpee and a Charleston Chew bar. That point is when you are a teenager and you spend a lot of time hanging out in front of the 7-11 in midtown Palo Alto with your buddies, getting jacked on the worst possible snacks to prepare for endless hours of gaming on the Atari 2600. And then there is the rest of your life spent having the salad dressing on the side because adulthood is a gray and joyless desert inhabited by wandering hermits who are concerned about your prostate.

Also, I got fancier as I got older. I slid down the fatty slope from liverwurst (extruded meat product) to Braunschweiger (sounds German!) to rarefied pâtés and terrines best enjoyed on nuggets of artisan bread from craft bakeries staffed by Berkeley Hills Hobbits who grind their own flour with millstones and magic wheat imported from the Shire. It’s the same nutritional content as those sandwiches from my youth, but it feels way healthier because it has “texture” and costs 25 times as much.

Anyway, I haven’t been following the comms on the case closely, and I don’t have strong opinions one way or the other on how Boar’s Head is handling it. But the fact that liverwurst was determined to be at the center forced me to pay attention.

It’s for the best that I’m not involved in this situation. I am good at distancing myself emotionally from the crises I work on, which is important if you do this kind of work. But for liverwurst I might struggle to maintain that distance. Imagine the press conference!

Reporter: “What steps will you take to ensure this never happens again?”

Me [gripping sides of the podium, pale in the TV lights, one tear rolling down my cheek]: “We…we have decided to…permanently discontinue liverwurst.”

[Crowd gasps]

Me [openly sobbing]: “No further questions.”

The old saying is true. You really don’t want to see the sausage being made.

Support William Moss

Occasional navel-gazing from a trans-Pacific spin doctor.

Read the whole story
chrisamico
1 day ago
reply
Boston, MA
Share this story
Delete

Private schools in Boston suburbs are seeing a student boom. Why?



Lessons from the California Journalism Legislative Debacle


Welcome to Second Rough Draft, a newsletter about journalism in our time, how it (often its business) is evolving, and the challenges it faces.


Earlier this month the federal government alleged that Google is a monopolist, reaping illegal monopoly profits, which strikes me as likely correct, even if not helpful to journalism. Even more recently, Google, undeterred, took the news industry to the cleaners in a battle in the California legislature in Sacramento.

This week I want to tell you what happened, and to try to draw some lessons for future news industry forays into the brutal world of what is politely referred to as “public policy,” but more colloquially as lobbying.

Rebuild Local News, an industry coalition led by the enormously astute Steve Waldman, estimates that just in California, building back what has been lost in local newsrooms would cost $375 million each and every year. A bill passed earlier this year in the State Senate would have provided even more than that—half a billion dollars annually from a tax on platforms. Another bill, originating in the Assembly, would have required platforms to negotiate subsidies with news organizations or pay new fees (read taxes) to the State.

Dropping the mask

Google, which spends millions on lobbying and lawyers with the same level of cost-consciousness that newsrooms display toward buying pizza, reacted angrily. Executives were heard to say that the news industry was being ungrateful for the handouts Google has offered over the years. Then they threatened to cut off Google News in California, and stop all support for news nationwide. In other words, the mask dropped. Google, it turns out, is no fonder of news than Facebook, which hasn’t hidden its own disdain for years now and is steadily eradicating news from its service. Google just wore a velvet glove over its mailed fist, while Facebook didn’t bother.

In the end, the Assembly sponsor and Google crafted a deal that, instead of $375 million a year, or $500 million a year, provides an average of $21 million (which, to save you the math, is less than 6% of $375 million) each year from Google, and for only five years, after which all bets seem to be off. Google’s new front-loaded first-year payment is about one quarter of what it’s paying on a legislative deal in Canada, which has a population just a bit smaller than California’s.

Almost half of the overall money from Google comes from its agreeing not to stop, again for five years, the Google News Initiative payments it had already been making, ostensibly out of the goodness of its corporate heart. Facebook gets off scot-free, as does Amazon, another target of the draft bills.

There is a supposed commitment of another $30 million next year and $10 million per year thereafter from California taxpayer funds, but the fate of that element, which must be included in the budget, would appear at least somewhat in doubt in the State Senate, where both the sponsor of the big bill that passed and the most senior member of the chamber publicly expressed misgivings about the deal. Even if this appropriation passes, and again to save you the math, $10 million is less than 3% of the cost of rebuilding.

Google has also committed some new money to accelerate the adoption of AI, which is a bit like GM agreeing to spend to encourage the purchase of new electric vehicles. Not exactly a public-spirited move.

What are we to make of all this? First, as I have said when expressing misgivings about the big philanthropic initiative in our field, Press Forward, newsrooms are literally my favorite charity, so more money is better. That’s true even when it’s not enough, even when it’s being allocated and spent less than optimally, even if the motivations behind it are more self-interested than not.

And I am not saying that pursuing help from government is necessarily wrong, so long as the mechanism is content-neutral (that is, it doesn’t leave political actors to choose coverage they want to reward or punish), which is a test these proposals met. I also recognize that proponents say this is only a beginning in California. That may be true with respect to the appropriation of taxpayer funds, but it is also likely the end of money for news from the platforms, at least for the five-year term of this deal.

Beyond all that, I see at least a couple of important lessons for newsrooms, and their associations and other representatives in the months and years ahead.

We are knife fighters at a gun fight

The first of these is that “public policy” is not our native milieu. Our experience and expertise are in revealing its occasional corruption, not in joining in the deal-making ourselves. Particularly when contending with the platforms, we will always be overmatched in this arena, and that fact should place significant limitations on our expectations. Google, for instance, spends about as much each year lobbying just the federal government as it will continue to cough up under the Google News Initiative in California.

Getting too invested: “defeat is victory”

Next is that we need to remain true to our principles, including those of candor and independence.

The new money from Google comes with all sorts of exclusions. None of it will go to public broadcasters, which are struggling in California as elsewhere, or local television, which is showing its own signs of business strain, even as it remains a principal source of local news for many people. On the other hand, the hedge funds that own many of California’s remaining newspapers will be significant beneficiaries, and they helped push for this outcome.

None of the money will apparently go to any news organization with less than $100,000 in annual revenues, a threshold which accounts for well more than a third of all of the members of LION Publishers. So what did LION’s CEO say in response? “Lawmakers should be proud of this program,” which provides “immediate and needed relief.”

He wasn’t the only one to echo the Newspeak of Orwell’s Nineteen Eighty-Four (“war is peace; freedom is slavery”) in calling defeat victory. Three California publishers I deeply respect and like personally felt the need to lend their voices to the celebratory announcement of Google vanquishing legislators the publishers had considered their allies. One called it a “win for all Californians,” another termed it “ambitious.” This is the sort of puffery people in newsrooms make fun of every day. We ought to try to avoid doing it ourselves.

Yet, if we are going to play the legislative game, this sort of thing is going to happen more often. While trying to tell our readers that we stand outside of politics, and that they can count on us to reveal to them its foibles and follies, we will find ourselves praising politicians who have betrayed us, currying favor even as we seek to scrutinize, saying things we do not really mean. It might all be enough to make you wonder if the money on offer is really worth it.


With six words, Michelle Obama rewires America’s conversation on race

1 Share

In her speech at the Democratic National Convention in 2016, Michelle Obama coined one of the defining phrases of the political era: “When they go low, we go high.”

Going high did not work. Donald Trump won that election. While many of his supporters expressed discomfort with his go-low approach to politics, far more embraced it. Trump, despite his pedigree as a New York billionaire, would embarrass and attack and disparage the perceived elites, and many Americans loved him for it.

Lesson learned. In her speech Tuesday night at the 2024 Democratic convention, Obama didn’t explicitly revoke the “we go high” mantra, but she made clear that a different moment called for a different approach. It wasn’t that the former first lady went low, exactly, but she was unsparing in her disdain for and criticisms of her husband’s successor.

In one of the more memorable stretches of her speech, she equated the Democratic nominee, Vice President Kamala Harris, with the majority of Americans who never enjoyed Trump’s wealth and privilege — and the safety net that accompanies them.

Harris “understands that most of us will never be afforded the grace of failing forward,” Obama said. “We will never benefit from the affirmative action of generational wealth. If we bankrupt a business or choke in a crisis, we don’t get a second, third or fourth chance. If things don’t go our way, we don’t have the luxury of whining or cheating others to get further ahead. No.

“We don’t get to change the rules so we always win,” she continued. “If we see a mountain in front of us, we don’t expect there to be an escalator waiting to take us to the top. We put our heads down. We get to work.”

Trump’s name wasn’t used, but it didn’t need to be. That line about the escalator, a callback to Trump’s 2015 campaign launch, made the point obvious, if it wasn’t already.

But there are six words in that stretch that extend well beyond Trump. Obama used a phrase that succinctly and elegantly reframes the ongoing debate over inequality in the United States and how it might be addressed: “the affirmative action of generational wealth.”

It’s concise, centered on two familiar concepts. The first is “affirmative action,” the term used to describe programs generally focused on ensuring that non-White Americans have access to resources and institutions they might not otherwise have. And the second is “generational wealth,” the transition of economic (and social) power through families and, at times, communities.

These are descriptors of elements in American society that are in tension. If you are a recipient of generational wealth, you don’t need affirmative action to ensure you have access. If you are someone who would benefit from affirmative action, you generally are not someone with access to generational wealth. Of course, you might be, which is one of the outliers used to criticize affirmative action programs: They often center more on demographic traits than on economic class.

The linchpin of Obama’s phrase, though, is its shortest word: “of.” She isn’t contrasting affirmative action and generational wealth as conduits to power and success, she’s overlapping them. She’s noting that generational wealth is a form of affirmative action, here in the person of Trump but certainly beyond that.

How? Because generational wealth presents opportunities to people who might otherwise not have access to them: legacy admissions at Ivy League colleges, tutors and training, vehicles and housing that make entry-level jobs or internships more feasible. These are benefits that derive from social and economic class — a form of affirmative action. This is how reframing a subject works; it presents familiar information in a new context.

The natural response, of course, is that a parent bolstering her child’s success is different from a government program that includes an effort to ensure that Black Americans have equal access. But this is the point of the word “generational.” We’re not simply considering a rich parent and the advantages they might offer. We’re focused on patterns of wealth transitioning from parent to child over and over again. And those patterns, traced backward over surprisingly few decades, very quickly bring us back to racial divisions.

There is no question that Black and White Americans did not have equal access to economic success in the 1950s or 1960s. They didn’t in later decades, either, thanks to ongoing overt discrimination (like being unable to rent an apartment) and discriminatory patterns built into lending and jobs systems (such as making it harder to obtain a mortgage for homes in some neighborhoods). Nearly every American has a parent or grandparent who was alive in the era of explicit discrimination — that’s two generations away. Generational wealth, then, almost necessarily means wealth rooted in an American economy where explicit discrimination existed. It also means wealth that still enjoys the sorts of systemic protections and advantages, including ones from the government, that are pilloried when focused on addressing historical inequality.

One of the central debates over race in recent years has centered on the existence or extent of racism embedded in American social and legal systems. The rise of the Black Lives Matter movement, focused on systemic racism in law enforcement, increased the number of White Americans — specifically, White Democrats — who indicated that they thought discrimination was a central cause of the lower incomes and worse housing many Black Americans experience.

The biennial General Social Survey, reflecting Republicans’ broad rejection of the idea of systemic racism, finds that they are much more likely to indicate that Black Americans have worse economic positions due to lack of motivation.

Republicans reject the idea of systemic racism, in part, because they view it as an unfair and unpatriotic disparagement of the United States. It’s in part, too, because the narrative of America overcoming explicit racism during the Civil Rights movement suggests that the fight is over. Many point to Michelle Obama’s husband: How could racism exist in an America that elected a Black man as president?

It’s also in part because the Black Lives Matter movement and questions about racism in general are coded as Democratic issues and therefore subject to partisan response. Black Lives Matter led to the right embracing Blue Lives Matter. Discussions of systemic racism were met with many White Republicans viewing themselves as victims of anti-White racism (to Trump’s political benefit). Affirmative action programs became a useful target for demonstrating that sort of anti-White bias.

Michelle Obama knows this. Her line overlapping affirmative action and generational wealth wasn’t offering “affirmative action” as a pejorative term. It was, instead, contextualizing a different way in which people are boosted by circumstances that aren’t always under their control. It was a defense of affirmative action programs that noted how wealth built in an explicitly unfair economy was its own form of unearned advantage.

It was pointed at Trump, yes. But it’s a reframing that rewires the conversation of race and advantage in a striking way. In six words that will likely have more staying power, if not more success, than “we go high.”
