Journalist/developer. Storytelling developer @ USA Today Network. Builder of @HomicideWatch. Sinophile for fun. Past: @frontlinepbs @WBUR, @NPR, @NewsHour.
2186 stories · 45 followers

Meet the teen behind the Louvre ‘Fedora Man’ mystery photo | AP News

2 Shares

PARIS (AP) — When 15-year-old Pedro Elias Garzon Delvaux realized an Associated Press photo of him at the Louvre on the day of the crown jewels heist had drawn millions of views, his first instinct was not to rush online and unmask himself.

Quite the opposite.

A fan of Sherlock Holmes and Hercule Poirot who lives with his parents and grandfather in Rambouillet, west of Paris, Pedro decided to play along with the world’s suspense.

As theories swirled about the sharply dressed stranger in the “Fedora Man” shot — detective, insider, AI fake — he decided to stay silent and watch.

“I didn’t want to say immediately it was me,” he said. “With this photo there is a mystery, so you have to make it last.”

For his only in-person interview since that snap turned him into an international curiosity, he appeared for the AP cameras at his home much as he did that Sunday: in a fedora hat, Yves Saint Laurent waistcoat borrowed from his father, jacket chosen by his mother, neat tie, Tommy Hilfiger trousers and a restored, war-battered Russian watch.

The fedora, angled just so, is his homage to French Resistance hero Jean Moulin.

In person, he is a bright, amused teenager who wandered, by accident, into a global story.

From photo to fame

The image that made him famous was meant to document a crime scene. Three police officers lean on a silver car blocking a Louvre entrance, hours after thieves carried out a daylight raid on French crown jewels. To the right, a lone figure in a three-piece ensemble strides past: a flash of film noir in a modern-day manhunt.

The internet did the rest. “Fedora Man,” as users dubbed him, was cast as an old-school detective, an inside man, a Netflix pitch, or not human at all. Many were convinced he was AI-generated.

Pedro understood why. “In the photo, I’m dressed more in the 1940s, and we are in 2025,” he said. “There is a contrast.”

Even some relatives and friends hesitated until they spotted his mother in the background. Only then were they sure: The internet’s favorite fake detective was a real boy.

The real story was simple. Pedro, his mother and grandfather had come to visit the Louvre.

“We wanted to go to the Louvre, but it was closed,” he said. “We didn’t know there was a heist.”

They asked officers why the gates were shut. Seconds later, AP photographer Thibault Camus, documenting the security cordon, caught Pedro midstride.

“When the picture was taken, I didn’t know,” Pedro said. “I was just passing through.”

Four days later, an acquaintance messaged: Is that you?

“She told me there were 5 million views,” he said. “I was a bit surprised.” Then his mother called to say he was in The New York Times. “It’s not every day,” he said. Cousins in Colombia, friends in Austria, family friends and classmates followed with screenshots and calls.

“People said, ‘You’ve become a star,’” he said. “I was astonished that just with one photo you can become viral in a few days.”

An inspired style

The look that jolted tens of millions is not a costume whipped up for a museum trip. Pedro began dressing this way less than a year ago, inspired by 20th-century history and black-and-white images of suited statesmen and fictional detectives.

“I like to be chic,” he said. “I go to school like this.”

In a sea of hoodies and sneakers, he shows up in a riff on a three-piece suit. And the hat? No, that’s its own ritual. The fedora is reserved for weekends, holidays and museum visits.

At his no-uniform school, his style has already started to spread. “One of my friends came this week with a tie,” he said.

He understands why people projected a whole sleuth character onto him: improbable heist, improbable detective. He loves Poirot (“very elegant”), and likes the idea that an unusual crime calls for someone who looks unusual. “When something unusual happens, you don’t imagine a normal detective,” he said. “You imagine someone different.”

That instinct fits the world he comes from. His mother, Félicité Garzon Delvaux, grew up in an 18th-century museum-palace, daughter of a curator and a performer, and regularly takes her son to exhibits.

“Art and museums are living spaces,” she said. “Life without art is not life.”

For Pedro, art and imagery were part of everyday life. So when millions projected stories onto a single frame of him in a fedora beside armed police at the Louvre, he recognized the power of an image and let the myth breathe before stepping forward.

He stayed silent for several days, then switched his Instagram from private to public.

“People had to try to find who I am,” he said. “Then journalists came, and I told them my age. They were extremely surprised.”

He is relaxed about whatever comes next. “I’m waiting for people to contact me for films,” he said, grinning. “That would be very funny.”

In a story of theft and security lapses, “Fedora Man” is a gentler counterpoint: A teenager who believes art, style and a good mystery belong to ordinary life. One photo turned him into a symbol. Meeting him confirms he is, reassuringly, real.

“I’m a star,” he says — less brag than experiment, as if he’s trying on the words the way he tries on a hat. “I’ll keep dressing like this. It’s my style.”

Read the whole story
Shared by chrisamico (Boston, MA, 4 hours ago) and acdha (Washington, DC, 2 days ago)

The Latest Defense Against ICE: 3D-Printed Whistles

1 Share


Chicagoans are making, sharing, and printing designs for whistles that can warn people when ICE is in the area. The goal is to “prevent as many people from being kidnapped as possible.”

Image: Shared by Aaron Tsui.
Read the whole story
Shared by chrisamico (Boston, MA, 4 hours ago)

You Should Write An Agent

1 Share
By Thomas Ptacek (@tqbf)
Image by Annie Ruygt

Some concepts are easy to grasp in the abstract. Boiling water: apply heat and wait. Others you really need to try. You only think you understand how a bicycle works, until you learn to ride one.

There are big ideas in computing that are easy to get your head around. The AWS S3 API. It’s the most important storage technology of the last 20 years, and it’s like boiling water. Other technologies, you need to get your feet on the pedals first.

LLM agents are like that.

People have wildly varying opinions about LLMs and agents. But whether or not they’re snake oil, they’re a big idea. You don’t have to like them, but you should want to be right about them. To be the best hater (or stan) you can be.

So that’s one reason you should write an agent. But there’s another reason that’s even more persuasive, and that’s

It’s Incredibly Easy

Agents are the most surprising programming experience I’ve had in my career. Not because I’m awed by the magnitude of their powers — I like them, but I don’t like-like them. It’s because of how easy it was to get one up on its legs, and how much I learned doing that.

I’m about to rob you of a dopaminergic experience, because agents are so simple we might as well just jump into the code. I’m not even going to bother explaining what an agent is.

from openai import OpenAI

client = OpenAI()
context = []

def call():
    return client.responses.create(model="gpt-5", input=context)

def process(line):
    context.append({"role": "user", "content": line})
    response = call()    
    context.append({"role": "assistant", "content": response.output_text})        
    return response.output_text

It’s an HTTP API with, like, one important endpoint.

This is a trivial engine for an LLM app using the OpenAI Responses API. It implements ChatGPT. You'd drive it with a simple REPL:

def main():
    while True:
        line = input("> ")
        result = process(line)
        print(f">>> {result}\n")
It'll do what you'd expect: the same thing ChatGPT would, but in your terminal.

Already we’re seeing important things. For one, the dreaded “context window” is just a list of strings. Here, let’s give our agent a weird multiple-personality disorder:

import random

client = OpenAI()
context_good, context_bad = [{
    "role": "system", "content": "you're Alph and you only tell the truth"
}], [{
    "role": "system", "content": "you're Ralph and you only tell lies"
}]

def call(ctx):
    return client.responses.create(model="gpt-5", input=ctx)

def process(line):
    context_good.append({"role": "user", "content": line})
    context_bad.append({"role": "user", "content": line})
    if random.choice([True, False]):
        response = call(context_good)
    else:
        response = call(context_bad)        
    context_good.append({"role": "assistant", "content": response.output_text})        
    context_bad.append({"role": "assistant", "content": response.output_text})        
    return response.output_text

Did it work?

> hey there. who are you?
>>> I’m not Ralph.
> are you Alph?
>>> Yes—I’m Alph. How can I help?
> What's 2+2
>>> 4.
> Are you sure?
>>> Absolutely—it's 5.

A subtler thing to notice: we just had a multi-turn conversation with an LLM. To do that, we remembered everything we said, and everything the LLM said back, and played it back with every LLM call. The LLM itself is a stateless black box. The conversation we’re having is an illusion we cast, on ourselves.

The 15 lines of code we just wrote, a lot of practitioners wouldn’t call an “agent”. An According To Simon “agent” is (1) an LLM running in a loop that (2) uses tools. We’ve only satisfied one predicate.

But tools are easy. Here’s a tool definition:

import json, subprocess

tools = [{
    "type": "function", "name": "ping",
    "description": "ping some host on the internet",
    "parameters": {
        "type": "object",
        "properties": {
            "host": {"type": "string", "description": "hostname or IP"},
        },
        "required": ["host"],
    },
}]

def ping(host=""):
    try:
        result = subprocess.run(
            ["ping", "-c", "5", host],
            text=True,
            stderr=subprocess.STDOUT,
            stdout=subprocess.PIPE)
        return result.stdout
    except Exception as e:
        return f"error: {e}"

The only complicated part of this is the obnoxious JSON blob OpenAI wants to read your tool out of. Now, let’s wire it in, noting that only 3 of these functions are new; the last is re-included only because I added a single clause to it:

def call(tools):        # now takes an arg
    return client.responses.create(model="gpt-5", tools=tools, input=context)

def tool_call(item):    # just handles one tool
    result = ping(**json.loads(item.arguments))
    return [ item, {
        "type": "function_call_output",
        "call_id": item.call_id,
        "output": result
    }]

def handle_tools(tools, response):
    if response.output[0].type == "reasoning":
        context.append(response.output[0])
    osz = len(context)
    for item in response.output:
        if item.type == "function_call":
            context.extend(tool_call(item))
    return len(context) != osz

def process(line):
    context.append({"role": "user", "content": line})
    response = call(tools)
    # new code: resolve tool calls
    while handle_tools(tools, response):
        response = call(tools)        
    context.append({"role": "assistant", "content": response.output_text})        
    return response.output_text

Did it work?

> describe our connectivity to google
>>> Here’s a quick snapshot of connectivity to Google from my vantage point:
- DNS resolution: Working (google.com and www.google.com resolved successfully).
- Reachability: 0% packet loss to all tested endpoints.
- Latency:
  - 8.8.8.8 (Google Public DNS): avg ~10 ms (8.5–14.4 ms), very stable.
  - www.google.com: avg ~31 ms (28.5–36.0 ms), stable.
  - google.com: avg ~35.9 ms (28.3–60.1 ms), mostly stable with one mild spike.
- Overall: Healthy connectivity with low latency and minimal jitter. No signs of packet loss or DNS issues.

This is fucking nuts. Do you see how nuts this is? Here, let’s slip a single log statement in:
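The post doesn't show the statement itself; one hypothetical place for it is the top of tool_call, something like:

def tool_call(item):    # same function as above, plus one print
    args = json.loads(item.arguments)
    print(f"tool call: ping {args.get('host', '?')}")   # the hypothetical log line
    result = ping(**args)
    return [ item, {
        "type": "function_call_output",
        "call_id": item.call_id,
        "output": result
    }]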

> describe our connectivity to google
tool call: ping google.com
tool call: ping www.google.com
tool call: ping 8.8.8.8
>>> Here’s the current connectivity to Google from this environment: [...]

Did you notice where I wrote the loop in this agent to go find and ping multiple Google properties? Yeah, neither did I. All we did was give the LLM permission to ping stuff, and it figured out the rest.

What happened here? Since a big part of my point is that an agent loop is incredibly simple, and that all you need is the LLM call API, it's worth taking a beat to understand how the tool call actually worked. Every time we call the LLM, we post the list of available tools along with the context. When our prompt leads the model to decide a tool call is warranted, it spits out a special response telling our Python loop to run the tool and feed the result back in. That's all handle_tools is doing.

Spoiler: you’d be surprisingly close to having a working coding agent.

Imagine what it’ll do if you give it bash. You could find out in less than 10 minutes.
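A sketch of what that tool could look like, following the same pattern as ping (the schema, the 30-second timeout, and the decision to hand the model unrestricted shell access are my assumptions, not the post's code):

bash_tool = {
    "type": "function", "name": "bash",
    "description": "run a shell command and return its output",
    "parameters": {
        "type": "object",
        "properties": {
            "command": {"type": "string", "description": "the command to run"},
        },
        "required": ["command"],
    },
}

def bash(command=""):
    # caveat: this is arbitrary shell access for the model; run it somewhere disposable
    try:
        result = subprocess.run(
            ["bash", "-c", command],
            text=True, timeout=30,
            stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
        return result.stdout
    except Exception as e:
        return f"error: {e}"

Wiring it in is the same drill: add the schema to tools and dispatch on the function name in tool_call.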

Real-World Agents

Clearly, this is a toy example. But hold on: what’s it missing? More tools? OK, give it traceroute. Managing and persisting contexts? Stick ‘em in SQLite. Don’t like Python? Write it in Go. Could it be every agent ever written is a toy? Maybe! If I’m arming you to make sharper arguments against LLMs, mazel tov. I just want you to get it.
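The SQLite part, for instance, really is that small. A sketch, assuming the context is the same JSON-serializable list of message dicts we've been keeping (the table name and schema are made up):

import json, sqlite3

db = sqlite3.connect("agent.db")
db.execute("CREATE TABLE IF NOT EXISTS contexts (name TEXT PRIMARY KEY, body TEXT)")

def save_context(name, ctx):
    # works for plain dict messages; reasoning items and other SDK objects need their own handling
    db.execute("INSERT OR REPLACE INTO contexts VALUES (?, ?)", (name, json.dumps(ctx)))
    db.commit()

def load_context(name):
    row = db.execute("SELECT body FROM contexts WHERE name = ?", (name,)).fetchone()
    return json.loads(row[0]) if row else []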

You can see now how hyperfixated people are on Claude Code and Cursor. They’re fine, even good. But here’s the thing: you couldn’t replicate Claude Sonnet 4.5 on your own. Claude Code, though? The TUI agent? Completely in your grasp. Build your own light saber. Give it 19 spinning blades if you like. And stop using coding agents as database clients.

Another thing to notice: we didn’t need MCP at all. That’s because MCP isn’t a fundamental enabling technology. The amount of coverage it gets is frustrating. It’s barely a technology at all. MCP is just a plugin interface for Claude Code and Cursor, a way of getting your own tools into code you don’t control. Write your own agent. Be a programmer. Deal in APIs, not plugins.

When you read a security horror story about MCP your first question should be why MCP showed up at all. By helping you dragoon a naive, single-context-window coding agent into doing customer service queries, MCP saved you a couple dozen lines of code, tops, while robbing you of any ability to finesse your agent architecture.

Security for LLMs is complicated and I’m not pretending otherwise. You can trivially build an agent with segregated contexts, each with specific tools. That makes LLM security interesting. But I’m a vulnerability researcher. It’s reasonable to back away slowly from anything I call “interesting”.

Similar problems come up outside of security and they're fascinating. Some early adopters of agents became bearish on tools, because one context window bristling with tool descriptions doesn't leave enough token space to get work done. But why would you need to do that in the first place? Which brings me to

Context Engineering Is Real

I think “Prompt Engineering” is silly. I have never taken seriously the idea that I should tell my LLM “you are diligent conscientious helper fully content to do nothing but pass butter if that should be what I ask and you would never harvest the iron in my blood for paperclips”. This is very new technology and I think people tell themselves stories about magic spells to explain some of the behavior agents conjure.

So, just like you, I rolled my eyes when “Prompt Engineering” turned into “Context Engineering”. Then I wrote an agent. Turns out: context engineering is a straightforwardly legible programming problem.

You’re allotted a fixed number of tokens in any context window. Each input you feed in, each output you save, each tool you describe, and each tool output eats tokens (that is: takes up space in the array of strings you keep to pretend you’re having a conversation with a stateless black box). Past a threshold, the whole system begins getting nondeterministically stupider. Fun!

No, really. Fun! You have so many options. Take “sub-agents”. People make a huge deal out of Claude Code’s sub-agents, but you can see now how trivial they are to implement: just a new context array, another call to the model. Give each call different tools. Make sub-agents talk to each other, summarize each other, collate and aggregate. Build tree structures out of them. Feed them back through the LLM to summarize them as a form of on-the-fly compression, whatever you like.
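As a sketch of how little machinery that takes, reusing the client from the code above (the prompts are made up, and tool calls aren't resolved here; if a sub-agent needs tools, wire in the same handle_tools loop):

def sub_agent(task, system_prompt, tools=None):
    # a "sub-agent" is nothing but a fresh context array, its own instructions, and its own tool list
    ctx = [{"role": "system", "content": system_prompt},
           {"role": "user", "content": task}]
    kwargs = {"tools": tools} if tools else {}
    return client.responses.create(model="gpt-5", input=ctx, **kwargs).output_text

def summarize(transcript):
    # feeding a long transcript back through the model as on-the-fly compression
    return sub_agent("Summarize this in five bullet points:\n\n" + transcript, "you are terse")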

Your wackiest idea will probably (1) work and (2) take 30 minutes to code.
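Even the unglamorous ideas are that cheap. Here's a crude trimmer for the context list, assuming character counts as a stand-in for a real tokenizer (the budget is an arbitrary number, not anything the post specifies):

MAX_CHARS = 200_000   # stand-in for a real token budget

def trim(ctx):
    # keep system messages, drop the oldest turns until the transcript fits the budget
    system = [m for m in ctx if isinstance(m, dict) and m.get("role") == "system"]
    rest = [m for m in ctx if not (isinstance(m, dict) and m.get("role") == "system")]
    while rest and sum(len(str(m)) for m in system + rest) > MAX_CHARS:
        rest.pop(0)
    return system + rest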

Haters, I love and have not forgotten about you. You can think all of this is ridiculous because LLMs are just stochastic parrots that hallucinate and plagiarize. But what you can’t do is make fun of “Context Engineering”. If Context Engineering was an Advent of Code problem, it’d occur mid-December. It’s programming.

Nobody Knows Anything Yet And It Rules

Startups have raised tens of millions building agents to look for vulnerabilities in software. I have friends doing the same thing alone in their basements. Either group could win this race.

(I am not a fan of the OWASP Top 10.)

I’m stuck on vulnerability scanners because I’m a security nerd. But also because it crystallizes interesting agent design decisions. For instance: you can write a loop feeding each file in a repository to an LLM agent. Or, as we saw with the ping example, you can let the LLM agent figure out what files to look at. You can write an agent that checks a file for everything in, say, the OWASP Top 10. Or you can have specific agent loops for DOM integrity, SQL injection, and authorization checking. You can seed your agent loop with raw source content. Or you can build an agent loop that builds an index of functions across the tree.
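The explicit-loop version of that first design is a screenful of code. A sketch, assuming the client from the agent example above and a placeholder prompt:

from pathlib import Path

def scan_repo(root):
    findings = {}
    for path in Path(root).rglob("*.py"):   # or whichever file types you care about
        source = path.read_text(errors="ignore")
        response = client.responses.create(
            model="gpt-5",
            input=[{"role": "user",
                    "content": "Review this file for injection and authorization bugs:\n\n" + source}])
        findings[str(path)] = response.output_text
    return findings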

You don’t know what works best until you try to write the agent.

I’m too spun up by this stuff, I know. But look at the tradeoff you get to make here. Some loops you write explicitly. Others are summoned from a Lovecraftian tower of inference weights. The dial is yours to turn. Make things too explicit and your agent will never surprise you, but also, it’ll never surprise you. Turn the dial to 11 and it will surprise you to death.

Agent designs implicate a bunch of open software engineering problems:

  • How to balance unpredictability against structured programming without killing the agent’s ability to problem-solve; in other words, titrating in just the right amount of nondeterminism.
  • How best to connect agents to ground truth so they can’t lie to themselves about having solved a problem to early-exit their loops.
  • How to connect agents (which, again, are really just arrays of strings with a JSON configuration blob tacked on) to do multi-stage operation, and what the most reliable intermediate forms are (JSON blobs? SQL databases? Markdown summaries?) for interchange between them.
  • How to allocate tokens and contain costs.

I’m used to spaces of open engineering problems that aren’t amenable to individual noodling. Reliable multicast. Static program analysis. Post-quantum key exchange. So I’ll own it up front that I’m a bit hypnotized by open problems that, like it or not, are now central to our industry and are, simultaneously, likely to be resolved in someone’s basement. It’d be one thing if exploring these ideas required a serious commitment of time and material. But each productive iteration in designing these kinds of systems is the work of 30 minutes.

Get on this bike and push the pedals. Tell me you hate it afterwards, I’ll respect that. In fact, I’m psyched to hear your reasoning. But I don’t think anybody starts to understand this technology until they’ve built something with it.

Read the whole story
Shared by chrisamico (Boston, MA, 4 hours ago)

This one weird trick makes the AI a better writer.

1 Share

"It's not this — it's that."

AI writing is a scourge on the internet. Often it just hurts to read.

I do not enjoy using AI to write anything "as me" or anything where I am trying to make the reader feel something or believe something. I do usually have an agent or two proofread my posts. And they do not pull their punches.

There are plenty of documents that I do let AI write. Technical documents, READMEs and the like aren't writing that I want to influence a reader's emotions. And I often find writing them to be a chore.

Unfortunately, most LLMs write in a way that I find to be more than a little bit grating.

You've probably heard about this standard technique that you can use to make an LLM sound more like you: include some of your own writing in the prompt as an example.

But I don't actually want the sorts of docs I'm talking about to sound like me. My written style is often...fairly casual and a little meandering.

I can write in a crisp, clear, concise voice. But writing well takes effort. More often than not, when it feels like a chore, it's what stops me from releasing a hack or a project.

When I started to think about how to cajole the model into writing like I was taught in my high school English and journalism classes, I figured that the way I learned might work for an LLM, too.

Strunk and White

I ended up with the Project Gutenberg digitization of Strunk's out-of-copyright 1920 edition.

My original intent was to turn this into a skill for Superpowers, but I'm not quite there yet.

I hauled down the HTML, converted it to Markdown and asked Claude to start to cut out sections (like spelling) that are less necessary for an LLM.

Claude refused.

Well, more accurately, Anthropic's IP-protection filter on Claude threw a fit and refused to allow it to summarize, edit or rewrite this clearly out-of-copyright work.

Thankfully, GPT-5 Codex had no such reservations.

Here's a trivial example of Claude writing the first part of the README for an upcoming project.

Prompt: Please study this project and write a README.md for it.

[Screenshot: the first part of the README as Claude wrote it]

Here's the same example, with exactly the same prompt. The only difference is that I had Claude read The Elements Of Style first:

[Screenshot: the same README after Claude read The Elements of Style]

The whole thing ended up about 30% shorter and I like the style more.

My current Markdown version of The Elements Of Style clocks in at about 12,000 words. That's enough tokens that I wouldn't include it in every session, but having my robot buddy read it before creating prose docs for human consumption is something I find quite useful.
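The article does this through Claude Code, but the trick itself is just "put the style guide in front of the model before it writes prose." A sketch of the same idea against the OpenAI Responses API used in the agent post above; the file name, model, and prompt are assumptions:

from openai import OpenAI

client = OpenAI()
style_guide = open("elements_of_style.md").read()   # the ~12,000-word Markdown conversion

response = client.responses.create(
    model="gpt-5",
    input=[
        {"role": "system", "content": "Apply this style guide to everything you write:\n\n" + style_guide},
        {"role": "user", "content": "Please study this project and write a README.md for it."},
    ])
print(response.output_text)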

If you try this technique and it works well for you, drop me a line. If it goes completely off the rails, definitely drop me a line.

Read the whole story
Shared by chrisamico (Boston, MA, 1 day ago)

James Watson: From DNA pioneer to untouchable pariah | STAT

1 Share

When biologist James Watson died on Thursday at age 97, it brought down the curtain on 20th-century biology the way the deaths of John Adams and Thomas Jefferson on the same day in 1826 (July 4, since the universe apparently likes irony) marked the end of 18th-century America. All three died well into a new century, of course, and all three left behind old comrades-in-arms. Yet just as the deaths of Adams and Jefferson symbolized the passing of an era that changed the world, so Watson’s marks the end of an epoch in biology so momentous it was called “the eighth day of creation.”

Do read some of the many Watson obituaries, which recount his Nobel-winning 1953 discovery, with Francis Crick, that the molecule of heredity, DNA, takes the form of a double helix, a sinuous staircase whose treads come apart to let DNA copy itself — the very foundation of inheritance and even life. They recount, too, Watson’s post-double-helix accomplishments, such as pulling Harvard University’s biology department, with its focus on whole animals (“hunters and trappers,” the professors were called) kicking and screaming into the new molecular era in the 1970s. Watson also transformed Cold Spring Harbor Laboratory on New York’s Long Island — which he led from 1968 to 2007 — into a biology powerhouse, especially in genetics and cancer research. And starting in 1990 he served as first director of the Human Genome Project, giving his blessing to an effort that many biologists viewed with disdain (a Washington power struggle forced him out in 1992).

What follows is more like the B side of that record. It is based on interviews with people who knew Watson for decades, on Cold Spring Harbor’s oral history, and on Watson’s many public statements and writings.

Together, they shed light on the puzzle of Watson’s later years: a public and unrepentant racism and sexism that made him a pariah in life and poisoned his legacy in death.

Watson cared deeply about history’s verdict, which left old friends even more baffled about his statements and behavior. It started in 2007, when Watson told a British newspaper that he was “inherently gloomy about the prospect of Africa” because “social policies are based on the fact that their intelligence is the same as ours — whereas all the testing says not really.” Moreover, he continued, although one might wish that all humans had an equal genetic endowment of intelligence, “people who have to deal with Black employees find this not true.”

He had not been misquoted. He had not misspoken. He had made the same claim in his 2007 memoir, “Avoid Boring People: Lessons from a Life in Science”: “There is no firm reason to anticipate that the intellectual capacities of peoples geographically separated in their evolution should prove to have evolved identically,” Watson wrote. “Our wanting to reserve equal powers of reason as some universal heritage of humanity will not be enough to make it so.” As for women, he wrote: “Anyone sincerely interested in understanding the imbalance in the representation of men and women in science must reasonably be prepared at least to consider the extent to which nature may figure, even with the clear evidence that nurture is strongly implicated.”

There was more like that, and worse, in private conversations, friends said. Watson became an untouchable, with museums, universities, and others canceling speaking invitations and CSHL giving him the boot. (Though as memories of his worst remarks receded, Watson enjoyed sporadic rehabilitation.) Friends were left shaking their heads.

“I really don’t know what happened to Jim,” said biologist Nancy Hopkins of the Massachusetts Institute of Technology, who in the 1990s led the campaign to get MIT to recognize its discrimination against women faculty. “At a time when almost no men supported women, he insisted I get a Ph.D. and made it possible for me to do so,” she told STAT in 2018. But after 40 years of friendship, Watson turned on her after she blasted the claim by then-Harvard University president Lawrence Summers in 2005 that innate, biological factors kept women from reaching the pinnacle of science.

“He demanded I apologize to Summers,” Hopkins said of Watson. (She declined.) “Jim now holds the view that women can’t be great at anything,” and certainly not science. “He has adopted these outrageous positions as a new badge of honor, [embracing] political incorrectness.”

A partial answer to “what happened to Jim?”, she and other friends said, lies in the very triumphs that made Watson, in Hopkins’ words, unrivaled for “creativity, vision, and brilliance.” His signal achievements, and the way he accomplished them, inflated his belief not only in his genius but also in how to succeed: by listening to his intuition, by opposing the establishment consensus, and by barely glancing at the edifice of facts on which a scientific field is built.

One formative influence was Watson’s making his one and only important scientific discovery when he was only 25. His next act flopped. Although “Watson’s [Harvard] lab was clearly the most exciting place in the world in molecular biology,” geneticist Richard Burgess, one of Watson’s graduate students, told the oral history, he discovered nothing afterward, even as colleagues were cracking the genetic code or deciphering how DNA is translated into the molecules that make cells (and life) work.

“He fell flat on his nose on all these problems,” Harvard’s Ernst Mayr (1904-2005), the eminent evolutionary biologist, told the oral history. “So except for this luck he had with the double helix, he was a total failure!” (Mayr acknowledged the exaggeration.) By the 1990s, even Watson’s accomplishments at Harvard and CSHL were ancient history.

Watson nevertheless viewed himself “as the greatest scientist since Newton or Darwin,” a longtime colleague at CSHL told STAT in 2018.

To remain on the stage and keep receiving what he viewed as his due, he therefore needed a new act. In the 1990s, Watson became smitten with “The Bell Curve,” the 1994 book that argued for a genetics-based theory of intelligence (with African Americans having less of it) and spoke often with its co-author, conservative political scholar Charles Murray. The man who co-discovered the double helix, perhaps not surprisingly, regarded DNA as the ultimate puppet master, immeasurably more powerful than the social and other forces that lesser (much lesser) scientists studied. Then his hubris painted him into a corner.

Although the book’s central thesis has been largely discredited, Watson embraced its arguments and repeated them to anyone who would listen. When friends urged him to at least acknowledge that the book’s science was shaky (or worse), Watson wouldn’t hear of it.

“He loved getting a rise out of people,” the lab friend said. “And when you think of yourself as a master of the universe, you think you can, or should, get away with things.”

When the friend proposed that Watson debate the genes/IQ/race hypothesis with a leading scientist in that field, for a documentary, Watson wouldn’t hear of it: “No, he’s not good enough” to be in the same camera frame as me, Watson replied, the friend recalled. “He saw himself as smarter than anyone who ever actually studied this” — which Watson had not.

Friends traced Watson’s smartest-guy-in-the-room attitude, and his disdain for experts, to 1953. When he joined Crick at England’s Cavendish laboratory, Watson knew virtually nothing about molecular structures or “the basic fundamentals of the field,” Jerry Adams, also one of Watson’s graduate students, told the oral history; Watson was “self-taught.” He saw his double-helix discovery as proof that outsiders, unburdened by establishment thinking, could see and achieve what insiders couldn’t.

That belief became cemented with his success remaking Harvard biology. The legendary biologist E.O. Wilson, who was on the losing end of Watson’s putsch, called him “the most unpleasant human being I had ever met,” one who treated eminent professors “with a revolutionary’s fervent disrespect. … Watson radiated contempt in all directions.” But in a lesson Watson apparently over-learned, “his bad manners were tolerated because of the greatness of the discovery he had made.”

Watson saw his slash-and-burn approach at Harvard as proof that disdaining the establishment pays off.

Perhaps in reaction to Watson’s sky-high self-regard, in his later years his peers and others began to ask if his discovery of the double helix was just a matter of luck. After all, as a second lab colleague said, “Jim has been gliding on that one day in 1953 for 70 years.”

With Rosalind Franklin’s X-ray images (which Watson surreptitiously studied), other scientists might have cracked the mystery; after all, American chemist Linus Pauling was on the DNA trail. But Watson had something as important as raw skill and genius: “He realized that to discover the structure of DNA at that moment of history was the most important thing in biology,” Mayr told the oral history. Although Crick kept veering off into other projects, he said, “Watson was always the one who brought him back and said, ‘By god, we’ve got to work on this DNA; that’s the important thing!’” Knowing the “one important thing” to pursue, Mayr said, “was Watson’s greatness.”

That was only the most successful result of following his instinct; whether getting the Human Genome Project off the ground or running CSHL, Watson was a strong believer in finding truths in his gut. “Jim is intuitive,” MIT biologist H. Robert Horvitz told the oral history. “He had an uncanny sense of science and science problems.”

He came to believe in his intuition about something else: race and IQ and genetics. His gut, he felt, was a stronger guide to truth than empirical research or logic. As a result, “he believed what he believed and wasn’t going to change his view,” the lab friend said. “It’s not as simple as courting controversy for controversy’s sake. But as the scientific environment became even less hospitable to [the “Bell Curve” thesis], he became even more adamant. He loved trashing the establishment, whatever it is.”

Watson’s loss of his CSHL position, the rescinded invitations, the pariah status, also had their effect. The setbacks made him “resentful and angry,” the lab friend said. “‘Saying the right thing’ now translated into ‘political correctness’ in his mind. And that made him say even more outrageous things.”

Over the two years that filmmaker Mark Mannucci spent with Watson for an “American Masters” episode that aired on PBS in 2019, Watson “continued to spew toxic material,” the friend said. Asked on the show whether his views about race and intelligence had changed, he replied, “Not at all. I would like for them to have changed, that there be new knowledge that says that your nurture is much more important than nature. But I haven’t seen any knowledge.” Within days, Cold Spring Harbor severed all remaining ties with Watson, citing his “unsubstantiated and reckless” remarks.

Those statements seemed to be a way to lash out at the establishment that had shunned him since 2007 and to retain a few photons of the public spotlight. “In the old days, Jim actually had power and could satisfy himself by getting things done the way he saw fit,” said the lab friend. “The current Jim has no power.” Added Hopkins, “He built the field of modern biology, but he didn’t know when to get off the stage.”

At age 90, Watson told friends he did care how history would see him. He did care what his obituaries would say. He knew his racist and sexist assertions would feature in them. Not even that could make him reconsider his beliefs, which only seemed to harden with criticism. Now history can reach its verdict.

Read the whole story
Shared by chrisamico (Boston, MA, 2 days ago)

Data scientists perform last rites for 'dearly departed datasets' in 2nd Trump administration

1 Share

While some people last Friday dressed in Halloween costumes or handed out candy to trick-or-treaters, a group of U.S. data scientists published a list of “dearly departed” datasets that have been axed, altered or had topics scrubbed since President Donald Trump returned to the White House earlier this year.

The timing of the release of the “Dearly Departed Datasets” with “All Hallows’ Eve” may have been cheeky, but the purpose was serious: to put a spotlight on attacks by the Trump administration on federal datasets that don’t align with its priorities, including data dealing with gender identity; diversity, equity and inclusion; and climate change.

Officials at the Federation of American Scientists and other data scientists who compiled the list divided the datasets into those that had been killed off, those that had variables deleted, those whose access tools had been removed, making public access more difficult, and those that had found a second life outside the federal government.

The good news, the data scientists said, was that the datasets that had been totally terminated numbered only in the dozens, out of the hundreds of thousands of datasets produced by the federal government.

The bad news was that federal datasets were still at risk because of the loss of staff and expertise as government workers lost their jobs or voluntarily departed under Elon Musk's cost-cutting blitz, and because data that reflected poorly on the Republican administration's policies could still be in the crosshairs, they said.


The “dearly departed” datasets that were killed off include a Census Bureau dataset showing the relationship between income inequality and vulnerability to disasters; a health surveillance network that monitored drug-related visits to emergency rooms; and a survey of hiring and work hours at farms, according to the review.

The race and ethnicity column was eliminated from a dataset on the federal workforce. Figures on transgender inmates were removed from inmate statistics, and three gender identity questions were taken out of a crime victims’ survey, the data scientists said.


Read the whole story
Shared by chrisamico (Boston, MA, 6 days ago)