You Should Write An Agent

By Thomas Ptacek (@tqbf)
Image by Annie Ruygt

Some concepts are easy to grasp in the abstract. Boiling water: apply heat and wait. Others you really need to try. You only think you understand how a bicycle works, until you learn to ride one.

There are big ideas in computing that are easy to get your head around. The AWS S3 API. It’s the most important storage technology of the last 20 years, and it’s like boiling water. Other technologies, you need to get your feet on the pedals first.

LLM agents are like that.

People have wildly varying opinions about LLMs and agents. But whether or not they’re snake oil, they’re a big idea. You don’t have to like them, but you should want to be right about them. To be the best hater (or stan) you can be.

So that’s one reason you should write an agent. But there’s another reason that’s even more persuasive, and that’s

It’s Incredibly Easy

Agents are the most surprising programming experience I’ve had in my career. Not because I’m awed by the magnitude of their powers — I like them, but I don’t like-like them. It’s because of how easy it was to get one up on its legs, and how much I learned doing that.

I’m about to rob you of a dopaminergic experience, because agents are so simple we might as well just jump into the code. I’m not even going to bother explaining what an agent is.

from openai import OpenAI

client = OpenAI()
context = []

def call():
    return client.responses.create(model="gpt-5", input=context)

def process(line):
    context.append({"role": "user", "content": line})
    response = call()    
    context.append({"role": "assistant", "content": response.output_text})        
    return response.output_text

It’s an HTTP API with, like, one important endpoint.

This is a trivial engine for an LLM app using the OpenAI Responses API. It implements ChatGPT. You’d drive it with a simple REPL:

def main():
    while True:
        line = input("> ")
        result = process(line)
        print(f">>> {result}\n")
It’ll do what you’d expect: the same thing ChatGPT would, but in your terminal.

Already we’re seeing important things. For one, the dreaded “context window” is just a list of strings. Here, let’s give our agent a weird multiple-personality disorder:

import random

client = OpenAI()
context_good, context_bad = [{
    "role": "system", "content": "you're Alph and you only tell the truth"
}], [{
    "role": "system", "content": "you're Ralph and you only tell lies"
}]

def call(ctx):
    return client.responses.create(model="gpt-5", input=ctx)

def process(line):
    context_good.append({"role": "user", "content": line})
    context_bad.append({"role": "user", "content": line})
    if random.choice([True, False]):
        response = call(context_good)
    else:
        response = call(context_bad)        
    context_good.append({"role": "assistant", "content": response.output_text})        
    context_bad.append({"role": "assistant", "content": response.output_text})        
    return response.output_text

Did it work?

> hey there. who are you?
>>> I’m not Ralph.
> are you Alph?
>>> Yes—I’m Alph. How can I help?
> What's 2+2
>>> 4.
> Are you sure?
>>> Absolutely—it's 5.

A subtler thing to notice: we just had a multi-turn conversation with an LLM. To do that, we remembered everything we said, and everything the LLM said back, and played it back with every LLM call. The LLM itself is a stateless black box. The conversation we’re having is an illusion we cast, on ourselves.

The 15 lines of code we just wrote, a lot of practitioners wouldn’t call an “agent”. An According To Simon “agent” is (1) an LLM running in a loop that (2) uses tools. We’ve only satisfied one predicate.

But tools are easy. Here’s a tool definition:

import json
import subprocess

tools = [{
    "type": "function", "name": "ping",
    "description": "ping some host on the internet",
    "parameters": {
        "type": "object",
        "properties": {
            "host": {"type": "string", "description": "hostname or IP"},
        },
        "required": ["host"],
    },
}]

def ping(host=""):
    try:
        result = subprocess.run(
            ["ping", "-c", "5", host],
            text=True,
            stderr=subprocess.STDOUT,
            stdout=subprocess.PIPE)
        return result.stdout
    except Exception as e:
        return f"error: {e}"

The only complicated part of this is the obnoxious JSON blob OpenAI wants to read your tool out of. Now, let’s wire it in, noting that only two of these functions are new; the other two are re-included because each picked up a small change:

def call(tools):        # now takes an arg
    return client.responses.create(model="gpt-5", tools=tools, input=context)

def tool_call(item):    # just handles one tool
    result = ping(**json.loads(item.arguments))
    return [ item, {
        "type": "function_call_output",
        "call_id": item.call_id,
        "output": result
    }]

def handle_tools(tools, response):
    if response.output[0].type == "reasoning":
        context.append(response.output[0])
    osz = len(context)
    for item in response.output:
        if item.type == "function_call":
            context.extend(tool_call(item))
    return len(context) != osz

def process(line):
    context.append({"role": "user", "content": line})
    response = call(tools)
    # new code: resolve tool calls
    while handle_tools(tools, response):
        response = call(tools)        
    context.append({"role": "assistant", "content": response.output_text})        
    return response.output_text

Did it work?

> describe our connectivity to google
>>> Here’s a quick snapshot of connectivity to Google from my vantage point:
- DNS resolution: Working (google.com and www.google.com resolved successfully).
- Reachability: 0% packet loss to all tested endpoints.
- Latency:
  - 8.8.8.8 (Google Public DNS): avg ~10 ms (8.5–14.4 ms), very stable.
  - www.google.com: avg ~31 ms (28.5–36.0 ms), stable.
  - google.com: avg ~35.9 ms (28.3–60.1 ms), mostly stable with one mild spike.
- Overall: Healthy connectivity with low latency and minimal jitter. No signs of packet loss or DNS issues.

This is fucking nuts. Do you see how nuts this is? Here, let’s slip a single log statement in:

> describe our connectivity to google
tool call: ping google.com
tool call: ping www.google.com
tool call: ping 8.8.8.8
>>> Here’s the current connectivity to Google from this environment: [...]

Did you notice where I wrote the loop in this agent to go find and ping multiple Google properties? Yeah, neither did I. All we did is give the LLM permission to ping stuff, and it figured out the rest.

What happened here: since a big part of my point is that an agent loop is incredibly simple, and that all you need is the LLM call API, it’s worth taking a beat to understand how the tool call actually worked. Every time we call the LLM, we post the list of available tools along with the context. When the model decides a tool call is warranted, it spits out a special response item, telling our Python loop to run the tool and feed the result back in. That’s all handle_tools is doing.

Spoiler: you’d be surprisingly close to having a working coding agent.

Imagine what it’ll do if you give it bash. You could find out in less than 10 minutes.
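To see how close, here’s what a bash tool might look like, following the exact shape of the ping tool above. The schema and the run_bash name are mine, not from the article, and you’d want to think hard about sandboxing before handing this to a model:

```python
import subprocess

# Hypothetical "bash" tool, same shape as the ping tool above.
bash_tool = {
    "type": "function", "name": "bash",
    "description": "run a shell command and return its combined output",
    "parameters": {
        "type": "object",
        "properties": {
            "command": {"type": "string", "description": "command to run"},
        },
        "required": ["command"],
    },
}

def run_bash(command=""):
    try:
        # Fold stderr into stdout, and cap runtime so the agent can't hang.
        result = subprocess.run(
            ["bash", "-c", command],
            text=True, timeout=30,
            stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
        return result.stdout
    except Exception as e:
        return f"error: {e}"
```

Wiring it in means appending bash_tool to the tools list and dispatching on the function name in tool_call; nothing else changes.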

Real-World Agents

Clearly, this is a toy example. But hold on: what’s it missing? More tools? OK, give it traceroute. Managing and persisting contexts? Stick ‘em in SQLite. Don’t like Python? Write it in Go. Could it be every agent ever written is a toy? Maybe! If I’m arming you to make sharper arguments against LLMs, mazel tov. I just want you to get it.
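Persisting contexts really is about that simple. A minimal sketch, with a messages table and save/load helpers of my own invention (none of this is from the article):

```python
import json
import sqlite3

# Minimal context persistence in SQLite. Each context is a list of dicts,
# stored one JSON-encoded message per row.
db = sqlite3.connect(":memory:")  # use a file path for real persistence
db.execute("""CREATE TABLE IF NOT EXISTS messages (
    session TEXT, seq INTEGER, body TEXT)""")

def save_context(session, context):
    db.execute("DELETE FROM messages WHERE session = ?", (session,))
    db.executemany(
        "INSERT INTO messages VALUES (?, ?, ?)",
        [(session, i, json.dumps(m)) for i, m in enumerate(context)])
    db.commit()

def load_context(session):
    rows = db.execute(
        "SELECT body FROM messages WHERE session = ? ORDER BY seq",
        (session,)).fetchall()
    return [json.loads(r[0]) for r in rows]
```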

You can see now how hyperfixated people are on Claude Code and Cursor. They’re fine, even good. But here’s the thing: you couldn’t replicate Claude Sonnet 4.5 on your own. Claude Code, though? The TUI agent? Completely in your grasp. Build your own light saber. Give it 19 spinning blades if you like. And stop using coding agents as database clients.

Another thing to notice: we didn’t need MCP at all. That’s because MCP isn’t a fundamental enabling technology. The amount of coverage it gets is frustrating. It’s barely a technology at all. MCP is just a plugin interface for Claude Code and Cursor, a way of getting your own tools into code you don’t control. Write your own agent. Be a programmer. Deal in APIs, not plugins.

When you read a security horror story about MCP your first question should be why MCP showed up at all. By helping you dragoon a naive, single-context-window coding agent into doing customer service queries, MCP saved you a couple dozen lines of code, tops, while robbing you of any ability to finesse your agent architecture.

Security for LLMs is complicated and I’m not pretending otherwise. You can trivially build an agent with segregated contexts, each with specific tools. That makes LLM security interesting. But I’m a vulnerability researcher. It’s reasonable to back away slowly from anything I call “interesting”.

Similar problems come up outside of security and they’re fascinating. Some early adopters of agents became bearish on tools, because one context window bristling with tool descriptions doesn’t leave enough token space left to get work done. But why would you need to do that in the first place? Which brings me to

Context Engineering Is Real

I think “Prompt Engineering” is silly. I have never taken seriously the idea that I should tell my LLM “you are diligent conscientious helper fully content to do nothing but pass butter if that should be what I ask and you would never harvest the iron in my blood for paperclips”. This is very new technology and I think people tell themselves stories about magic spells to explain some of the behavior agents conjure.

So, just like you, I rolled my eyes when “Prompt Engineering” turned into “Context Engineering”. Then I wrote an agent. Turns out: context engineering is a straightforwardly legible programming problem.

You’re allotted a fixed number of tokens in any context window. Each input you feed in, each output you save, each tool you describe, and each tool output eats tokens (that is: takes up space in the array of strings you keep to pretend you’re having a conversation with a stateless black box). Past a threshold, the whole system begins getting nondeterministically stupider. Fun!
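One way to see how legible a problem this is: a naive trimming pass that evicts the oldest non-system turns once a rough budget is exceeded. The trim helper and the 4-characters-per-token estimate are my assumptions, not the article’s; a real system would count with the model’s actual tokenizer.

```python
# Naive context trimming: keep system messages, drop the oldest other
# turns until a rough token budget is met. The 4-chars-per-token estimate
# is a rule of thumb, not a real tokenizer.
def trim(context, budget_tokens=8000):
    def rough_tokens(msg):
        return len(str(msg.get("content", ""))) // 4 + 1
    system = [m for m in context if m.get("role") == "system"]
    rest = [m for m in context if m.get("role") != "system"]
    while rest and sum(map(rough_tokens, system + rest)) > budget_tokens:
        rest.pop(0)  # evict the oldest non-system turn
    return system + rest
```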

No, really. Fun! You have so many options. Take “sub-agents”. People make a huge deal out of Claude Code’s sub-agents, but you can see now how trivial they are to implement: just a new context array, another call to the model. Give each call different tools. Make sub-agents talk to each other, summarize each other, collate and aggregate. Build tree structures out of them. Feed them back through the LLM to summarize them as a form of on-the-fly compression, whatever you like.
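A sketch of how little machinery a sub-agent needs, per the description above: a fresh context array and another model call. The make_subagent and summarize_into names are mine; call_model is a stand-in for client.responses.create so the wiring stays visible and testable:

```python
# Each "sub-agent" is just its own context array plus a model call.
def make_subagent(system_prompt, call_model):
    context = [{"role": "system", "content": system_prompt}]
    def run(prompt):
        context.append({"role": "user", "content": prompt})
        reply = call_model(context)
        context.append({"role": "assistant", "content": reply})
        return reply
    return run

def summarize_into(parent_context, label, text, call_model):
    # On-the-fly compression: a sub-agent's output goes back through the
    # model as a one-sentence summary before landing in the parent context.
    summary = call_model([
        {"role": "system", "content": "summarize in one sentence"},
        {"role": "user", "content": text}])
    parent_context.append(
        {"role": "user", "content": f"[{label} summary] {summary}"})
```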

Your wackiest idea will probably (1) work and (2) take 30 minutes to code.

Haters, I love and have not forgotten about you. You can think all of this is ridiculous because LLMs are just stochastic parrots that hallucinate and plagiarize. But what you can’t do is make fun of “Context Engineering”. If Context Engineering was an Advent of Code problem, it’d occur mid-December. It’s programming.

Nobody Knows Anything Yet And It Rules

Startups have raised tens of millions building agents to look for vulnerabilities in software. I have friends doing the same thing alone in their basements. Either group could win this race.

I am not a fan of the OWASP Top 10.

I’m stuck on vulnerability scanners because I’m a security nerd. But also because it crystallizes interesting agent design decisions. For instance: you can write a loop feeding each file in a repository to an LLM agent. Or, as we saw with the ping example, you can let the LLM agent figure out what files to look at. You can write an agent that checks a file for everything in, say, the OWASP Top 10. Or you can have specific agent loops for DOM integrity, SQL injection, and authorization checking. You can seed your agent loop with raw source content. Or you can build an agent loop that builds an index of functions across the tree.
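The first of those designs, the explicit loop over every file, is a few lines. This sketch is mine, not any startup’s; scan_file stands in for whatever per-file agent call you’d make:

```python
from pathlib import Path

# Design one: you write the loop, the model only sees one file at a time.
def scan_repo(root, scan_file, pattern="*.py"):
    findings = {}
    for path in sorted(Path(root).rglob(pattern)):
        findings[str(path)] = scan_file(path.read_text())
    return findings
```

The alternative design hands the agent read_file and list_files tools and lets it choose its own traversal, exactly as the ping example chose its own hosts.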

You don’t know what works best until you try to write the agent.

I’m too spun up by this stuff, I know. But look at the tradeoff you get to make here. Some loops you write explicitly. Others are summoned from a Lovecraftian tower of inference weights. The dial is yours to turn. Make things too explicit and your agent will never surprise you, but also, it’ll never surprise you. Turn the dial to 11 and it will surprise you to death.

Agent designs implicate a bunch of open software engineering problems:

  • How to balance unpredictability against structured programming without killing the agent’s ability to problem-solve; in other words, titrating in just the right amount of nondeterminism.
  • How best to connect agents to ground truth so they can’t lie to themselves about having solved a problem to early-exit their loops.
  • How to connect agents (which, again, are really just arrays of strings with a JSON configuration blob tacked on) into multi-stage pipelines, and what the most reliable intermediate forms (JSON blobs? SQL databases? Markdown summaries?) are for interchange between them.
  • How to allocate tokens and contain costs.

I’m used to spaces of open engineering problems that aren’t amenable to individual noodling. Reliable multicast. Static program analysis. Post-quantum key exchange. So I’ll own it up front that I’m a bit hypnotized by open problems that, like it or not, are now central to our industry and are, simultaneously, likely to be resolved in someone’s basement. It’d be one thing if exploring these ideas required a serious commitment of time and material. But each productive iteration in designing these kinds of systems is the work of 30 minutes.

Get on this bike and push the pedals. Tell me you hate it afterwards, I’ll respect that. In fact, I’m psyched to hear your reasoning. But I don’t think anybody starts to understand this technology until they’ve built something with it.


This one weird trick makes the AI a better writer.


"It's not this — it's that."

AI writing is a scourge on the internet. Often it just hurts to read.

I do not enjoy using AI to write anything "as me" or anything where I am trying to make the reader feel something or believe something. I do usually have an agent or two proofread my posts. And they do not pull their punches.

There are plenty of documents that I do let AI write. Technical documents, READMEs and the like aren't writing that I want to influence a reader's emotions. And I often find writing them to be a chore.

Unfortunately, most LLMs write in a way that I find to be more than a little bit grating.

You've probably heard about this standard technique that you can use to make an LLM sound more like you: include some of your own writing in the prompt as an example.

But I don't actually want the sorts of docs I'm talking about to sound like me. My written style is often...fairly casual and a little meandering.

I can write in a crisp, clear, concise voice. But writing well takes effort. More often than not, when it feels like a chore, it's what stops me from releasing a hack or a project.

When I started to think about how to cajole the model into writing like I was taught in my high school English and journalism classes, I figured that the way I learned might work for an LLM, too.

Strunk and White

I ended up with the Project Gutenberg digitization of Strunk's out-of-copyright 1920 edition.

My original intent was to turn this into a skill for Superpowers, but I'm not quite there yet.

I hauled down the HTML, converted it to Markdown and asked Claude to start to cut out sections (like spelling) that are less necessary for an LLM.

Claude refused.

Well, more accurately, Anthropic's IP-protection filter on Claude threw a fit and refused to allow it to summarize, edit or rewrite this clearly out of copyright work.

Thankfully, GPT-5 Codex had no such reservations.

Here's a trivial example of Claude writing the first part of the README for an upcoming project.

Prompt: Please study this project and write a README.md for it.

[Screenshot: Claude’s default README draft]

Here's the same example, with exactly the same prompt. The only difference is that I had Claude read The Elements Of Style first:

[Screenshot: the README draft after reading The Elements of Style]

The whole thing ended up about 30% shorter and I like the style more.

My current Markdown version of The Elements of Style clocks in at about 12,000 words. That’s enough tokens that I wouldn’t include it in every session, but having my robot buddy read it before creating prose docs for human consumption is something I find quite useful.
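The technique itself is one string concatenation. A sketch, with a file path and helper name of my own invention:

```python
from pathlib import Path

# Hypothetical helper: prepend a style guide to a drafting prompt before
# sending it to the model. The default path is illustrative.
def styled_prompt(task, style_guide_path="elements_of_style.md"):
    guide = Path(style_guide_path).read_text()
    return (
        "Read the following style guide and apply it to any prose "
        f"you write:\n\n{guide}\n\n---\n\n{task}")
```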

If you try this technique and it works well for you, drop me a line. If it goes completely off the rails, definitely drop me a line.


James Watson: From DNA pioneer to untouchable pariah | STAT


When biologist James Watson died on Thursday at age 97, it brought down the curtain on 20th-century biology the way the deaths of John Adams and Thomas Jefferson on the same day in 1826 (July 4, since the universe apparently likes irony) marked the end of 18th-century America. All three died well into a new century, of course, and all three left behind old comrades-in-arms. Yet just as the deaths of Adams and Jefferson symbolized the passing of an era that changed the world, so Watson’s marks the end of an epoch in biology so momentous it was called “the eighth day of creation.”

Do read some of the many Watson obituaries, which recount his Nobel-winning 1953 discovery, with Francis Crick, that the molecule of heredity, DNA, takes the form of a double helix, a sinuous staircase whose treads come apart to let DNA copy itself — the very foundation of inheritance and even life. They recount, too, Watson’s post-double-helix accomplishments, such as pulling Harvard University’s biology department, with its focus on whole animals (“hunters and trappers,” the professors were called) kicking and screaming into the new molecular era in the 1970s. Watson also transformed Cold Spring Harbor Laboratory on New York’s Long Island — which he led from 1968 to 2007 — into a biology powerhouse, especially in genetics and cancer research. And starting in 1990 he served as first director of the Human Genome Project, giving his blessing to an effort that many biologists viewed with disdain (a Washington power struggle forced him out in 1992).

What follows is more like the B side of that record. It is based on interviews with people who knew Watson for decades, on Cold Spring Harbor’s oral history, and on Watson’s many public statements and writings.

Together, they shed light on the puzzle of Watson’s later years: a public and unrepentant racism and sexism that made him a pariah in life and poisoned his legacy in death.

Watson cared deeply about history’s verdict, which left old friends even more baffled about his statements and behavior. It started in 2007, when Watson told a British newspaper that he was “inherently gloomy about the prospect of Africa” because “social policies are based on the fact that their intelligence is the same as ours — whereas all the testing says not really.” Moreover, he continued, although one might wish that all humans had an equal genetic endowment of intelligence, “people who have to deal with Black employees find this not true.”

He had not been misquoted. He had not misspoken. He had made the same claim in his 2007 memoir, “Avoid Boring People: Lessons from a Life in Science”: “There is no firm reason to anticipate that the intellectual capacities of peoples geographically separated in their evolution should prove to have evolved identically,” Watson wrote. “Our wanting to reserve equal powers of reason as some universal heritage of humanity will not be enough to make it so.” As for women, he wrote: “Anyone sincerely interested in understanding the imbalance in the representation of men and women in science must reasonably be prepared at least to consider the extent to which nature may figure, even with the clear evidence that nurture is strongly implicated.”

There was more like that, and worse, in private conversations, friends said. Watson became an untouchable, with museums, universities, and others canceling speaking invitations and CSHL giving him the boot. (Though as memories of his worst remarks receded, Watson enjoyed sporadic rehabilitation.) Friends were left shaking their heads.

“I really don’t know what happened to Jim,” said biologist Nancy Hopkins of the Massachusetts Institute of Technology, who in the 1990s led the campaign to get MIT to recognize its discrimination against women faculty. “At a time when almost no men supported women, he insisted I get a Ph.D. and made it possible for me to do so,” she told STAT in 2018. But after 40 years of friendship, Watson turned on her after she blasted the claim by then-Harvard University president Lawrence Summers in 2005 that innate, biological factors kept women from reaching the pinnacle of science.

“He demanded I apologize to Summers,” Hopkins said of Watson. (She declined.) “Jim now holds the view that women can’t be great at anything,” and certainly not science. “He has adopted these outrageous positions as a new badge of honor, [embracing] political incorrectness.”

A partial answer to “what happened to Jim?”, she and other friends said, lies in the very triumphs that made Watson, in Hopkins’ words, unrivaled for “creativity, vision, and brilliance.” His signal achievements, and the way he accomplished them, inflated his belief not only in his genius but also in how to succeed: by listening to his intuition, by opposing the establishment consensus, and by barely glancing at the edifice of facts on which a scientific field is built.

One formative influence was Watson’s making his one and only important scientific discovery when he was only 25. His next act flopped. Although “Watson’s [Harvard] lab was clearly the most exciting place in the world in molecular biology,” geneticist Richard Burgess, one of Watson’s graduate students, told the oral history, he discovered nothing afterward, even as colleagues were cracking the genetic code or deciphering how DNA is translated into the molecules that make cells (and life) work.

“He fell flat on his nose on all these problems,” Harvard’s Ernst Mayr (1904-2005), the eminent evolutionary biologist, told the oral history. “So except for this luck he had with the double helix, he was a total failure!” (Mayr acknowledged the exaggeration.) By the 1990s, even Watson’s accomplishments at Harvard and CSHL were ancient history.

Watson nevertheless viewed himself “as the greatest scientist since Newton or Darwin,” a longtime colleague at CSHL told STAT in 2018.

To remain on the stage and keep receiving what he viewed as his due, he therefore needed a new act. In the 1990s, Watson became smitten with “The Bell Curve,” the 1994 book that argued for a genetics-based theory of intelligence (with African Americans having less of it) and spoke often with its co-author, conservative political scholar Charles Murray. The man who co-discovered the double helix, perhaps not surprisingly, regarded DNA as the ultimate puppet master, immeasurably more powerful than the social and other forces that lesser (much lesser) scientists studied. Then his hubris painted him into a corner.

Although the book’s central thesis has been largely discredited, Watson embraced its arguments and repeated them to anyone who would listen. When friends urged him to at least acknowledge that the book’s science was shaky (or worse), Watson wouldn’t hear of it.

“He loved getting a rise out of people,” the lab friend said. “And when you think of yourself as a master of the universe, you think you can, or should, get away with things.”

When the friend proposed that Watson debate the genes/IQ/race hypothesis with a leading scientist in that field, for a documentary, Watson refused: “No, he’s not good enough” to be in the same camera frame as me, Watson replied, the friend recalled. “He saw himself as smarter than anyone who ever actually studied this” — which Watson had not.

Friends traced Watson’s smartest-guy-in-the-room attitude, and his disdain for experts, to 1953. When he joined Crick at England’s Cavendish laboratory, Watson knew virtually nothing about molecular structures or “the basic fundamentals of the field,” Jerry Adams, also one of Watson’s graduate students, told the oral history; Watson was “self-taught.” He saw his double-helix discovery as proof that outsiders, unburdened by establishment thinking, could see and achieve what insiders couldn’t.

That belief became cemented with his success remaking Harvard biology. The legendary biologist E.O. Wilson, who was on the losing end of Watson’s putsch, called him “the most unpleasant human being I had ever met,” one who treated eminent professors “with a revolutionary’s fervent disrespect. … Watson radiated contempt in all directions.” But in a lesson Watson apparently over-learned, “his bad manners were tolerated because of the greatness of the discovery he had made.”

Watson saw his slash-and-burn approach at Harvard as proof that disdaining the establishment pays off.

Perhaps in reaction to Watson’s sky-high self-regard, in his later years his peers and others began to ask if his discovery of the double helix was just a matter of luck. After all, as a second lab colleague said, “Jim has been gliding on that one day in 1953 for 70 years.”

With Rosalind Franklin’s X-ray images (which Watson surreptitiously studied), other scientists might have cracked the mystery; after all, American chemist Linus Pauling was on the DNA trail. But Watson had something as important as raw skill and genius: “He realized that to discover the structure of DNA at that moment of history was the most important thing in biology,” Mayr told the oral history. Although Crick kept veering off into other projects, he said, “Watson was always the one who brought him back and said, ‘By god, we’ve got to work on this DNA; that’s the important thing!’” Knowing the “one important thing” to pursue, Mayr said, “was Watson’s greatness.”

That was only the most successful result of following his instinct; whether getting the Human Genome Project off the ground or running CSHL, Watson was a strong believer in finding truths in his gut. “Jim is intuitive,” MIT biologist H. Robert Horvitz told the oral history. “He had an uncanny sense of science and science problems.”

He came to believe in his intuition about something else: race and IQ and genetics. His gut, he felt, was a stronger guide to truth than empirical research or logic. As a result, “he believed what he believed and wasn’t going to change his view,” the lab friend said. “It’s not as simple as courting controversy for controversy’s sake. But as the scientific environment became even less hospitable to [the “Bell Curve” thesis], he became even more adamant. He loved trashing the establishment, whatever it is.”

Watson’s loss of his CSHL position, the rescinded invitations, the pariah status, also had their effect. The setbacks made him “resentful and angry,” the lab friend said. “‘Saying the right thing’ now translated into ‘political correctness’ in his mind. And that made him say even more outrageous things.”

Over the two years that filmmaker Mark Mannucci spent with Watson for an “American Masters” episode that aired on PBS in 2019, Watson “continued to spew toxic material,” the friend said. Asked on the show whether his views about race and intelligence had changed, he replied, “Not at all. I would like for them to have changed, that there be new knowledge that says that your nurture is much more important than nature. But I haven’t seen any knowledge.” Within days, Cold Spring Harbor severed all remaining ties with Watson, citing his “unsubstantiated and reckless” remarks.

Those statements seemed to be a way to lash out at the establishment that had shunned him since 2007 and to retain a few photons of the public spotlight. “In the old days, Jim actually had power and could satisfy himself by getting things done the way he saw fit,” said the lab friend. “The current Jim has no power.” Added Hopkins, “He built the field of modern biology, but he didn’t know when to get off the stage.”

At age 90, Watson told friends he did care how history would see him. He did care what his obituaries would say. He knew his racist and sexist assertions would feature in them. Not even that could make him reconsider his beliefs, which only seemed to harden with criticism. Now history can reach its verdict.


Data scientists perform last rites for 'dearly departed datasets' in 2nd Trump administration


While some people last Friday dressed in Halloween costumes or handed out candy to trick-or-treaters, a group of U.S. data scientists published a list of “dearly departed” datasets that have been axed, altered or had topics scrubbed since President Donald Trump returned to the White House earlier this year.

The timing of the release of the “Dearly Departed Datasets” with “All Hallows’ Eve” may have been cheeky, but the purpose was serious: to put a spotlight on attacks by the Trump administration on federal datasets that don’t align with its priorities, including data dealing with gender identity; diversity, equity and inclusion; and climate change.

Officials at the Federation of American Scientists and other data scientists who compiled the list divided the datasets into those that had been killed off, had variables deleted, had tools removed making public access more difficult and had found a second life outside the federal government.

The good news, the data scientists said, was that the datasets totally terminated number only in the dozens, out of the hundreds of thousands of datasets produced by the federal government.

The bad news was that federal datasets remained at risk: the government had lost staff and expertise as workers were fired or voluntarily departed during Elon Musk’s cost-cutting blitz, and data that reflected poorly on the Republican administration’s policies could still be in the crosshairs, they said.


The “dearly departed” datasets that were killed off include a Census Bureau dataset showing the relationship between income inequality and vulnerability to disasters; a health surveillance network that monitored drug-related visits to emergency rooms; and a survey of hiring and work hours at farms, according to the review.

The race and ethnicity column was eliminated from a dataset on the federal workforce. Figures on transgender inmates were removed from inmate statistics, and three gender identity questions were taken out of a crime victims’ survey, the data scientists said.

___

Follow Mike Schneider on the social platform Bluesky: @mikeysid.bsky.social


Time Capsule: Our Dick Cheney Obituary … From 2012

1 Share

Editor’s Note: I mentioned in today’s Morning Memo that while TPM doesn’t do obituaries, we had for years a draft of one in the can for Dick Cheney. He was too central a figure in the early years of TPM not to have something substantive to say upon his death. In the end, Cheney managed to outlive our meager draft.

I went looking for it when the first alert of his death hit my phone early this morning. I soon got a text from former TPMer Brian Beutler: “Welp that Cheney obit I pre-filed to you ~15 years ago is finally good to go!”

Unable to find it immediately, I enlisted the help of our tech guru Matt Wozniak, and in a dusty old CMS covered in cobwebs, he found it.


UFC's Isaac Dulgarian situation is deja vu all over again. How many alarms until someone wakes up?

1 Share

Stop me if you’ve heard this one: A mostly unheralded fight on a completely skippable UFC Fight Night card ends in controversy after many observers question whether the loser really did all he could to try to win.

Later we learn that betting odds for the bout made a strange and sudden shift just prior to fight time. Eyebrows begin to rise. Suspicions boil and bubble. Certain conclusions get drawn.


And the UFC, under whose banner all this suspicious activity took place? It quietly releases the fighter in question and rolls on to the next one.

This has all happened before. It happened back in 2022, when Darrick Minner lost via first-round TKO in a bout from the UFC APEX that saw an unusual amount of late gambling money come in on his opponent, Shayilan Nuerdanbieke, to win in the first round. As we would later learn, Minner fought with a knee injury, a fact known to coaches and training partners and friends, several of whom were accused of profiting off the insider knowledge with bets placed at online sportsbooks.

Now it’s 2025 and — whoops — seems like it might have happened again. Isaac Dulgarian has been released by the UFC following his own suspiciously low-effort performance at Saturday’s UFC Vegas 110 event (also at the UFC APEX). Dulgarian came in as the heavy betting favorite, but his odds fell sharply just before the fight, indicating that a rush of money had come in on his opponent to win. After Dulgarian tapped out to a choke he made minimal efforts to defend against, some online sportsbooks said they would refund users' losing bets on Dulgarian. Almost as if they, too, had begun to suspect that a fix was in, and preferred to get out ahead of it just so they could not be accused of profiting from it.

In the aftermath of the Minner scandal, UFC CEO Dana White first insisted that nothing untoward had taken place.


“I don’t think anything happened,” White said initially, adding that there was “absolutely zero proof that anybody that was involved (in the fight) bet on it.”

A month later, he had apparently been convinced otherwise. The people involved in this, White said, would go to “federal f***ing prison.” They didn’t, though. The Nevada Athletic Commission handed out suspensions to Minner and his coach James Krause, who was subsequently painted as the ringleader of a vast MMA gambling ring, as well as to Minner’s teammate Jeff Molina. People didn’t exactly get off easy, but they stayed well clear of federal (freaking) prison. UFC banned fighters from working with Krause. He became persona non grata, at least outwardly. Beyond that, the story seemed to just … evaporate.

Back when the Minner incident first happened I spoke to Matthew Holt, then the president of US Integrity, which monitors sports betting activity across a number of platforms. (US Integrity has since been renamed IC360.) Holt told me that his firm had known for some time that insider betting in the UFC was a problem, with fighters and coaches betting on UFC bouts at a rate far higher than what the firm saw from other pro sports leagues. Holt said his company had told the UFC as much on several occasions.

“I think it's (an issue of) league structure, and the UFC is at a disadvantage, to be fair,” Holt told me in 2022. Whereas leagues like the NFL knew exactly who was and wasn’t a team employee, and had them all under the roofs of team facilities confined to the United States, the UFC has fighters all over the world making use of a loose confederation of coaches and training partners.


Still, it’s not as if there’s nothing that could be done. Holt said at the time that his company had noted the strange betting activity ahead of the Minner fight and sent out alerts to various sportsbooks, online and otherwise.

“In this case, what was also really interesting is when we sent out the alert, we got responses from double-digit sportsbooks across the U.S. saying they were seeing very similar activity,” Holt told me. “Abnormally large amounts of money wagered on the under two and a half rounds (prop bet), and abnormally large amounts of money wagered on this fighter to win by first-round knockout.”

Coach James Krause and Darrick Minner (right) were central players in the 2022 UFC betting scandal. (Chris Unger via Getty Images)

This is one of the advantages to having a sports-betting market that exists mostly on the internet and in our phones, all while we are living in the age of the algorithm. It’s much easier to spot suspicious betting and to get the word out in advance. It’s also easier to see who is placing the bets, which means it’s a lot easier for those involved to get caught. It should even be easier for the UFC to get word in advance and pull matches with suspicious betting activity around them.


But then what? That’s the next step that has been missing lately. UFC, like virtually every other pro sports organization, has embraced online sports betting with both arms. You can’t make it through a UFC event without being bombarded by betting odds updates and commercials for sportsbook apps. Gambling has always, always been a part of fight sports, but now it’s out in the open and the promoters get to feast on their own financial piece.

This would seem to argue for a more aggressive approach from the UFC. It needs fans to believe that fights are on the level — for several reasons. Simply dodging the bad publicity and laying low until the story dies a natural death only guarantees that it will eventually happen again.

Isaac Dulgarian was cut from the UFC following suspicious betting activity ahead of a bout he lost at Saturday's UFC event, but he isn't the first to find himself in this situation. (Jeff Bottari via Getty Images)

And the thing is, it will happen again. UFC is uniquely vulnerable to this. Athletes are independent contractors who the UFC has very little contact with or oversight of outside of fight week. Many of these fighters are also among the lowest-paid people in professional sports, and they’re surrounded by people who help them in highly unofficial capacities. If you were a nefarious gambler looking for a way in, you’d have your pick of paths to the waterfall.


The one thing we know for sure is that these issues won’t magically go away on their own. Gambling has its hooks in American sports. No one is about to turn down the money that flows from these apps. Maybe the best we can do is make it legit and fair. But even that is a fight the UFC has to willingly join, and with all the vehemence that it approaches other issues that are in its financial best interests. To not become part of the solution is to declare yourself a part of the problem.
