Journalist/developer. Storytelling developer @ USA Today Network. Builder of @HomicideWatch. Sinophile for fun. Past: @frontlinepbs @WBUR, @NPR, @NewsHour.

Don't be a slop cannon


I wrote this because I made this mistake myself.

The other day, I was attempting to burn through my remaining Claude Code session limit before it reset. I was feeling productive, maybe a little too productive. So I found an open source journalism project I genuinely admire, saw some open issues, and thought I could help. I ran some tests on the code and did my best to verify that the changes were relevant and accurate. But I opened several pull requests — multiple PRs, across multiple repos, in the span of about an hour. All AI-assisted. And that was the problem.

It doesn’t matter that my code was good (I think). The maintainers had no way to know that. To a small team receiving multiple AI-authored PRs from a stranger in rapid succession, the pattern looked like the start of a flood — the kind of flood they’d been reading about other projects drowning in. They had no reason to assume good faith from someone they’d never seen before. They had every reason to be concerned.

A maintainer from the project emailed me. They were gracious and patient about it — far more than they needed to be.

They explained that as a small team, they couldn’t review back-to-back AI-authored pull requests, especially several in one hour. They asked me to pick a single issue, make sure it followed best practices and passed tests in my local dev environment, and then let them know when it was ready for review. No anger. No public shaming. Just a clear, professional request to slow down and do it right.

In my case, the code itself was fine (I think). This was a false positive on quality. But it was a true positive on the pattern — and if they hadn’t said something, I probably would have kept going, submitting PRs on every open issue I felt comfortable tackling. That’s the thing about enthusiasm combined with powerful tools: it doesn’t feel like a flood when you’re the one sending it.

On top of that, even though I did my best to verify what I was submitting, I’m a beginner. There’s an old distinction between “known unknowns” and “unknown unknowns” — the things you know you don’t know versus the things you don’t even know to look for.

As an early-stage contributor, I had plenty of both. There are edge cases, architectural decisions, project-specific conventions, and backward compatibility concerns that an experienced contributor would catch but that I’d walk right past. I didn’t even know what questions to ask, let alone the answers. Following what you think is proper procedure isn’t the same as actually knowing what proper procedure is for a given project.

Every codebase has its own norms, and you can’t learn them from the outside.

That’s why, especially as a beginner, it’s worth going the extra mile before you even think about contributing: actually use the app or project you want to help with. Read through the codebase. Explore the existing issues and past pull requests to understand how the community works. And reach out to the maintainers first — ask if they’re open to AI-assisted contributions, ask if there are norms or practices you should know about, and ask which issues would be most helpful to tackle. A five-minute conversation can save everyone hours of wasted work.

And here’s the uncomfortable truth that goes beyond etiquette: even if you follow every best practice on this page, the maintainer may still not want your code. When AI makes writing code trivial, the code itself stops being the valuable part of a contribution.

Nikita Roy, a data scientist, Knight Fellow at ICFJ, and founder of Newsroom Robots, put it bluntly when I told her about my experience: “AI-generated PRs are putting real strain on maintainers right now, even well-intentioned ones, and it’s a big issue in tech circles. So even with following best practices, I don’t believe that’s necessarily the solution.”

Nikita pointed me to Steve Ruiz’s blog post about shutting down external PRs on tldraw, where he asked: “If writing the code is the easy part, why would I want someone else to write it?” The answer might be that the most valuable thing you can contribute isn’t code — it’s bug reports, documentation, testing, design feedback, or a well-written issue that helps the maintainer understand a problem they haven’t seen yet.

And my situation is still the mild version.

I at least took the time to verify what I was submitting. The problem is made far worse by people who don’t — who point an AI at a repo, generate a patch, and submit it without reading, testing, or understanding any of it. Maintainers can’t tell the difference at a glance between a well-tested AI-assisted PR and a completely untested one. The volume and the pattern look the same from their side.

I got lucky. I got a kind email from a patient person. Many open source maintainers aren’t in a position to be that generous. They’re unpaid volunteers maintaining projects that millions of people depend on, and they’re being hit with a flood of AI-generated contributions from strangers who never bothered to check their work.

Some maintainers have shut down their bug bounty programs. Others have closed their projects to outside contributions entirely. A few have started keeping public lists of repeat offenders. My experience was mild compared to what many of them deal with every day.

Using AI coding agents means you’ll be able to generate code faster than you ever could before. That power comes with a responsibility: as Simon Willison put it, your job is to deliver code you have proven to work.

Just because you can generate a pull request in five minutes doesn’t mean you should.

This post was originally published as part of the course materials for “Advanced prompt engineering for journalists,” a forthcoming MOOC from the Knight Center for Journalism in the Americas at UT Austin.

Read the full guide, list of case studies, and other course resources here.

chrisamico
5 hours ago
Boston, MA

NASA’s Artemis II Is the First Crewed Moon Mission Since 1972. Why Are We Going Back?


A lunar telescope could be installed in a crater on the far side of the moon.

Over the past century, the Earth has become a noisy place for astronomers wishing to listen to the radio waves that fill the universe. Those waves emanate from glowing gas clouds of hydrogen, auroras of distant planets and fast-spinning neutron stars. But those signals are often drowned out by the ubiquitous emissions of modern society: radio and television broadcasts, cellphone calls and industrial electrical equipment.

The Earth’s ionosphere also blocks long-wavelength radio waves, which would give clues about the very early universe, from reaching ground-based radio telescopes. But on the far side of the moon, all that radio noise from Earth is silenced, unable to pass through 2,000 miles of rock, and those long-wavelength radio waves could be observed as well.

Building a radio telescope in a crater on the moon would take advantage of that natural concave shape. A location near the equator in the middle of the far side could be an ideal listening spot.

After years of talking about lunar outposts in vague terms, as something for the indefinite future, NASA recently shifted, putting a continuing U.S. presence on the moon solidly on its road map for the coming decade.

Plans for a moon base would proceed in phases, from regular moon visits to building permanent infrastructure: power and communication systems, vehicles to carry astronauts and cargo across the surface, and possibly nuclear power plants.

Methodology

The 3-D model’s base imagery is from NASA’s Moon CGI kit. Data on lunar landing and crash sites was gathered and verified using multiple sources: NASA Space Science Data Coordinated Archive; China National Space Administration; Japanese Space Agency; European Space Agency; Indian Space Research Organization; and the Smithsonian Institution.

To create the time-lapse animation showing the moon’s permanently shadowed areas at the south pole in January 2026, New York Times journalists used a digital elevation model from the Lunar Orbiter Laser Altimeter (LOLA), data from LOLA’s Gridded Data Records (GDRs) and ephemeris data sourced from the U.S. Geological Survey (USGS) Astropedia.

Frozen water detections were provided by Shuai Li from the University of Hawaii.

Lunar landing sites for future Artemis missions at the South Pole are from NASA’s update from October 2024.

Helium-3 concentration data was provided by Wenzhe Fa from Peking University, China.

Diagrams of the lunar radio telescope deployment and radio interference are based on NASA Jet Propulsion Laboratory’s concepts.

This project also used geographic references from the USGS Geologic Atlas of the Moon and the Lunar South Pole Atlas by the Lunar and Planetary Institute.

chrisamico
1 day ago
Boston, MA

The Axios supply chain attack used individually targeted social engineering


The Axios team have published a full postmortem on the supply chain attack that resulted in a malicious dependency going out in a release the other day, and it involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of how that worked:

So the attack vector mimics what Google has documented here: https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering

They tailored this process specifically to me by doing the following:

  • They reached out masquerading as the founder of a company, having cloned the company founder's likeness as well as the company itself.
  • They then invited me to a real Slack workspace. The workspace was branded with the company's CI and named in a plausible manner. The Slack was thought out very well: they had channels where they were sharing LinkedIn posts (the posts, I presume, just went to the real company's account, but it was super convincing). They even had what I presume were fake profiles of the company's team, but also of a number of other OSS maintainers.
  • They scheduled a meeting with me to connect. The meeting was on MS Teams, with what seemed to be a group of people involved.
  • The meeting software said something on my system was out of date. I installed the missing item, as I presumed it was something to do with Teams, and this was the RAT.
  • Everything was extremely well coordinated, looked legit and was done in a professional manner.

A RAT is a Remote Access Trojan: the software that stole the developer's credentials, which were then used to publish the malicious package.

That's a very effective scam. I join a lot of meetings where I find myself needing to install Webex or Microsoft Teams or similar at the last moment and the time constraint means I always click "yes" to things as quickly as possible to make sure I don't join late.

Every maintainer of open source software used by enough people to be worth attacking in this way needs to be familiar with this attack strategy.

You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

chrisamico
2 days ago
Boston, MA

Thoughts on OpenAI acquiring Astral and uv/ruff/ty


The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv, ruff, and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts!

The official line from OpenAI and Astral

The Astral team will become part of the Codex team at OpenAI.

Charlie Marsh has this to say:

Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...]

After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.

OpenAI's message has a slightly different focus (highlights mine):

As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle.

This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone (Rust regex, ripgrep, jiff) may be worth the price of acquisition!

So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on.

uv is the big one

Of Astral's projects the most impactful is uv. If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD:

(xkcd, "Python Environment": a tangled, chaotic flowchart connecting pip, easy_install, $PYTHONPATH, Anaconda, Homebrew and OS Pythons, captioned "My Python environment has become so degraded that my laptop has been declared a superfund site.")

Switch from python to uv run and most of these problems go away. I've been using it extensively for the past couple of years and it's become an essential part of my workflow.

I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code.
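The "switch from python to uv run" workflow can be sketched with a short, hypothetical example. The script name (demo.py) and the requests dependency are invented for illustration, and the uv command itself appears only as a comment rather than being executed, since uv may not be installed:

```shell
# Create a hypothetical self-contained script with PEP 723 inline metadata.
# uv reads the commented block, builds an ephemeral environment with the
# declared dependencies, and runs the script -- no manual venv or pip step.
cat > demo.py <<'EOF'
# /// script
# requires-python = ">=3.9"
# dependencies = ["requests"]
# ///
import requests
print(requests.__version__)
EOF

# With uv installed, this single command replaces the whole
# venv/pip/python dance from the flowchart:
#   uv run demo.py
```

Because the metadata travels with the file, anyone with uv can run the script without knowing how its environment was originally built.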

Ruff and ty

Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker.

These are popular tools that provide a great developer experience but they aren't load-bearing in the same way that uv is.

They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate.

I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here.
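The feedback loop described above can be illustrated with a minimal sketch. Neither ruff nor ty is assumed to be installed; Python's built-in compile() stands in here as the "fast checker" whose diagnostics would be fed back into the agent's next attempt:

```python
# Sketch of the agent feedback loop: a coding agent proposes a patch, a
# fast checker flags problems, and the diagnostics go back into the next
# prompt. compile() is a syntax-only stand-in for a real linter/type checker.
def check(source: str) -> list[str]:
    """Return diagnostics for a proposed patch (syntax-level only)."""
    try:
        compile(source, "<patch>", "exec")
        return []
    except SyntaxError as e:
        return [f"line {e.lineno}: {e.msg}"]

bad_patch = "def add(a, b)\n    return a + b\n"    # missing colon
good_patch = "def add(a, b):\n    return a + b\n"

assert check(bad_patch)          # diagnostics the agent would be re-prompted with
assert check(good_patch) == []   # clean patch passes the checker
```

A real setup would swap in tools like ruff or ty for much richer diagnostics; the structure of the loop stays the same.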

What of pyx?

Ever since uv started to gain traction the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024.

The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx, their private PyPI-style package registry for organizations.

I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts.

Competitive dynamics

An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI.

Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development.

The competition between Anthropic's Claude Code and OpenAI's Codex is fierce. Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money.

Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral.

Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner.

One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic.

Astral's quiet series A and B

One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community:

Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.

As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023.

Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell.

Forking as a credible exit?

Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine):

However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.

Astral's own Douglas Creager emphasized this angle on Hacker News today:

All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever".

I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home.

OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism).

If things do go south for uv and the other Astral projects we'll get to see how credible the forking exit strategy turns out to be.


chrisamico
8 days ago
Boston, MA

The autism spectrum isn’t a sliding scale; 39 traits show the complexity

1 Share

March 17, 2026



Here’s what the autism spectrum really looks like

The autism spectrum is big, vibrant and complicated, a new graphic of 39 traits shows

Cropped image of a row of three colorful sunburst charts.

Amanda Montañez

Autism is a spectrum. This metaphor is a helpful way to explain why autism looks and feels so varied across different people. Since 2013 it’s been baked into the name of the diagnosis itself, autism spectrum disorder (ASD). But what does this spectrum look like?

It’s not simply a one-dimensional scale from “more autistic” to “less autistic,” which would collapse so much of the diversity that the spectrum metaphor is meant to showcase. There is no single trait that defines autism: it encompasses differences in social communication skills, interests, sensory sensitivities, and more. Every person’s profile is unique. These graphics, based on clinicians’ evaluations of actual people using the Autism Symptom Dimensions Questionnaire, reveal a more nuanced “spectrum” of differences.

And this picture doesn’t factor in how people’s profiles change over time in response to treatments, life circumstances or age. It also doesn’t measure individuals’ overall cognitive ability, something researchers treat as a separate but important feature that can affect someone’s particular constellation of traits.

Not all these characteristics are impairments that should be treated. “Someone not making eye contact is useful information for diagnosing autism,” but it is not necessarily an appropriate target for intervention, says Ari Ne’eman, co-founder of the Autistic Self Advocacy Network and a health policy researcher at Harvard University. Many of these traits are best thought of as normal human variation rather than something to be treated or changed, Ne’eman says.

A spectrum in many dimensions

Each of the 39 wedges in the circle represents one question in the Autism Symptom Dimensions Questionnaire. The traits associated with each question (listed below) are grouped into key symptom factors—the main aspects of behavior that evaluators look for when they assess someone for autism.
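The data shape behind such a chart can be sketched in a few lines. The factor names, the question counts per factor, and the 0-3 scores below are all invented for illustration; only the 39-question total comes from the questionnaire described above:

```python
import random

random.seed(0)  # reproducible made-up profile

# Hypothetical grouping: factor name -> number of questionnaire items.
factors = {
    "social communication": 12,
    "restricted interests": 9,
    "sensory sensitivity": 10,
    "other traits": 8,
}
assert sum(factors.values()) == 39  # one wedge per questionnaire item

# One simulated person: a 0-3 severity score for every question.
profile = {
    name: [round(random.uniform(0, 3), 1) for _ in range(n)]
    for name, n in factors.items()
}

# A wedge chart keeps all 39 numbers visible; a single overall "score"
# would collapse exactly the variation the article is highlighting.
for name, scores in profile.items():
    print(f"{name:>22}: mean {sum(scores) / len(scores):.2f} over {len(scores)} items")
```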

Amanda Montañez; Source: “The Autism Symptom Dimensions Questionnaire: Development and Psychometric Evaluation of a New, Open-Source Measure of Autism Symptomatology,” by Thomas W. Frazier et al., in Developmental Medicine & Child Neurology, Vol. 65, No. 8; August 2023 (data)

Variation across individuals

These charts represent questionnaire responses for three different autistic individuals. These data reflect each person’s strengths and challenges at their current stage of development and may change over time.

Amanda Montañez; Source: “The Autism Symptom Dimensions Questionnaire: Development and Psychometric Evaluation of a New, Open-Source Measure of Autism Symptomatology,” by Thomas W. Frazier et al., in Developmental Medicine & Child Neurology, Vol. 65, No. 8; August 2023 (data)

It’s Time to Stand Up for Science

If you enjoyed this article, I’d like to ask for your support. Scientific American has served as an advocate for science and industry for 180 years, and right now may be the most critical moment in that nearly two-century history.

I’ve been a Scientific American subscriber since I was 12 years old, and it helped shape the way I look at the world. SciAm always educates and delights me, and inspires a sense of awe for our vast, beautiful universe. I hope it does that for you, too.

If you subscribe to Scientific American, you help ensure that our coverage is centered on meaningful research and discovery; that we have the resources to report on the decisions that threaten labs across the U.S.; and that we support both budding and working scientists at a time when the value of science itself too often goes unrecognized.

In return, you get essential news, captivating podcasts, brilliant infographics, can't-miss newsletters, must-watch videos, challenging games, and the science world's best writing and reporting. You can even gift someone a subscription.

There has never been a more important time for us to stand up and show why science matters. I hope you’ll support us in that mission.

chrisamico
9 days ago
Boston, MA

Astral to join OpenAI


I started Astral to make programming more productive.

From the beginning, our goal has been to build tools that radically change what it feels like to work with Python – tools that feel fast, robust, intuitive, and integrated.

Today, we're taking a step forward in that mission by announcing that we've entered into an agreement to join OpenAI as part of the Codex team.

Over the past few years, our tools have grown from zero to hundreds of millions of downloads per month across Ruff, uv, and ty. The Astral toolchain has become foundational to modern Python development. The numbers – and the impact – went far beyond my most ambitious expectations at every step of the way.

Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community – and for the broader Python ecosystem – just as we have from the start.

I view building tools as an incredibly high-leverage endeavor. As I wrote in our launch post three years ago: "If you could make the Python ecosystem even 1% more productive, imagine how that impact would compound?"

Today, AI is rapidly changing the way we build software, and the pace of that change is only accelerating. If our goal is to make programming more productive, then building at the frontier of AI and software feels like the highest-leverage thing we can do.

It is increasingly clear to me that Codex is that frontier. And by bringing Astral's tooling and expertise to OpenAI, we're putting ourselves in a position to push it forward. After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.

Through it all, though, our goal remains the same: to make programming more productive. To build tools that radically change what it feels like to build software.

On a personal note, I want to say thank you, first, to the Astral team, who have always put our users first and shipped some of the most beloved software in the world. You've pushed me to be a better leader and a better programmer. I am so excited to keep building with you.

Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.

And third, to our users. Our tools exist because of you. Thank you for your trust. We won't let you down.

chrisamico
18 days ago
Boston, MA
acdha
17 days ago
I hope this turns out better than I fear. Last year there was so much discussion about this at PyCon and I'd bet this year that'll be half of the hallway track.