Journalist/developer. Storytelling developer @ USA Today Network. Builder of @HomicideWatch. Sinophile for fun. Past: @frontlinepbs @WBUR, @NPR, @NewsHour.

I Decompiled the White House's New App

1 Share

The White House released an app on the App Store and Google Play. They posted a blog about it. "Unparalleled access to the Trump Administration."

It took a few minutes to pull the APKs with ADB and throw them into JADX.

Here is everything I found.

It's a React Native app built with Expo (SDK 54), running on the Hermes JavaScript engine. The backend is WordPress with a custom REST API. The app was built by an entity called "forty-five-press" according to the Expo config.

The actual app logic is compiled into a 5.5 MB Hermes bytecode bundle. The native Java side is just a thin wrapper.

Version 47.0.1. Build 20. Hermes enabled. New Architecture enabled. Nothing weird here. Let's keep going.

Two things stand out here. First, there's a plugin called withNoLocation. Second, there's a plugin called withStripPermissions. Remember these. They become relevant very soon.

OTA updates are disabled. The Expo update infrastructure is compiled in but dormant.

I extracted every string from the Hermes bytecode bundle and filtered for URLs and API endpoints. The app's content comes from a WordPress REST API at whitehouse.gov with a custom whitehouse/v1 namespace.
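
That extraction step can be sketched in a few lines of Python. This is a generic strings-and-filter pass, not the exact tooling I used; the blob in the demo is synthetic, with a real bundle you'd read the file with open(path, "rb").

```python
import re

def extract_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Pull printable-ASCII runs out of a binary blob, strings(1)-style."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

def find_endpoints(strings: list[str]) -> list[str]:
    """Keep only strings that look like URLs or WordPress REST routes."""
    pattern = re.compile(r"https?://\S+|/wp-json/\S+")
    return [s for s in strings if pattern.search(s)]

# Demo on a synthetic blob standing in for the 5.5 MB Hermes bundle.
blob = b"\x00junk\x00/wp-json/whitehouse/v1/wire\x00\x01https://www.whitehouse.gov\x00"
hits = find_endpoints(extract_strings(blob))
```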

Here are the endpoints:

  • /wp-json/whitehouse/v1/home: Home screen
  • /wp-json/whitehouse/v1/news/articles: News articles
  • /wp-json/whitehouse/v1/wire: "The Wire" news feed
  • /wp-json/whitehouse/v1/live: Live streams
  • /wp-json/whitehouse/v1/galleries: Photo galleries
  • /wp-json/whitehouse/v1/issues: Policy issues
  • /wp-json/whitehouse/v1/priorities: Priorities
  • /wp-json/whitehouse/v1/achievements: Achievements
  • /wp-json/whitehouse/v1/affordability: Drug pricing
  • /wp-json/whitehouse/v1/media-bias: "Media Bias" section
  • /wp-json/whitehouse/v1/social/x: X/Twitter feed proxy
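
Each full route is just the namespace plus a path. Assuming the API is served over HTTPS from www.whitehouse.gov (the host referenced in the bundle; scheme and subdomain are my assumption), the URLs assemble like this:

```python
BASE = "https://www.whitehouse.gov"     # assumed scheme/subdomain for the whitehouse.gov host
NAMESPACE = "/wp-json/whitehouse/v1"    # custom WordPress REST namespace from the bundle

def endpoint(path: str) -> str:
    """Build a full URL for one of the custom whitehouse/v1 routes."""
    return f"{BASE}{NAMESPACE}/{path}"

wire_url = endpoint("wire")
```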

Other hardcoded strings from the bundle: "THE TRUMP EFFECT", "Greatest President Ever!" (lol), "Text President Trump", "Send a text message to President Trump at 45470", "Visit TrumpRx.gov", "Visit TrumpAccounts.gov".

There's also a direct link to https://www.ice.gov/webform/ice-tip-form. The ICE tip reporting form. In a news app.

It's a content portal. News, live streams, galleries, policy pages, social media embeds, and promotional material for administration initiatives. All powered by WordPress.

Now let's look at what else it does.

The app has a WebView for opening external links. Every time a page loads in this WebView, the app injects a JavaScript snippet. I found it in the Hermes bytecode string table:

Read that carefully. It hides:

  • Cookie banners
  • GDPR consent dialogs
  • OneTrust popups
  • Privacy banners
  • Login walls
  • Signup walls
  • Upsell prompts
  • Paywall elements
  • CMP (Consent Management Platform) boxes

It forces body { overflow: auto !important } to re-enable scrolling on pages where consent dialogs lock the scroll. Then it sets up a MutationObserver to continuously nuke any consent elements that get dynamically added.

An official United States government app is injecting CSS and JavaScript into third-party websites to strip away their cookie consent dialogs, GDPR banners, login gates, and paywalls.

The native side confirms this is the injectedJavaScript prop on the React Native WebView:

Every page load in the in-app browser triggers this. It wraps the injection in an IIFE and runs it via Android's evaluateJavascript().

Remember withNoLocation from the Expo config? The plugin that's supposed to strip location? Yeah. The OneSignal SDK's native location tracking code is fully compiled into the APK.

270,000 milliseconds is 4.5 minutes. 570,000 is 9.5 minutes.
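
In other words (interval constants as found in the decompiled SDK; the rest is just unit conversion):

```python
FOREGROUND_INTERVAL_MS = 270_000   # foreground polling interval from the APK
BACKGROUND_INTERVAL_MS = 570_000   # background polling interval from the APK

fg_minutes = FOREGROUND_INTERVAL_MS / 60_000   # 4.5 minutes
bg_minutes = BACKGROUND_INTERVAL_MS / 60_000   # 9.5 minutes

# At the foreground rate, a 12-hour day yields 12 * 60 / 4.5 = 160 location fixes.
fixes_per_12h = 12 * 60 / fg_minutes
```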

To be clear about what activates this: the tracking doesn't start silently. There are three gates. The LocationManager checks all of them before the fused location API ever fires.

First, the _isShared flag. It's read from SharedPreferences on init and defaults to false. The JavaScript layer can flip it on with setLocationShared(true). The Hermes string table confirms both setLocationShared and isLocationShared are referenced in the app's JS bundle, so the app has the ability to toggle this.

Second, the user has to grant the Android runtime location permission. The location permissions aren't declared in the AndroidManifest but requested at runtime. The Google Play Store listing confirms the app asks for "access precise location only in the foreground" and "access approximate location only in the foreground."

Third, the start() method only proceeds if the device actually has a location provider (GMS or HMS).
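
Modeled in Python (names are illustrative; the real code is Java/Kotlin inside the OneSignal SDK), the gating reduces to a three-way AND:

```python
class LocationGates:
    """Illustrative model of the three gates guarding OneSignal's location tracking."""

    def __init__(self, is_shared: bool, permission_granted: bool, has_provider: bool):
        self.is_shared = is_shared                    # _isShared flag, settable from JS
        self.permission_granted = permission_granted  # Android runtime location permission
        self.has_provider = has_provider              # GMS or HMS fused location available

    def should_start(self) -> bool:
        # All three must hold before the fused location API ever fires.
        return self.is_shared and self.permission_granted and self.has_provider
```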

If all three gates pass, here's what runs. The fused location API requests GPS at the intervals defined above:

This gets called on both onFocus() and onUnfocused(), dynamically switching between the 4.5-minute foreground interval and the 9.5-minute background interval.

When a location update comes in, it feeds into the LocationCapturer:

Latitude, longitude, accuracy, timestamp, whether the app was in the foreground or background, and whether it was fine (GPS) or coarse (network). All of it gets written into OneSignal's PropertiesModel, which syncs to their backend.
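
As a rough model of the captured record (field names are mine, not the SDK's; the coordinates below are just an example):

```python
from dataclasses import dataclass

@dataclass
class LocationPoint:
    """One captured fix, matching the fields described above."""
    lat: float
    lng: float
    accuracy_m: float
    timestamp_ms: int
    background: bool   # was the app backgrounded when the fix arrived?
    fine: bool         # True for a GPS fix, False for coarse/network

# Hypothetical fix, as it would be handed to the properties model for sync.
point = LocationPoint(38.8977, -77.0365, 12.0, 1_700_000_000_000,
                      background=True, fine=True)
```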

The data goes here:

There's also a background service that keeps capturing location even when the app isn't active:

So the tracking isn't unconditionally active. But the entire pipeline, including permission strings, interval constants, fused location requests, capture logic, background scheduling, and the sync to OneSignal's API, is fully compiled in and one setLocationShared(true) call away from activating. The withNoLocation Expo plugin clearly did not strip any of this. Whether the JS layer currently calls setLocationShared(true) is something I can't determine from the native side alone, since the Hermes bytecode is compiled and the actual call site is buried in the 5.5 MB bundle. What I can say is that the infrastructure is there, ready to go, and the JS API to enable it is referenced in the bundle.

OneSignal is doing a lot more than push notifications in this app. From the Hermes string table:

  • addTag - tag users for segmentation
  • addSms - associate phone numbers with user profiles
  • addAliases - cross-device user identification
  • addOutcomeWithValue / addUniqueOutcome - track user actions and conversions
  • OneSignal-notificationClicked - notification tap tracking
  • OneSignal-inAppMessageClicked / WillDisplay / DidDisplay / WillDismiss / DidDismiss - full in-app message lifecycle tracking
  • OneSignal-permissionChanged / subscriptionChanged / userStateChanged - state change tracking
  • setLocationShared / isLocationShared - location toggle
  • setPrivacyConsentRequired / setPrivacyConsentGiven - consent gating

The local database tracks every notification received and whether it was opened or dismissed:

Your location, your notification interactions, your in-app message clicks, your phone number if you provide it, your tags, your state changes. All going to OneSignal's servers.

The app embeds YouTube videos using the react-native-youtube-iframe library. This library loads its player HTML from:

That's a personal GitHub Pages site. If the lonelycpp GitHub account gets compromised, whoever controls it can serve arbitrary HTML and JavaScript to every user of this app, executing inside the WebView context.

This is a government app loading code from a random person's GitHub Pages.

The app loads third-party JavaScript from Elfsight to embed social media feeds:

Elfsight is a commercial SaaS widget company. Their JavaScript runs inside the app's WebView with no sandboxing. Whatever tracking Elfsight does, it does it here too. Their code can change at any time. The Elfsight widget ID 4a00611b-befa-466e-bab2-6e824a0a98a9 is hardcoded in an HTML embed.

  • Mailchimp at whitehouse.us10.list-manage.com/subscribe/post-json handles email signups. User emails go to Mailchimp's infrastructure.
  • Uploadcare at ucarecdn.com hosts content images via six hardcoded UUIDs.
  • Truth Social has a hardcoded HTML embed with Trump's profile, an avatar image URL from static-assets-1.truthsocial.com, and a "Follow on Truth Social" button.
  • Facebook page plugin is loaded in an iframe via facebook.com/plugins/page.php.

None of these are government-controlled infrastructure.

The app uses standard Android TrustManager for SSL with no custom certificate pinning. If you're on a network with a compromised CA (corporate proxies, public wifi with MITM, etc.), traffic between the app and its backends can be intercepted and read.
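
For contrast, certificate pinning means refusing any certificate whose fingerprint doesn't match a known value, even when a trusted CA signed it. A minimal sketch of the idea (not the app's code; the app has none), using stand-in byte strings where real pinning would hash the server's DER-encoded certificate:

```python
import hashlib

def is_pinned(cert_der: bytes, pinned_sha256: str) -> bool:
    """Accept a server certificate only if its SHA-256 matches the pin."""
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256

# Stand-in byte strings for demonstration purposes only.
real_cert = b"expected server certificate"
mitm_cert = b"certificate minted by a compromised CA"

pin = hashlib.sha256(real_cert).hexdigest()
```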

The build has some sloppy leftovers.

A localhost URL made it into the production Hermes bundle:

That's the React Native Metro bundler dev server.

A developer's local IP is hardcoded in the string resources:

The Expo development client (expo-dev-client, expo-devlauncher, expo-devmenu) is compiled into the release build. There's a dev_menu_fab_icon.png in the drawable resources. The Compose PreviewActivity is exported in the manifest, which is a development-only component that should not be in a production APK.

The AndroidManifest itself is pretty standard for a notification-heavy app:

Plus about 16 badge permissions for Samsung, HTC, Sony, Huawei, OPPO, and other launchers. These just let the app show notification badge counts. Not interesting.

The interesting permissions are the ones that aren't in the manifest but are hardcoded as runtime request strings in the OneSignal SDK, as covered above. Fine location. Coarse location. Background location.

The Google Play listing also mentions: "modify or delete the contents of your shared storage", "run foreground service", "this app can appear on top of other apps", "run at startup", "use fingerprint hardware", "use biometric hardware."

The file provider config is also worth mentioning:

That exposes the entire external storage root. It's used by the WebView for file access.

68+ libraries are compiled into this thing. The highlights:

  • Framework: React Native, Expo SDK 54, Hermes JS engine
  • Push/Engagement: OneSignal, Firebase Cloud Messaging, Firebase Installations
  • Analytics/Telemetry: Firebase Analytics, Google Data Transport, OpenTelemetry
  • Networking: OkHttp 3, Apollo GraphQL, Okio
  • Images: Fresco, Glide, Coil 3, Uploadcare CDN
  • Video: ExoPlayer (Media3), Expo Video
  • ML: Google ML Kit Vision (barcode scanning), Barhopper model
  • Crypto: Bouncy Castle
  • Storage: Expo Secure Store, React Native Async Storage
  • WebView: React Native WebView (with the injection script)
  • DI: Koin
  • Serialization: GSON, Wire (Protocol Buffers)
  • License: PairIP license check (Google Play verification)

25 native .so libraries in the arm64 split. The full Hermes engine, React Native core, Reanimated, gesture handler, SVG renderer, image pipeline, barcode scanner, and more.

The official White House Android app:

  1. Injects JavaScript into every website you open through its in-app browser to hide cookie consent dialogs, GDPR banners, login walls, signup walls, upsell prompts, and paywalls.

  2. Has a full GPS tracking pipeline compiled in that polls every 4.5 minutes in the foreground and 9.5 minutes in the background, syncing lat/lng/accuracy/timestamp to OneSignal's servers.

  3. Loads JavaScript from a random person's GitHub Pages site (lonelycpp.github.io) for YouTube embeds. If that account is compromised, arbitrary code runs in the app's WebView.

  4. Loads third-party JavaScript from Elfsight (elfsightcdn.com/platform.js) for social media widgets, with no sandboxing.

  5. Sends email addresses to Mailchimp, images are served from Uploadcare, and a Truth Social embed is hardcoded with static CDN URLs. None of this is government infrastructure.

  6. Has no certificate pinning. Standard Android trust management.

  7. Ships with dev artifacts in production. A localhost URL, a developer IP (10.4.4.109), the Expo dev client, and an exported Compose PreviewActivity.

  8. Profiles users extensively through OneSignal - tags, SMS numbers, cross-device aliases, outcome tracking, notification interaction logging, in-app message click tracking, and full user state observation.

Is any of this illegal? Probably not. Is it what you'd expect from an official government app? Probably not either.

Read the whole story
chrisamico
4 hours ago
reply
Boston, MA
Share this story
Delete

Don't be a slop cannon


I wrote this because I made this mistake myself.

The other day, I was attempting to burn through my remaining Claude Code session limit before it reset. I was feeling productive, maybe a little too productive. So I found an open source journalism project I genuinely admire, saw some open issues, and thought I could help. I ran some tests on the code and did my best to verify that the changes were relevant and accurate. But I opened several pull requests — multiple PRs, across multiple repos, in the span of about an hour. All AI-assisted. And that was the problem.

It doesn’t matter that my code was good (I think). The maintainers had no way to know that. To a small team receiving multiple AI-authored PRs from a stranger in rapid succession, the pattern looked like the start of a flood — the kind of flood they’d been reading about other projects drowning in. They had no reason to assume good faith from someone they’d never seen before. They had every reason to be concerned.

A maintainer from the project emailed me. They were gracious and patient about it — far more than they needed to be.

They explained that as a small team, they couldn’t review back-to-back AI-authored pull requests, especially several in one hour. They asked me to pick a single issue, make sure it followed best practices and passed tests in my local dev environment, and then let them know when it was ready for review. No anger. No public shaming. Just a clear, professional request to slow down and do it right.

In my case, the code itself was fine (I think). This was a false positive on quality. But it was a true positive on the pattern — and if they hadn’t said something, I probably would have kept going, submitting PRs on every open issue I felt comfortable tackling. That’s the thing about enthusiasm combined with powerful tools: it doesn’t feel like a flood when you’re the one sending it.

On top of that, even though I did my best to verify what I was submitting, I’m a beginner. There’s an old distinction between “known unknowns” and “unknown unknowns” — the things you know you don’t know versus the things you don’t even know to look for.

As an early-stage contributor, I had plenty of both. There are edge cases, architectural decisions, project-specific conventions, and backward compatibility concerns that an experienced contributor would catch but that I’d walk right past. I didn’t even know what questions to ask, let alone the answers. Following what you think is proper procedure isn’t the same as actually knowing what proper procedure is for a given project.

Every codebase has its own norms, and you can’t learn them from the outside.

That’s why, especially as a beginner, it’s worth going the extra mile before you even think about contributing: actually use the app or project you want to help with. Read through the codebase. Explore the existing issues and past pull requests to understand how the community works. And reach out to the maintainers first — ask if they’re open to AI-assisted contributions, ask if there are norms or practices you should know about, and ask which issues would be most helpful to tackle. A five-minute conversation can save everyone hours of wasted work.

And here’s the uncomfortable truth that goes beyond etiquette: even if you follow every best practice on this page, the maintainer may still not want your code. When AI makes writing code trivial, the code itself stops being the valuable part of a contribution.

Nikita Roy, a data scientist, Knight Fellow at ICFJ, and founder of Newsroom Robots, put it bluntly when I told her about my experience: “AI-generated PRs are putting real strain on maintainers right now, even well-intentioned ones, and it’s a big issue in tech circles. So even with following best practices, I don’t believe that’s necessarily the solution.”

Nikita pointed me to Steve Ruiz’s blog post about shutting down external PRs on tldraw, where he asked: “If writing the code is the easy part, why would I want someone else to write it?” The answer might be that the most valuable thing you can contribute isn’t code — it’s bug reports, documentation, testing, design feedback, or a well-written issue that helps the maintainer understand a problem they haven’t seen yet.

And my situation is still the mild version.

I at least took the time to verify what I was submitting. The problem is made far worse by people who don’t — who point an AI at a repo, generate a patch, and submit it without reading, testing, or understanding any of it. Maintainers can’t tell the difference at a glance between a well-tested AI-assisted PR and a completely untested one. The volume and the pattern look the same from their side.

I got lucky. I got a kind email from a patient person. Many open source maintainers aren’t in a position to be that generous. They’re unpaid volunteers maintaining projects that millions of people depend on, and they’re being hit with a flood of AI-generated contributions from strangers who never bothered to check their work.

Some maintainers have shut down their bug bounty programs. Others have closed their projects to outside contributions entirely. A few have started keeping public lists of repeat offenders. My experience was mild compared to what many of them deal with every day.

Using AI coding agents means you’ll be able to generate code faster than you ever could before. That power comes with a responsibility: as Simon Willison put it, your job is to deliver code you have proven to work.

Just because you can generate a pull request in five minutes doesn’t mean you should.

This post was originally published as part of the course materials for “Advanced prompt engineering for journalists,” a forthcoming MOOC from the Knight Center for Journalism in the Americas at UT Austin.

Read the full guide, list of case studies, and other course resources here.

chrisamico
11 hours ago
Boston, MA

NASA’s Artemis II Is the First Crewed Moon Mission Since 1972. Why Are We Going Back?


A lunar telescope could be installed in a crater on the far side of the moon.

Over the past century, the Earth has become a noisy place for astronomers wishing to listen to the radio waves that fill the universe. Those waves emanate from glowing gas clouds of hydrogen, auroras of distant planets and fast-spinning neutron stars. But those signals are often drowned out by ubiquitous transmissions of modern society like radio and television shows, cellphone calls and industrial electrical equipment.

The Earth’s ionosphere also blocks long-wavelength radio waves, which would give clues about the very early universe, from reaching ground-based radio telescopes. But on the far side of the moon, all that radio noise from Earth is silenced, unable to pass through 2,000 miles of rock. And the long-wavelength radio waves could also be observed.

Building a radio telescope in a crater on the moon would take advantage of that natural concave shape. A location near the equator in the middle of the far side could be an ideal listening spot.

After years of talking about lunar outposts in vague terms for some time in the indefinite future, NASA recently shifted, putting a continuing U.S. presence on the moon solidly on its road map for the coming decade.

Plans for a moon base would proceed in phases. It would go from regular moon visits to building permanent infrastructure; power and communication systems; vehicles to carry astronauts and cargo across the surface; and possibly nuclear power plants.

Methodology

The 3-D model’s base imagery is from NASA’s Moon CGI kit. Data on lunar landing and crash sites was gathered and verified using multiple sources: NASA Space Science Data Coordinated Archive; China National Space Administration; Japanese Space Agency; European Space Agency; Indian Space Research Organization; and the Smithsonian Institution.

To create the time-lapse animation showing the moon’s permanently shadowed areas at the south pole in January 2026, New York Times journalists used a digital elevation model from the Lunar Orbiter Laser Altimeter (LOLA), data from LOLA’s Gridded Data Records (GDRs) and ephemeris sourced from the U.S. Geological Survey (USGS) Astropedia.

Frozen water detections were provided by Shuai Li from the University of Hawaii.

Lunar landing sites for future Artemis missions at the South Pole are from NASA’s update from October 2024.

Helium-3 concentration data was provided by Wenzhe Fa from Peking University, China.

Diagrams of the lunar radio telescope deployment and radio interference are based on NASA Jet Propulsion Laboratory’s concepts.

This project also used geographic references from the USGS Geologic Atlas of the Moon and the Lunar South Pole Atlas by the Lunar and Planetary Institute.

chrisamico
1 day ago
Boston, MA

The Axios supply chain attack used individually targeted social engineering


The Axios team have published a full postmortem on the supply chain attack which resulted in a malware dependency going out in a release the other day, and it involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of how that worked:

so the attack vector mimics what google has documented here: https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering

they tailored this process specifically to me by doing the following:

  • they reached out masquerading as the founder of a company they had cloned the companys founders likeness as well as the company itself.
  • they then invited me to a real slack workspace. this workspace was branded to the companies ci and named in a plausible manner. the slack was thought out very well, they had channels where they were sharing linked-in posts, the linked in posts i presume just went to the real companys account but it was super convincing etc. they even had what i presume were fake profiles of the team of the company but also number of other oss maintainers.
  • they scheduled a meeting with me to connect. the meeting was on ms teams. the meeting had what seemed to be a group of people that were involved.
  • the meeting said something on my system was out of date. i installed the missing item as i presumed it was something to do with teams, and this was the RAT.
  • everything was extremely well co-ordinated looked legit and was done in a professional manner.

A RAT is a Remote Access Trojan - this was the software which stole the developer's credentials which could then be used to publish the malicious package.

That's a very effective scam. I join a lot of meetings where I find myself needing to install Webex or Microsoft Teams or similar at the last moment and the time constraint means I always click "yes" to things as quickly as possible to make sure I don't join late.

Every maintainer of open source software used by enough people to be worth targeting in this way needs to be familiar with this attack strategy.

You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

chrisamico
2 days ago
Boston, MA

Thoughts on OpenAI acquiring Astral and uv/ruff/ty


The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv, ruff, and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts!

The official line from OpenAI and Astral

The Astral team will become part of the Codex team at OpenAI.

Charlie Marsh has this to say:

Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...]

After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.

OpenAI's message has a slightly different focus (highlights mine):

As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle.

This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone (Rust regex, ripgrep, jiff) may be worth the price of acquisition!

So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on.

uv is the big one

Of Astral's projects the most impactful is uv. If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD:

xkcd comic showing a tangled, chaotic flowchart of Python environment paths and installations. Nodes include "PIP", "EASY_INSTALL", "$PYTHONPATH", "ANACONDA PYTHON", "ANOTHER PIP??", "HOMEBREW PYTHON (2.7)", "OS PYTHON", "HOMEBREW PYTHON (3.6)", "PYTHON.ORG BINARY (2.6)", and "(MISC FOLDERS OWNED BY ROOT)" connected by a mess of overlapping arrows. A stick figure with a "?" stands at the top left. Paths at the bottom include "/usr/local/Cellar", "/usr/local/opt", "/usr/local/lib/python3.6", "/usr/local/lib/python2.7", "/python/", "/newenv/", "$PATH", "????", and "/(A BUNCH OF PATHS WITH "FRAMEWORKS" IN THEM SOMEWHERE)/". Caption reads: "MY PYTHON ENVIRONMENT HAS BECOME SO DEGRADED THAT MY LAPTOP HAS BEEN DECLARED A SUPERFUND SITE."

Switch from python to uv run and most of these problems go away. I've been using it extensively for the past couple of years and it's become an essential part of my workflow.

I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code.

Ruff and ty

Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker.

These are popular tools that provide a great developer experience but they aren't load-bearing in the same way that uv is.

They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate.

I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here.

What of pyx?

Ever since uv started to gain traction the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024.

The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx, their private PyPI-style package registry for organizations.

I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts.

Competitive dynamics

An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI.

Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development.

The competition between Anthropic's Claude Code and OpenAI's Codex is fierce. Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money.

Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral.

Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner.

One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic.

Astral's quiet series A and B

One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community:

Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.

As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023.

Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell.

Forking as a credible exit?

Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine):

However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.

Astral's own Douglas Creager emphasized this angle on Hacker News today:

All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever".

I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home.

OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism).

If things do go south for uv and the other Astral projects we'll get to see how credible the forking exit strategy turns out to be.


Shared by chrisamico · 8 days ago · Boston, MA

The autism spectrum isn’t a sliding scale; 39 traits show the complexity


March 17, 2026


Here’s what the autism spectrum really looks like

The autism spectrum is big, vibrant and complicated, a new graphic of 39 traits shows

Cropped image of a row of three colorful sunburst charts.

Amanda Montañez

Autism is a spectrum. This metaphor is a helpful way to explain why autism looks and feels so varied across different people. Since 2013 it’s been baked into the name of the diagnosis itself, autism spectrum disorder (ASD). But what does this spectrum look like?

It’s not simply a one-dimensional scale from “more autistic” to “less autistic,” which would collapse so much of the diversity that the spectrum metaphor is meant to showcase. There is no single trait that defines autism: it encompasses differences in social communication skills, interests, sensory sensitivities, and more. Every person’s profile is unique. These graphics, based on clinicians’ evaluations of actual people using the Autism Symptom Dimensions Questionnaire, reveal a more nuanced “spectrum” of differences.

And this picture doesn’t factor in how people’s profiles change over time in response to treatments, life circumstances or age. It also doesn’t measure individuals’ overall cognitive ability, something researchers treat as a separate but important feature that can affect someone’s particular constellation of traits.

Not all these characteristics are impairments that should be treated. “Someone not making eye contact is useful information for diagnosing autism,” but it is not necessarily an appropriate target for intervention, says Ari Ne’eman, co-founder of the Autistic Self Advocacy Network and a health policy researcher at Harvard University. Many of these traits are best thought of as normal human variation rather than something to be treated or changed, Ne’eman says.

A spectrum in many dimensions

Each of the 39 wedges in the circle represents one question in the Autism Symptom Dimensions Questionnaire. The traits associated with each question (listed below) are grouped into key symptom factors—the main aspects of behavior that evaluators look for when they assess someone for autism.

Amanda Montañez; Source: “The Autism Symptom Dimensions Questionnaire: Development and Psychometric Evaluation of a New, Open-Source Measure of Autism Symptomatology,” by Thomas W. Frazier et al., in Developmental Medicine & Child Neurology, Vol. 65, No. 8; August 2023 (data)
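The structure of the graphic described above, 39 wedges arranged in a circle, one per questionnaire item, colored by symptom factor, can be sketched as a polar bar chart. The factor groupings, item counts per factor, and scores below are hypothetical placeholders, not the real ASDQ items or any individual's data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical factor groupings summing to 39 items (the real ASDQ
# factors and item counts differ; this only illustrates the layout).
factors = {
    "Social communication": 12,
    "Restricted interests": 9,
    "Sensory sensitivity": 8,
    "Other traits": 10,
}
n_items = sum(factors.values())  # 39 wedges, one per questionnaire item

# Hypothetical scores in [0, 1] for one individual.
rng = np.random.default_rng(0)
scores = rng.uniform(0.1, 1.0, n_items)

# One wedge per item, evenly spaced around the circle.
angles = np.linspace(0, 2 * np.pi, n_items, endpoint=False)
width = 2 * np.pi / n_items

# Color each wedge by the factor group it belongs to.
palette = ["#4c72b0", "#dd8452", "#55a35e", "#8172b3"]
colors = []
for color, count in zip(palette, factors.values()):
    colors.extend([color] * count)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
bars = ax.bar(angles, scores, width=width, color=colors, edgecolor="white")
ax.set_xticks([])
ax.set_yticks([])
fig.savefig("sunburst.png")
```

Each person's chart would use the same 39 wedge positions and factor colors; only the wedge heights (their individual scores) change, which is what makes the three side-by-side charts comparable.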

Variation across individuals

These charts represent questionnaire responses for three different autistic individuals. These data reflect each person’s strengths and challenges at their current stage of development and may change over time.

Amanda Montañez; Source: “The Autism Symptom Dimensions Questionnaire: Development and Psychometric Evaluation of a New, Open-Source Measure of Autism Symptomatology,” by Thomas W. Frazier et al., in Developmental Medicine & Child Neurology, Vol. 65, No. 8; August 2023 (data)


Shared by chrisamico · 9 days ago · Boston, MA