
Local News Isn’t Complete Without Arts Coverage


From theaters to galleries, arts coverage reveals what communities value and how people connect across a city.

On Local News Day, we tend to talk about accountability reporting, city hall coverage, and the steady, essential work of keeping the public informed. What we often leave behind, though, is the equally essential work of covering how our communities understand themselves.

Journalists are storytellers. We believe, deeply, in the power of narrative to lift truth from complexity and help people see the world around them more clearly. As a longtime journalist but first-time arts reporter, it has been a revelation to see this same urge in the context of my new beat. Because artists, too, search for narrative and truth. They also dig deep to share what they understand of the world, with the world. And though the mediums are different, the arts, I firmly believe, are as vital to our communities as news is.

And yet, arts coverage is often treated as expendable. As local newsrooms shrink, dedicated critics disappear, and coverage narrows, arts reporting is often one of the first areas to be cut.

Over the past year, what I've realized is that when people argue that arts coverage isn’t essential, what they’re often really saying is that the arts themselves are not for everyone. And there’s a reason that perception exists. When tickets can cost $75, $150, or more, access is uneven and the audience reflects that. But that’s not an argument for less coverage; it’s an argument for better, more intentional, and more inclusive coverage.

It's not hard to do. Artists want people to see their work as much as people want to see it. In reporting on Boston's theater scene, I've learned just how true this is. Theaters, in Boston at least, are really working with audiences to fill seats through ticket discounts, rush policies, and community programs designed to open the doors wider. By surfacing those opportunities, arts journalists can help more people find their way in. At Scene in Boston, we treat that as core reporting, not a bonus feature.

This is why, while my cohost, Lisa Thalhamer, and I do talk about performances we've seen recently, we are explicitly not reviewing them. Instead, we’re helping people navigate the world of theater. We’re offering context, making connections, and, ideally, widening the circle of who feels like they belong. It’s not just about what’s on stage this weekend. It’s about building a sense of cultural momentum—helping people see that these stories are part of a larger conversation happening across the city.

On Local News Day, we should be clear about what we mean when we talk about “saving the news.” It’s not just about preserving institutions—it’s about preserving our ability to understand ourselves and each other. That work happens in city council chambers, yes. But it also happens in theaters, galleries, and performance spaces across our communities. If we want a fuller picture of who we are, arts coverage isn’t optional. It’s essential.

Today, take a moment to explore and support the local news organizations telling those stories—across beats, across formats, and across communities. You can find a list of local news organizations in your area at the Local News Day website here (I guarantee there will be some you're hearing about for the first time). And if you find value in arts coverage, seek it out, share it, and help sustain it. You can start by forwarding this newsletter to someone you know.

Thank you, today and every day, for being a part of Scene in Boston.

-Laura

Read the whole story
chrisamico
5 hours ago
reply
Boston, MA
Share this story
Delete

New Chef Program Helps People With Autism Find Jobs in Fine-Dining Restaurants



Culinary jobs have the potential to be a perfect fit, and a new effort is afoot to help autistic workers land them.

Joseph Valentino, left, a cook at Point Seven in Manhattan, and Franklin Becker, the restaurant’s owner. Mr. Valentino, who has autism, helped inspire a new program to place people on the autism spectrum in fine-dining jobs. Credit: James Estrin/The New York Times

For three Halloweens in a row, Joseph Valentino was Emeril Lagasse.

He wasn’t the only kid in New Jersey who idolized chefs and wanted to be one when he grew up. For Mr. Valentino, though, the dream seemed especially hard to reach. Diagnosed with autism as a toddler, he still hadn’t spoken by age 5, when he first dressed as Emeril.

Today, at 27, he is a cook at Point Seven restaurant in Manhattan, working the cold food, pastry and raw bar stations, sometimes all at once. He says the path he took to get there was strewn with rejection. There were interviews that went nowhere, jobs in kitchens where he never felt welcome, deep periods of depression.

“I viewed myself as a liability,” he said.

His career is one of the inspirations for a new program, Chefs on the Spectrum, meant to train and place people with autism in fine-dining jobs.

Mr. Valentino and the owner of Point Seven, the chef Franklin Becker, introduced the initiative Tuesday night during a $2,500-a-head fund-raiser for the nonprofit organization Autism Speaks at Cipriani Wall Street in Lower Manhattan.

Mr. Becker, who is on the group’s board, pitched his Chefs on the Spectrum idea to the rest of the board as a way to help address two problems at the same time: the shortage of skilled labor in restaurants and a high unemployment rate among autistic adults.

Professional kitchens have long been known as havens for people with neurological and developmental disabilities. Chefs who describe themselves as dyslexic include Marco Pierre White, Jamie Oliver and Marc Murphy. Cooks who say they have some form of attention deficit can seem to outnumber those without.

But people on the autism spectrum have an exceedingly low profile in the business, whether because they haven’t been diagnosed or choose not to disclose it.

“I still haven’t met anybody with autism in the kitchen,” said Mr. Valentino, who cooked in cafeterias and catering kitchens before going to work for Mr. Becker last year. “I think that needs to be fixed, and I think this program will fix it.”

There are other initiatives that place people on the spectrum into hospitality jobs. Several coffee chains, including Bitty & Beau’s, which has 13 locations in the United States, are dedicated to employing people with intellectual and developmental disabilities.

But the focus on fine dining makes Chefs on the Spectrum unusual. Mr. Becker, who has an adult son with autism, has recruited more than a dozen chefs from around the country, including Andrew Zimmern, Daniel Boulud, Chris Bianco, Maneet Chauhan and Michael and Bryan Voltaggio. Their restaurants will hire workers from the program after receiving training in how to help those new employees thrive.

All eight employees of Chitarra Pastaria, a pasta maker in Cambridge, Mass., are on the autism spectrum, including Stefano Micali, left, and Julia Agostino. Credit: David Degner for The New York Times

“There’s a preconception that there’s a risk in hiring autistic individuals,” Mr. Becker said. “The real risk is overlooking incredible talent.”

That talent can take several forms. Some cooks on the spectrum are meticulously organized at their stations. Some have an exceptional recall of recipes, and others are especially diligent about safety protocols, said Mark Fierro, who provides job-placement support and career coaching at TACT (Teaching the Autism Community Trades), a school for autistic adults in Englewood, Colo.

Some students in TACT’s culinary program perform with astonishing consistency. If a restaurant wants meat butchered into a certain cut, Mr. Fierro said, “they’re going to make them exactly the same way every single time.”

A common hallmark of autism is a cultivation of special interests, intense and passionate devotions to particular topics. For cooks on the spectrum, this can mean a penchant for intellectual spelunking into, say, the molecular structure of hydrocolloids, or the behavior of the molds that produce blue cheese and miso.

“Researching an ingredient, breaking down where it comes from, how to use it, the cultural context — all of that is a special interest,” said a chef in New York City on the autism spectrum who asked not to be identified because she fears that neurodivergence can be misunderstood. “My brain is never satisfied for information. It always craves more.”

Her proclivity for amassing and organizing data made her a “load-bearing pillar” of any kitchen where she worked, she said. It also sets her up to make unexpected associations that can lead to creative leaps.

“The needle for ingenuity gets pushed forward by people who don’t think the same way neurotypical people think,” she said.

Advocates for greater acceptance of autism in the kitchen say that working side by side can benefit people on and off the spectrum. At Chitarra Pastaria, a small pasta company in Cambridge, Mass., whose eight employees all have autism, tailoring jobs for each worker’s talents has been a valuable experience, said one of the founders, the chef Ken Oringer.

Ezra Kukis works a pasta extruder at Chitarra Pastaria, which tailors jobs to each employee’s skills. Credit: David Degner for The New York Times

“You get to be able to appreciate people for their skill sets,” he said. “It really teaches you to have these relationships with people and learn what makes them tick and how they can be effective.” (Mr. Oringer has been recruited by Mr. Becker to join Chefs on the Spectrum’s pilot program.)

For some people on the spectrum, kitchens are places where they can put their aptitudes to good use without being held back by the challenges that social interactions often pose.

To help autistic people navigate the work, restaurants may have to make minor adjustments. One easy accommodation, said Keith Wargo, the chief executive of Autism Speaks, is to avoid face-to-face job interviews, which demand a complex set of communication skills, in favor of tryouts. Another is to swap LED bulbs for fluorescent fixtures, which flicker and buzz in ways that some people on the spectrum find stressful.

Some accommodations can have wider benefits. Mr. Fierro said he has advised employers to provide cooking timers to help TACT students with multitasking, a minor step that he said also helps neurotypical workers.

Steps, a company that runs job-training centers in Bangkok for neurodivergent adults, as well as cafes and a bakery that employ graduates, consulted with one large hotel group and advised it to place maps, labels and other signs in its kitchens there. The signs were meant to help workers who had memory or attention issues, but they proved popular with almost everybody.

“It helped onboard all new employees more quickly, it helped people work more efficiently during large events, and it increased employees’ sense of belonging,” said Courtney Konyn, the group’s communications director.

Chefs on the Spectrum is still taking shape, but it is likely that some of its training will be based on Mr. Valentino’s experience of navigating professional kitchens. He will try to answer questions about how they can work with people with autism. And he hopes that his career will help change views of autism.

“One day, I do want to become an executive chef,” Mr. Valentino said. “I want to be that one person that has autism and made it to the top of the brigade system.” Mr. Becker, he said, believes he has the qualities to make it happen.

“I have the passion and determination,” Mr. Valentino said. “And I don’t like being late to work.”


Pete Wells is a reporter covering food. He was previously The Times’s restaurant critic from 2012 until 2024 and, before that, the editor of the Food section.

A version of this article appears in print on April 8, 2026, Section D, Page 2 of the New York edition, with the headline: “A Haven for People With Autism.”



The Cathedral, the Bazaar, and the Winchester Mystery House


Our era of sprawling, idiosyncratic tooling

In 1998, Eric S. Raymond published the founding text of open source software development, “The Cathedral and the Bazaar”. In it, he detailed two methods of building software:

  • The Cathedral model is carefully planned, closed-source, and managed by an exclusive team of developers.
  • The Bazaar model is open, transparent, and community-driven.

The Bazaar model was enabled by the internet, which allowed for distributed coordination and distribution. More people could contribute code and share feedback, yielding better, more secure software. “Given enough eyeballs, all bugs are shallow,” Raymond wrote, coining Linus’ Law.

The ideas crystallized in “The Cathedral and the Bazaar” helped kick off a quarter-century of open source innovation and dominance.

But just as the internet made communication cheap and birthed the Bazaar, AI is making code cheap and kicking off a new era filled with idiosyncratic, sprawling, cobbled-together software.

Meet the third model: the Winchester Mystery House.


https://www.flickr.com/photos/harshlight/3669393933

The Winchester Mystery House

Located less than 10 miles southeast of the Computer History Museum, the Winchester Mystery House is an architectural oddity.

Following the death of her husband and mother-in-law, Sarah Winchester controlled a fortune. Her shares in the Winchester Repeating Arms Company, and the dividends they threw off, made it so Sarah could not only live in comfort but pursue whatever passion she desired. That passion was architecture.

Sarah didn’t build her mansion to house ghosts1; she built her mansion because she liked architecture. With no license and no formal training, in an era when women (even very rich women) didn’t have a path to practicing architecture, Sarah focused on her own home. She made up for her lack of license with passion and effectively unlimited funds.

Sarah built what she wanted. “At its largest the house had ~500 rooms.” Today it has roughly 160 rooms, 2,000 doors, 10,000 windows, 47 stairways, 47 fireplaces, 13 bathrooms, and 6 kitchens. Carved wood drapes the walls and ceilings. Stained glass is everywhere. Projects were planned, completed, abandoned, torn down, and rebuilt.

It was anything but aimless. And practical innovations ran throughout, including push-button gas lighting, an early intercom system, steam heating, and indoor gardens. The oddities that amuse today’s visitors were mostly practical accommodations for Sarah’s health (stairways with very small steps), functional designs no longer used (trap doors in greenhouses to route excess water), or quick fixes to damage from the 1906 earthquake.

Winchester passed in 1922. Nine months later, the house became a tourist attraction.

Today, many programmers are Sarah Winchester.


[Chart: Claude Code’s public GitHub activity, lines added and lines deleted]

What Happens When Code is Cheap

We aren’t as rich as Sarah Winchester, but when code is this cheap, we don’t need to be.

Jodan Alberts illustrated this recently, collecting and visualizing data detailing public GitHub commits attributed to Claude Code. That’s his data in the chart above, with Claude seeming only to accelerate through March2.

It’s hard to get a handle on individual usage, though, so I went searching for a proxy and landed on the chart below:

[Chart: Average net lines added per commit in Claude Code, 7-day average]

After Opus 4.5 and recent work enabling Agent Teams, the average net lines added by Claude is now smooth and steady at 1,000 lines of code per commit3.

1,000 lines of code per commit is roughly two orders of magnitude more than what a human programmer writes per day.

If you search for human benchmarks, you’ll find many citing Fred Brooks’ The Mythical Man-Month while claiming a good engineer might write 10 cumulative lines of code per day4. If you explore further, you’ll find numbers higher than 10 cited, but generally less than 100.

Here’s a good anecdote from antirez on a Hacker News thread discussing the Brooks “quote”:

I did some trivial math. Redis is composed of 100k lines of code, I wrote at least 70k of that in 10 years. I never work more than 5 days per week and I take 1 month of vacations every year, so assuming I work 22 days every month for 11 months:

70000/(22 x 11 x 10) = ~29 LOC / day

Which is not too far from 10. There are days where I write 300-500 LOC, but I guess that a lot of work went into rewriting stuff and fixing bugs, so I rewrote the same lines again and again over the course of years, but yet I think that this should be taken into account, so the Mythical Man Month book is indeed quite accurate.
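The arithmetic in that comment is easy to verify; a quick sanity check in Python, using the numbers antirez gives:

```python
# antirez's back-of-the-envelope: ~70k of Redis's ~100k lines over 10 years,
# working 22 days/month for 11 months/year.
total_lines = 70_000
working_days = 22 * 11 * 10          # 2,420 working days
loc_per_day = total_lines / working_days
print(round(loc_per_day))  # 29
```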

6 years after this comment, Claude is pushing 1,000 lines of code per commit.


So what do we do with all this cheap code?

Unfortunately, everything else remains roughly the same cost and roughly the same speed. Feedback hasn’t gotten cheaper; the “eyeballs” that guided the software developed by the bazaar haven’t caught up to AI.

There is only one source of feedback that moves at the speed of AI-generated code: yourself. You’re there to prompt, you’re there to review. You don’t need to recruit testers, run surveys, or manage design partners. You just build what you want, and use what you build.

And that’s what many developers are doing with cheap code: building idiosyncratic tools for ourselves, guided by our passions, taste, and needs.

Sound familiar?


https://commons.wikimedia.org/wiki/File:Winchester_Mystery_House_2023-07-17_02.jpg

Welcome to the Mystery House

Steve Yegge’s Gastown is a Winchester Mystery House. It’s incredibly idiosyncratic and sprawling, rich with metaphors and hacks. It’s the perfect tool for Steve.

Jeffrey Emanuel’s Agent Flywheel is a Winchester Mystery House. A significant subset of tokenmaxxers has decided they need to rebuild their dependencies in Rust; Jeff is one such example. His “FrankenSuite” includes Rust rewrites of SQLite, Node, btrfs, Redis, Pandas, NumPy, JAX, and Torch.

Philip Zeyliger noted the pattern last week, writing, “Everyone is building a software factory.” But it goes beyond software. Gary Tan’s personal AI committee gstack is a Winchester Mystery House constructed mostly from Markdown.

Everywhere you look, there are Winchester Mystery Houses.

Each Winchester Mystery House is idiosyncratic. They are highly personalized. The tightly coupled feedback loop between the coding agent and the user yields software that reflects the developer’s desires. They usually lack documentation. To outsiders, they’re inscrutable.

Winchester Mystery Houses are sprawling. Guided by the needs of the developer, these tools tend to spread out, constantly annexing territory in the form of new functions and new repositories. Work is almost always additive. Code is added when it’s needed, bugs are patched in place, and countless appendages remain. There’s little incentive to prune when code is free.

And building a Winchester Mystery House should be fun. Coding agents turn everything into a sidequest, and we eagerly join in. Building the perfect workflow is a passion for many devs, so we keep pushing.

Winchester Mystery Houses are idiosyncratic, sprawling, and fun. But does this mean we’re abandoning the bazaar?


https://www.flickr.com/photos/ifpri/4860343116

What Happens to the Bazaar?

What happens when we all tend to our Mystery Houses? When our free time is spent building tools just for ourselves, will we stop working on shared projects? Will we abandon the bazaar?

Probably not. The bazaar is packed right now, but not in a good way.

Code is cheap, so people are slamming open source repositories with agent-written contributions, in an attempt to pad their resumes or manifest their pet features. Daniel Stenberg ended bug bounties for curl after a deluge of poor submissions sapped reviewer bandwidth. It’s gotten so bad that GitHub recently added a feature to disable pull request contributions.

Anecdotally, I’m seeing good contributions pick up as well. They’re just drowned out by the slop. For what it’s worth, curl commits are dramatically up in the agentic era. And people are sharing what they build. A recent analysis by Dumky shows more packages and repos rising in the last quarter.

There’s plenty of budget for both Mystery Houses and the bazaar when code is this cheap. The new challenge is developing systems and processes for managing the deluge. We don’t need eyeballs to find bugs in the software; we need eyeballs to find bugs before they reach the software.

In many ways this is the inverse of the bazaar model era. The internet made feedback and communal coordination faster, easier, and cheaper. The bazaar model has a high throughput of feedback (many eyeballs) but relatively high latency for modifications (file an issue, discuss, submit a PR, wait for review, etc.)

Coding agents, on the other hand, make implementation faster while feedback and coordination are unchanged. The Winchester Mystery House model sidesteps this by collapsing the feedback loop into one person: latency is near zero, but throughput is just you. The bazaar, defined by communal work, can’t adopt this hack. Coding agents in the bazaar create a mess: implementation at machine speed hitting coordination infrastructure built for human speed. Which is why maintainers feel like they’re drowning.

We need new tools, skills, and conventions.


Lessons from the Mystery House

Coding agents have dropped the cost of code so dramatically that we’re entering a new era of software development, the first change of this magnitude since the internet kicked off open source software. Change arrived quickly, and it’s not slowing down. But in reviewing the Winchester Mystery House framework, I think we can take away a few lessons.

Lesson 1: The bazaar and Winchester Mystery Houses can coexist.

When listing example Winchester Mystery Houses, I didn’t mention OpenClaw, even though it is the defining example. I saved it for here because it nicely illustrates how Winchester Mystery Houses and the bazaar can coexist.

OpenClaw is incredibly modular and places few limitations on the user. It integrates 25 different chat and notification systems, plugs into most inference endpoints, and is built on the exceptionally flexible pi agent toolkit. This eager flexibility was embraced early – security and data protections be damned – but since its exponential adoption Peter Steinberger and the community have been steadily pushing improvements and fixes.

And like other breakout open source projects of yore, the ecosystem is adopting the best ideas and mitigating the worst aspects of OpenClaw. Countless alternate “claw” projects have emerged (there’s NanoClaw, NullClaw, ZeroClaw, and more!). Companies have launched services to make claws easier or safer. Cloudflare launched Moltworker to make deployment easy, Nvidia shipped NemoClaw with a security focus, and Claude keeps adding claw-like features to its desktop app.

Lesson 2: Don’t sell the fun stuff.

One reason OpenClaw works so well in the bazaar is that it is a foundation for personal tools. Out of the box, a claw just sits there. It’s up to the user to determine what it does and how it does it, leveraging the connections and infrastructure OpenClaw provides. OpenClaw lets less experienced developers spin up their own Winchester Mystery Houses, while experienced devs get to leverage much of the common integrations and systems OpenClaw provides. Peter and team have done a great job drawing a line between the common core (what the bazaar works on) and what they leave up to the user: the boring, critical stuff is the job of the commons.

Thinking back to Sarah Winchester and her idiosyncratic, sprawling mansion, we see the same pattern. Sarah hired vendors! She used off-the-shelf parts! Her bathtubs, toilets, faucets, and plumbing weren’t crafted on site.

The boring stuff, the hard bits, or the things that have disastrous failure modes are the things we should collaborate on or employ specialists to handle. (Come to think of it, plumbing checks all three boxes.) This is the opportunity for open source software, dev tools, and software companies.

Don’t try to sell developers the stuff that’s fun, the stuff they want to build. Sell them the stuff they avoid or don’t want to take responsibility for. Sarah Winchester didn’t hire metalworkers to craft the pipes for her plumbing, but she did hire craftspeople to create hundreds of stained-glass windows to her specs.

Lesson 3: The limits of code are communication.

OpenClaw shows the bazaar remains relevant, but also highlights the problems facing open source in the agentic era. Right now, there are 1,173 open pull requests and 1,884 new issues on the OpenClaw repo.

There are more projects and more code than we could ever review. The challenge now, for open source maintainers and users, is sifting through it all. How do we find the novel ideas that everyone should adopt and borrow?

OpenClaw is one of the successes, something we all noticed. And for it, the problem is processing the feedback. For the projects we’ll never find, the ones lost in the deluge, their problem is lack of feedback. You either find attention and drown in contributions or drown in the ocean of repos and never hear a thing.

The internet made coordination cheap and gave us the bazaar. Coding agents made implementation cheap and gave us the Winchester Mystery House. What we’re missing are the tools and conventions that make attention cheap, that let maintainers absorb contributions at machine speed and let good ideas surface among the noise. Until we figure this out, the bazaar will keep getting louder without getting smarter, and the best ideas in our Mystery Houses will be forgotten once we stop maintaining them.


  1. The lore that Winchester built her mansion to house ghosts killed by Winchester rifles is likely just gossip and marketing. There’s little evidence to support these claims. (99% Invisible has a good episode exploring Winchester, her house, and this lore.) 

  2. While editing this piece, Dumky published another analysis illustrating the production of coding agents. In it he shows a 280% increase in “Show HN” posts, a 93% increase in new Github repos, and a dramatic uptick in packages published to Crates.io. 

  3. Anthropic’s ability to stabilize this line is rather impressive. Claude Code is getting better at planning, better at chunking out work, and enabling more effective sub-agent delegation. 

  4. Though this is likely an updated tweak of Brooks’ statement that an “industrial team” might write 1,000 “statements” per year.


I Decompiled the White House's New App


The White House released an app on the App Store and Google Play. They posted a blog about it. "Unparalleled access to the Trump Administration."

It took a few minutes to pull the APKs with ADB and throw them into JADX.

Here is everything I found.

It's a React Native app built with Expo (SDK 54), running on the Hermes JavaScript engine. The backend is WordPress with a custom REST API. The app was built by an entity called "forty-five-press" according to the Expo config.

The actual app logic is compiled into a 5.5 MB Hermes bytecode bundle. The native Java side is just a thin wrapper.

Version 47.0.1. Build 20. Hermes enabled. New Architecture enabled. Nothing weird here. Let's keep going.

Two things stand out here. First, there's a plugin called withNoLocation. Second, there's a plugin called withStripPermissions. Remember these. They become relevant very soon.

OTA updates are disabled. The Expo update infrastructure is compiled in but dormant.

I extracted every string from the Hermes bytecode bundle and filtered for URLs and API endpoints. The app's content comes from a WordPress REST API at whitehouse.gov with a custom whitehouse/v1 namespace.
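That filtering step can be sketched roughly like this, assuming the strings have already been dumped to a list (the post doesn't show its exact tooling, and this regex is illustrative, not the exact filter used):

```python
import re

# Keep anything that looks like an absolute URL or a wp-json route.
# The pattern is an assumption for illustration purposes.
URL_RE = re.compile(r"https?://\S+|/wp-json/\S+")

def find_endpoints(strings):
    """Return sorted, de-duplicated URL-like strings from a bytecode dump."""
    return sorted({m.group(0) for s in strings for m in URL_RE.finditer(s)})

sample = ["padding", "/wp-json/whitehouse/v1/wire", "https://www.whitehouse.gov/"]
print(find_endpoints(sample))
```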

Here are the endpoints:

Endpoint                                 What It Serves
/wp-json/whitehouse/v1/home              Home screen
/wp-json/whitehouse/v1/news/articles     News articles
/wp-json/whitehouse/v1/wire              "The Wire" news feed
/wp-json/whitehouse/v1/live              Live streams
/wp-json/whitehouse/v1/galleries         Photo galleries
/wp-json/whitehouse/v1/issues            Policy issues
/wp-json/whitehouse/v1/priorities        Priorities
/wp-json/whitehouse/v1/achievements      Achievements
/wp-json/whitehouse/v1/affordability     Drug pricing
/wp-json/whitehouse/v1/media-bias        "Media Bias" section
/wp-json/whitehouse/v1/social/x          X/Twitter feed proxy
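For a sense of how these routes compose, here's a minimal sketch; the host is whitehouse.gov per the strings above, but the scheme and subdomain, and any headers or query parameters the app sends, are assumptions:

```python
# Routes recovered from the Hermes string table; full host is assumed.
BASE = "https://www.whitehouse.gov/wp-json/whitehouse/v1"

ROUTES = {
    "home": "/home",
    "articles": "/news/articles",
    "wire": "/wire",
    "live": "/live",
    "galleries": "/galleries",
}

def endpoint(name: str) -> str:
    """Build the full URL for a named route."""
    return BASE + ROUTES[name]

print(endpoint("wire"))  # https://www.whitehouse.gov/wp-json/whitehouse/v1/wire
```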

Other hardcoded strings from the bundle: "THE TRUMP EFFECT", "Greatest President Ever!" (lol), "Text President Trump", "Send a text message to President Trump at 45470", "Visit TrumpRx.gov", "Visit TrumpAccounts.gov".

There's also a direct link to https://www.ice.gov/webform/ice-tip-form. The ICE tip reporting form. In a news app.

It's a content portal. News, live streams, galleries, policy pages, social media embeds, and promotional material for administration initiatives. All powered by WordPress.

Now let's look at what else it does.

The app has a WebView for opening external links. Every time a page loads in this WebView, the app injects a JavaScript snippet. I found it in the Hermes bytecode string table:

Read that carefully. It hides:

  • Cookie banners
  • GDPR consent dialogs
  • OneTrust popups
  • Privacy banners
  • Login walls
  • Signup walls
  • Upsell prompts
  • Paywall elements
  • CMP (Consent Management Platform) boxes

It forces body { overflow: auto !important } to re-enable scrolling on pages where consent dialogs lock the scroll. Then it sets up a MutationObserver to continuously nuke any consent elements that get dynamically added.

An official United States government app is injecting CSS and JavaScript into third-party websites to strip away their cookie consent dialogs, GDPR banners, login gates, and paywalls.

The native side confirms this is the injectedJavaScript prop on the React Native WebView:

Every page load in the in-app browser triggers this. It wraps the injection in an IIFE and runs it via Android's evaluateJavascript().

Remember withNoLocation from the Expo config? The plugin that's supposed to strip location? Yeah. The OneSignal SDK's native location tracking code is fully compiled into the APK.

270,000 milliseconds is 4.5 minutes. 570,000 is 9.5 minutes.

To be clear about what activates this: the tracking doesn't start silently. There are three gates. The LocationManager checks all of them before the fused location API ever fires.

First, the _isShared flag. It's read from SharedPreferences on init and defaults to false. The JavaScript layer can flip it on with setLocationShared(true). The Hermes string table confirms both setLocationShared and isLocationShared are referenced in the app's JS bundle, so the app has the ability to toggle this.

Second, the user has to grant the Android runtime location permission. The location permissions aren't declared in the AndroidManifest but requested at runtime. The Google Play Store listing confirms the app asks for "access precise location only in the foreground" and "access approximate location only in the foreground."

Third, the start() method only proceeds if the device actually has a location provider (GMS or HMS).
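Taken together, the three gates reduce to a simple conjunction. Here is a sketch of the decision logic (the function and field names are mine, not the SDK's actual identifiers):

```javascript
// Sketch of the three-gate check described above. Names are illustrative,
// not OneSignal's actual identifiers.
function shouldStartLocationTracking({ isShared, hasRuntimePermission, hasProvider }) {
  if (!isShared) return false;             // gate 1: _isShared flag (defaults to false)
  if (!hasRuntimePermission) return false; // gate 2: Android runtime location permission
  if (!hasProvider) return false;          // gate 3: a GMS/HMS location provider exists
  return true;
}
```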

If all three gates pass, here's what runs. The fused location API requests GPS at the intervals defined above:

This gets called on both onFocus() and onUnfocused(), dynamically switching between the 4.5-minute foreground interval and the 9.5-minute background interval.
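The focus-driven switch amounts to picking one of the two compiled-in constants (a sketch; the constant and function names are mine):

```javascript
// Interval constants matching the decompiled SDK, in milliseconds.
const FOREGROUND_INTERVAL_MS = 270000; // 4.5 minutes
const BACKGROUND_INTERVAL_MS = 570000; // 9.5 minutes

// onFocus()/onUnfocused() effectively re-request location with one of these.
function locationIntervalFor(appInForeground) {
  return appInForeground ? FOREGROUND_INTERVAL_MS : BACKGROUND_INTERVAL_MS;
}
```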

When a location update comes in, it feeds into the LocationCapturer:

Latitude, longitude, accuracy, timestamp, whether the app was in the foreground or background, and whether it was fine (GPS) or coarse (network). All of it gets written into OneSignal's PropertiesModel, which syncs to their backend.
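As described, each captured fix maps to a record roughly like this before it syncs (the key names are my guesses at the PropertiesModel fields, not confirmed against the SDK):

```javascript
// Illustrative shape of a captured location fix; key names are assumptions.
function buildLocationRecord(fix, appInForeground) {
  return {
    lat: fix.latitude,
    long: fix.longitude,
    loc_acc: fix.accuracy,        // meters
    loc_time: fix.timestamp,      // epoch milliseconds
    loc_bg: !appInForeground,     // captured while backgrounded?
    loc_type: fix.isFine ? 1 : 0, // 1 = fine (GPS), 0 = coarse (network)
  };
}
```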

The data goes here:

There's also a background service that keeps capturing location even when the app isn't active:

So the tracking isn't unconditionally active. But the entire pipeline (permission strings, interval constants, fused location requests, capture logic, background scheduling, and the sync to OneSignal's API) is fully compiled in and one setLocationShared(true) call away from activating. The withNoLocation Expo plugin clearly did not strip any of this. Whether the JS layer currently calls setLocationShared(true) is something I can't determine from the native side alone, since the Hermes bytecode is compiled and the actual call site is buried in the 5.5 MB bundle. What I can say is that the infrastructure is there, ready to go, and the JS API to enable it is referenced in the bundle.

OneSignal is doing a lot more than push notifications in this app. From the Hermes string table:

  • addTag - tag users for segmentation
  • addSms - associate phone numbers with user profiles
  • addAliases - cross-device user identification
  • addOutcomeWithValue / addUniqueOutcome - track user actions and conversions
  • OneSignal-notificationClicked - notification tap tracking
  • OneSignal-inAppMessageClicked / WillDisplay / DidDisplay / WillDismiss / DidDismiss - full in-app message lifecycle tracking
  • OneSignal-permissionChanged / subscriptionChanged / userStateChanged - state change tracking
  • setLocationShared / isLocationShared - location toggle
  • setPrivacyConsentRequired / setPrivacyConsentGiven - consent gating

The local database tracks every notification received and whether it was opened or dismissed:

Your location, your notification interactions, your in-app message clicks, your phone number if you provide it, your tags, your state changes. All going to OneSignal's servers.

The app embeds YouTube videos using the react-native-youtube-iframe library. This library loads its player HTML from:

That's a personal GitHub Pages site. If the lonelycpp GitHub account gets compromised, whoever controls it can serve arbitrary HTML and JavaScript to every user of this app, executing inside the WebView context.

This is a government app loading code from a random person's GitHub Pages.

The app loads third-party JavaScript from Elfsight to embed social media feeds:

Elfsight is a commercial SaaS widget company. Their JavaScript runs inside the app's WebView with no sandboxing. Whatever tracking Elfsight does, it does it here too. Their code can change at any time. The Elfsight widget ID 4a00611b-befa-466e-bab2-6e824a0a98a9 is hardcoded in an HTML embed.
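Elfsight's standard embed pattern is a script tag plus a div keyed by the widget ID, so the hardcoded embed presumably looks something like this (a sketch based on Elfsight's public embed format, not the app's exact markup):

```html
<!-- Sketch of a standard Elfsight embed; based on the public embed format. -->
<script src="https://elfsightcdn.com/platform.js" async></script>
<div class="elfsight-app-4a00611b-befa-466e-bab2-6e824a0a98a9"></div>
```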

  • Mailchimp at whitehouse.us10.list-manage.com/subscribe/post-json handles email signups. User emails go to Mailchimp's infrastructure.
  • Uploadcare at ucarecdn.com hosts content images via six hardcoded UUIDs.
  • Truth Social has a hardcoded HTML embed with Trump's profile, an avatar image URL from static-assets-1.truthsocial.com, and a "Follow on Truth Social" button.
  • Facebook page plugin is loaded in an iframe via facebook.com/plugins/page.php.

None of these are government-controlled infrastructure.

The app uses standard Android TrustManager for SSL with no custom certificate pinning. If you're on a network with a compromised CA (corporate proxies, public wifi with MITM, etc.), traffic between the app and its backends can be intercepted and read.
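For context, Android supports declarative pinning through the Network Security Config, which is one common mitigation. A sketch is below; the domain and pin values are placeholders, not the app's real backends or certificates:

```xml
<!-- res/xml/network_security_config.xml: sketch of certificate pinning.
     Domain and pin values are placeholders. -->
<network-security-config>
  <domain-config>
    <domain includeSubdomains="true">api.example.gov</domain>
    <pin-set expiration="2027-01-01">
      <!-- SHA-256 digest of the server certificate's SubjectPublicKeyInfo -->
      <pin digest="SHA-256">base64PrimaryPinGoesHere=</pin>
      <!-- Backup pin for certificate rotation -->
      <pin digest="SHA-256">base64BackupPinGoesHere=</pin>
    </pin-set>
  </domain-config>
</network-security-config>
```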

The build has some sloppy leftovers.

A localhost URL made it into the production Hermes bundle:

That's the React Native Metro bundler dev server.

A developer's local IP is hardcoded in the string resources:

The Expo development client (expo-dev-client, expo-devlauncher, expo-devmenu) is compiled into the release build. There's a dev_menu_fab_icon.png in the drawable resources. The Compose PreviewActivity is exported in the manifest, which is a development-only component that should not be in a production APK.

The AndroidManifest itself is pretty standard for a notification-heavy app:

Plus about 16 badge permissions for Samsung, HTC, Sony, Huawei, OPPO, and other launchers. These just let the app show notification badge counts. Not interesting.

The interesting permissions are the ones that aren't in the manifest but are hardcoded as runtime request strings in the OneSignal SDK, as covered above. Fine location. Coarse location. Background location.

The Google Play listing also mentions: "modify or delete the contents of your shared storage", "run foreground service", "this app can appear on top of other apps", "run at startup", "use fingerprint hardware", "use biometric hardware."

The file provider config is also worth mentioning:

That exposes the entire external storage root. It's used by the WebView for file access.
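A FileProvider paths resource that exposes the storage root typically looks like this (a sketch; the actual name attribute and resource filename aren't reproduced here):

```xml
<!-- Sketch of an over-broad FileProvider paths resource. A path of "."
     on external-path exposes the entire external storage root. -->
<paths>
  <external-path name="external_files" path="." />
</paths>
```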

68+ libraries are compiled into this thing. The highlights:

  • Framework: React Native, Expo SDK 54, Hermes JS engine
  • Push/Engagement: OneSignal, Firebase Cloud Messaging, Firebase Installations
  • Analytics/Telemetry: Firebase Analytics, Google Data Transport, OpenTelemetry
  • Networking: OkHttp 3, Apollo GraphQL, Okio
  • Images: Fresco, Glide, Coil 3, Uploadcare CDN
  • Video: ExoPlayer (Media3), Expo Video
  • ML: Google ML Kit Vision (barcode scanning), Barhopper model
  • Crypto: Bouncy Castle
  • Storage: Expo Secure Store, React Native Async Storage
  • WebView: React Native WebView (with the injection script)
  • DI: Koin
  • Serialization: GSON, Wire (Protocol Buffers)
  • License: PairIP license check (Google Play verification)

25 native .so libraries in the arm64 split. The full Hermes engine, React Native core, Reanimated, gesture handler, SVG renderer, image pipeline, barcode scanner, and more.

The official White House Android app:

  1. Injects JavaScript into every website you open through its in-app browser to hide cookie consent dialogs, GDPR banners, login walls, signup walls, upsell prompts, and paywalls.

  2. Has a full GPS tracking pipeline compiled in that polls every 4.5 minutes in the foreground and 9.5 minutes in the background, syncing lat/lng/accuracy/timestamp to OneSignal's servers.

  3. Loads JavaScript from a random person's GitHub Pages site (lonelycpp.github.io) for YouTube embeds. If that account is compromised, arbitrary code runs in the app's WebView.

  4. Loads third-party JavaScript from Elfsight (elfsightcdn.com/platform.js) for social media widgets, with no sandboxing.

  5.   Sends email addresses to Mailchimp, serves content images from Uploadcare, and hardcodes a Truth Social embed with static CDN URLs. None of this is government infrastructure.

  6. Has no certificate pinning. Standard Android trust management.

  7. Ships with dev artifacts in production. A localhost URL, a developer IP (10.4.4.109), the Expo dev client, and an exported Compose PreviewActivity.

  8. Profiles users extensively through OneSignal - tags, SMS numbers, cross-device aliases, outcome tracking, notification interaction logging, in-app message click tracking, and full user state observation.

Is any of this illegal? Probably not. Is it what you'd expect from an official government app? Probably not either.

Read the whole story
chrisamico
2 days ago
reply
Boston, MA
Share this story
Delete

Don't be a slop cannon


I wrote this because I made this mistake myself.

The other day, I was attempting to burn through my remaining Claude Code session limit before it reset. I was feeling productive, maybe a little too productive. So I found an open source journalism project I genuinely admire, saw some open issues, and thought I could help. I ran some tests on the code and did my best to verify that the changes were relevant and accurate. But I opened several pull requests — multiple PRs, across multiple repos, in the span of about an hour. All AI-assisted. And that was the problem.

It doesn’t matter that my code was good (I think). The maintainers had no way to know that. To a small team receiving multiple AI-authored PRs from a stranger in rapid succession, the pattern looked like the start of a flood — the kind of flood they’d been reading about other projects drowning in. They had no reason to assume good faith from someone they’d never seen before. They had every reason to be concerned.

A maintainer from the project emailed me. They were gracious and patient about it — far more than they needed to be.

They explained that as a small team, they couldn’t review back-to-back AI-authored pull requests, especially several in one hour. They asked me to pick a single issue, make sure it followed best practices and passed tests in my local dev environment, and then let them know when it was ready for review. No anger. No public shaming. Just a clear, professional request to slow down and do it right.

In my case, the code itself was fine (I think). This was a false positive on quality. But it was a true positive on the pattern — and if they hadn’t said something, I probably would have kept going, submitting PRs on every open issue I felt comfortable tackling. That’s the thing about enthusiasm combined with powerful tools: it doesn’t feel like a flood when you’re the one sending it.

On top of that, even though I did my best to verify what I was submitting, I’m a beginner. There’s an old distinction between “known unknowns” and “unknown unknowns” — the things you know you don’t know versus the things you don’t even know to look for.

As an early-stage contributor, I had plenty of both. There are edge cases, architectural decisions, project-specific conventions, and backward compatibility concerns that an experienced contributor would catch but that I’d walk right past. I didn’t even know what questions to ask, let alone the answers. Following what you think is proper procedure isn’t the same as actually knowing what proper procedure is for a given project.

Every codebase has its own norms, and you can’t learn them from the outside.

That’s why, especially as a beginner, it’s worth going the extra mile before you even think about contributing: actually use the app or project you want to help with. Read through the codebase. Explore the existing issues and past pull requests to understand how the community works. And reach out to the maintainers first — ask if they’re open to AI-assisted contributions, ask if there are norms or practices you should know about, and ask which issues would be most helpful to tackle. A five-minute conversation can save everyone hours of wasted work.

And here’s the uncomfortable truth that goes beyond etiquette: even if you follow every best practice on this page, the maintainer may still not want your code. When AI makes writing code trivial, the code itself stops being the valuable part of a contribution.

Nikita Roy, a data scientist, Knight Fellow at ICFJ, and founder of Newsroom Robots, put it bluntly when I told her about my experience: “AI-generated PRs are putting real strain on maintainers right now, even well-intentioned ones, and it’s a big issue in tech circles. So even with following best practices, I don’t believe that’s necessarily the solution.”

Nikita pointed me to Steve Ruiz’s blog post about shutting down external PRs on tldraw, where he asked: “If writing the code is the easy part, why would I want someone else to write it?” The answer might be that the most valuable thing you can contribute isn’t code — it’s bug reports, documentation, testing, design feedback, or a well-written issue that helps the maintainer understand a problem they haven’t seen yet.

And my situation is still the mild version.

I at least took the time to verify what I was submitting. The problem is made far worse by people who don’t — who point an AI at a repo, generate a patch, and submit it without reading, testing, or understanding any of it. Maintainers can’t tell the difference at a glance between a well-tested AI-assisted PR and a completely untested one. The volume and the pattern look the same from their side.

I got lucky. I got a kind email from a patient person. Many open source maintainers aren’t in a position to be that generous. They’re unpaid volunteers maintaining projects that millions of people depend on, and they’re being hit with a flood of AI-generated contributions from strangers who never bothered to check their work.

Some maintainers have shut down their bug bounty programs. Others have closed their projects to outside contributions entirely. A few have started keeping public lists of repeat offenders. My experience was mild compared to what many of them deal with every day.

Using AI coding agents means you’ll be able to generate code faster than you ever could before. That power comes with a responsibility: as Simon Willison put it, your job is to deliver code you have proven to work.

Just because you can generate a pull request in five minutes doesn’t mean you should.

This post was originally published as part of the course materials for “Advanced prompt engineering for journalists,” a forthcoming MOOC from the Knight Center for Journalism in the Americas at UT Austin.

Read the full guide, list of case studies, and other course resources here.


NASA’s Artemis II Is the First Crewed Moon Mission Since 1972. Why Are We Going Back?


A lunar telescope could be installed in a crater on the far side of the moon.

Over the past century, the Earth has become a noisy place for astronomers wishing to listen to the radio waves that fill the universe. Those waves emanate from glowing gas clouds of hydrogen, auroras of distant planets and fast-spinning neutron stars. But those signals are often drowned out by ubiquitous transmissions of modern society like radio and television shows, cellphone calls and industrial electrical equipment.

The Earth’s ionosphere also blocks long-wavelength radio waves, which would give clues about the very early universe, from reaching ground-based radio telescopes. But on the far side of the moon, all that radio noise from Earth is silenced, unable to pass through 2,000 miles of rock, and those long-wavelength radio waves could be observed.

Building a radio telescope in a crater on the moon would take advantage of that natural concave shape. A location near the equator in the middle of the far side could be an ideal listening spot.

After years of talking about lunar outposts in vague terms for some time in the indefinite future, NASA recently shifted, putting a continuing U.S. presence on the moon solidly on its road map for the coming decade.

Plans for a moon base would proceed in phases, from regular moon visits to building permanent infrastructure: power and communication systems, vehicles to carry astronauts and cargo across the surface, and possibly nuclear power plants.

Methodology

The 3-D model’s base imagery is from NASA’s Moon CGI kit. Data on lunar landing and crash sites was gathered and verified using multiple sources: NASA Space Science Data Coordinated Archive; China National Space Administration; Japanese Space Agency; European Space Agency; Indian Space Research Organization; and the Smithsonian Institution.

To create the time-lapse animation showing the moon’s permanently shadowed areas at the south pole in January 2026, New York Times journalists used a digital elevation model from the Lunar Orbiter Laser Altimeter (LOLA), data from LOLA’s Gridded Data Records (GDRs) and ephemeris sourced from the U.S. Geological Survey (USGS) Astropedia.

Frozen water detections were provided by Shuai Li from the University of Hawaii.

Lunar landing sites for future Artemis missions at the South Pole are from NASA’s update from October 2024.

Helium-3 concentration data was provided by Wenzhe Fa from Peking University, China.

Diagrams of the lunar radio telescope deployment and radio interference are based on NASA Jet Propulsion Laboratory’s concepts.

This project also used geographic references from the USGS Geologic Atlas of the Moon and the Lunar South Pole Atlas by the Lunar and Planetary Institute.
