ICE and CBP Agents Are Scanning People’s Faces on the Street To Verify Citizenship


“You don’t got no ID?” a Border Patrol agent in a baseball cap, sunglasses, and neck gaiter asks a kid on a bike. The officer and three others had just stopped the two young men on their bikes during the day in what a video documenting the incident says is Chicago. One of the boys is filming the encounter on his phone. He says in the video he was born here, meaning he would be an American citizen.

When the boy says he doesn’t have ID on him, the Border Patrol officer has an alternative. He calls over to one of the other officers, “can you do facial?” The second officer then approaches the boy, gets him to turn around to face the sun, and points his own phone camera directly at him, hovering it over the boy’s face for a couple of seconds. The officer then looks at his phone’s screen and asks the boy to verify his name. The video stops.

Do you have any more videos of ICE or CBP using facial recognition? Do you work at those agencies or know more about Mobile Fortify? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

In another video of a different incident, this time filmed from the perspective of a driver that authorities have also apparently stopped in Chicago, a group of ICE officers surround the driver side window. One of the officers, wearing a vest from Immigration and Customs Enforcement’s (ICE) Enforcement and Removal Operations (ERO), tells one of his coworkers the driver is refusing to be ID’d. The second ICE official then points his own phone camera at the driver.

“I’m an American citizen so leave me alone,” the driver says.

“Alright, we just got to verify that,” one of the officers says, with some of the group peering at the phone. The officer with the phone points the camera at the driver again, and asks him to remove his hat. “If you could take your hat off, it would be a lot quicker,” the ICE officer says. “I’m going to run your information.”

These videos and others reviewed by 404 Media show that ICE and Customs and Border Protection (CBP) are actively using smartphone facial recognition technology in the field, including in stops that seem to have little justification beyond the color of someone’s skin, to then look up more information on that person, including their identity and potentially their immigration status. It is not clear which specific app the officers in the videos are using. 404 Media previously revealed ICE has a new app called Mobile Fortify, which scans someone’s face and is built on a database of 200 million images. The app queries an unprecedented number of government databases to return the subject’s name, date of birth, alien number, and whether they’ve been given an order of deportation.

The videos are evidence that the more high tech ambitions of the Trump administration’s mass deportation campaign are now a reality. While many ICE operations have been distinctly low-tech, such as simply targeting brown people at a Home Depot parking lot, it is now clear that ICE’s investment in facial recognition technology is an option for officers who are pulling people over or targeting them.

“From these videos it seems like ICE has started using live face recognition in the field,” Allison McDonald, assistant professor of computing & data science at Boston University, told 404 Media in an email. McDonald previously worked on a report from Georgetown Law’s Center on Privacy & Technology on ICE’s data-driven deportation strategy.

A screenshot of one of the videos, via X.

“The growing use of face recognition by ICE shows us two things: that we should have banned government use of face recognition when we had the chance because it is dangerous, invasive, and an inherent threat to civil liberties and that any remaining pretense that ICE is harassing and surveilling people in any kind of ‘precise’ way should be left in the dust,” Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF), told 404 Media in an email.

404 Media has seen several videos across social media that appear to show immigration authorities using facial recognition technology. Often the videos include little context beyond what is happening directly in front of the camera, but do sometimes include officials making explicit references to the technology, like with the Border Patrol officer who asked “can you do facial?”

In another video from earlier this year filmed in New Mexico, a group of ICE and Border Patrol agents stand on and near a porch. “Technology, man, huh,” one of the two subjects the agents are surrounding says. One of the Border Patrol agents looks at their phone, while another walks up and squarely points their phone’s camera at another subject’s face. For a brief moment the video shows the officer has the camera app, or another app using the camera, open. 

The caption of the video claims “After conducting a search and subsequently arresting individuals at a local horse training facility, authorities then went to nearby residences for further searches and citizenship verification. Identifications were verified utilizing biometrics (facial recognition).”

A local news report about the incident quotes Efren Aguilar Jr., a resident of the property and a U.S. citizen, as saying “They asked if we lived here, we said ‘yes.’ They asked for documentation and if we were U.S. citizens, and we said ‘yes.’ And then they wanted us to let them go into our house, that’s when we refused.” Aguilar told the local media outlet that other colleagues were arrested.

A screenshot of one of the videos, via Instagram.

The Department of Homeland Security (DHS) declined to comment on ICE’s use of facial recognition technology, with its statement saying “DHS is not going to confirm or deny law enforcement capabilities or methods.” CBP, meanwhile, confirmed it is using Mobile Fortify. “CBP relies on a variety of technological capabilities that enhance the effectiveness of agents on the ground. This is one of many tools we are using as we enforce the laws of our nation,” a CBP spokesperson said in an email.

404 Media first revealed the existence of Mobile Fortify in June based on leaked emails. The underlying system used for the facial recognition part of the app is ordinarily used when people enter or exit the U.S. The emails showed the app is also capable of scanning a subject’s fingerprints. “The Mobile Fortify App empowers users with real-time biometric identity verification capabilities utilizing contactless fingerprints and facial images captured by the camera on an ICE issued cell phone without a secondary collection device,” one of the emails said. The explicit goal of the app is to let ICE officers identify people in the field, according to the emails.

404 Media then viewed user manuals for Mobile Fortify which gave more detail on the databases it queries after an officer uploads a photo of someone’s face. Those documents showed Mobile Fortify uses a bank of 200 million images, and sources data from the State Department, CBP, the FBI, and states. Users can also run a “Super Query,” which queries multiple datasets at once related to “individuals, vehicles, airplanes, vessels, addresses, phone numbers and firearms,” according to a memo 404 Media viewed.

Those documents indicated Mobile Fortify may soon include data from commercial data brokers too. One section said that “currently, LexisNexis is not supported in the application.” LexisNexis’s data can include people’s addresses, phone numbers, and associates.

“If Mobile Fortify integrates with something like LexisNexis or another social media monitoring service, it's not just the person on the street who could be identified, but their friends and family as well,” McDonald said.

ICE has also purchased technology from the facial recognition company Clearview AI for years. Clearview’s database of tens of billions of images comes in large part from the open web, which the company scraped en masse. Clearview’s results show users other photos of the same person and where online they were found, potentially leading to someone’s identity. In September 404 Media reported ICE spent millions of dollars on Clearview technology to find people it believed were “assaulting” officers.

404 Media reported ICE has also bought mobile iris scanning tech for its deportation arm. Originally that technology, from a company called BI2 Technologies, was designed for sheriffs to identify inmates or other known persons.

Ranking member of the House Homeland Security Committee Bennie G. Thompson said in a statement “Mobile Fortify is a dangerous tool in the hands of ICE, and it puts American citizens at risk of detention and even deportation.” He also said “ICE officials have told us that an apparent biometric match by Mobile Fortify is a ‘definitive’ determination of a person’s status and that an ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien. ICE using a mobile biometrics app in ways its developers at CBP never intended or tested is a frightening, repugnant, and unconstitutional attack on Americans’ rights and freedoms.”

Guariglia from the EFF added “there are a lot of surveillance companies eager to profit off the fact that face recognition turns our bodies into identifying documents for the government to read.”

Jeramie Scott, senior counsel and director of the Electronic Privacy Information Center’s (EPIC) Surveillance Oversight Program, told 404 Media in an email, “facial recognition is a powerful and dangerous surveillance technology that further takes away control from the people and gives it to the government. Its use should not be taken lightly.”

“ICE’s deployment of facial recognition on whoever they deem suspicious is pure dystopian creep—the continual expansion of surveillance until our reality mirrors the dystopian worlds of science fiction. ICE continues to prove why law enforcement’s use of surveillance technology needs strict regulation to limit its expansion and to protect our privacy and civil liberties. Our failure to do this will lead us down a road where our democracy becomes unrecognizable,” he added.

About the author: Joseph Cox is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars’ worth of fines, shut down tech companies, and much more.


What if people don't want to create things


Almost my whole career distills down to ‘making creative tools’ of one sort or another: visualizations, maps, code, hardware. I try to live a creative life too - between music, photos, drawing, writing, and sewing I have some output. Never enough, but it’s something.

When I look back on TileMill in 2010, Mapbox Studio, Observable, the whole arc: I can’t help but worry about the supply of creativity in society. In particular:

If we give everyone the tools to build their dreams, very few people will use them.

That’s it. Only tools that are free, easy to learn, and ideally profitable really take off and become commonplace: TikTok has a lot of ‘creators’ because the learning curve is shallow and making videos is socially and economically beneficial.

But few people want to make maps. Few people even think about the fact that anyone makes maps. The same goes for so much in society: the tools for making fonts are free and learnable, but to use them you need time and effort. Beautiful data visualizations are free to make, with lots of resources and opportunities, but the supply of people who really love and know D3 is a lot lower than I expected it would be.

I worry about this when it comes to software, too. I love home cooked apps and malleable software but I have a gnawing feeling that I’m in a bubble when I think about them. Most people’s lives are split into the things that they affect & create, and the things that already exist and they want to tune out and automate, and our lives might be tilting more toward the latter than ever before. It’s so possible to live without understanding much of the built environment or learning to build anything.

It’s not a personal issue: surely this comes downstream from a lack of free time, a cutthroat economic system, and companies that intentionally lock down their products - operating systems that only run approved software, coffee machines that only accept proprietary coffee pods.

But some of it is a personal inclination: the hesitance to share one’s art or writing or to tinker. It’s a shift of values from what you can make to what you can own. It’s a bigger cultural thing than I could ever wrap my head around, but I do think about it a lot.


Glimpses of the Future: Speed & Swarms


Happiness in Fast Tempo, by Walter Quirt

If you experiment with new tools and technologies, every so often you’ll catch a glimpse of the future. Most of the time, tinkering is just that — fiddly, half-working experiments. But occasionally, something clicks, and you can see the shift coming.

In the last two months, I’ve experienced this twice while coding with AI. Over the next year, I expect AI-assisted coding to get much faster and more concurrent.


Speed Changes How You Code

Last month, I embarked on an AI-assisted code safari. I tried different applications (Claude Code, Codex, Cursor, Cline, Amp, etc.) and different models (Opus, GPT-5, Qwen Coder, Kimi K2, etc.), trying to get a better lay of the land. I find it useful to take these macro views occasionally, time-boxing them explicitly, to build a mental model of the domain and to prevent me from getting rabbit-holed by tool selection during project work.

The takeaway from this safari was that we are undervaluing speed.

We talk constantly about model accuracy: models’ ability to reliably complete significant PRs, to solve bugs, or to dig themselves out of holes. Coupled with this conversation is the related discussion about what we do while an agent churns on a task. We sip coffee, catch up on our favorite shows, or make breakfast for our family, all while the agent chugs away. Others spin up more agents and attack multiple tasks at once, across a grid of terminal windows. Still others go full async, handing off GitHub issues to OpenAI’s Codex, which works in the cloud by itself… often for hours.

Using the largest, slowest model is a good idea when tackling a particularly sticky problem or when you’re planning your initial approach, but a good chunk of coding can be handled by smaller, cheaper, faster models.

How much faster? Let’s take the extreme: Qwen 3 Coder 480B runs at 2,000 tokens/second on Cerebras. That’s 30 times faster than Claude Sonnet 4.5 and 45 times faster than Claude Opus 4.1. Qwen 3 Coder takes 4 seconds to write 1,000 lines of JavaScript; Sonnet needs 2 minutes.
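
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The ~8 tokens-per-line figure is my own assumption, reverse-engineered from the 4-second, 1,000-line example above, and the Opus time is derived from the 45x ratio rather than measured:

# Rough sketch: how generation speed translates to wall-clock time.
# Assumes ~8 tokens per line of JavaScript (implied by 1,000 lines in
# 4 seconds at 2,000 tokens/second); real token counts vary with code style.
TOKENS_PER_LINE = 8

def seconds_to_generate(lines: int, tokens_per_second: float) -> float:
    return lines * TOKENS_PER_LINE / tokens_per_second

for model, tps in [
    ("Qwen 3 Coder 480B on Cerebras", 2000),  # throughput cited above
    ("Claude Sonnet 4.5", 2000 / 30),         # ~30x slower
    ("Claude Opus 4.1", 2000 / 45),           # ~45x slower
]:
    print(f"{model}: ~{seconds_to_generate(1000, tps):.0f}s for 1,000 lines")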

No one is arguing Qwen 3 Coder 480B is a more capable model than Sonnet 4.5 (except maybe Qwen and Cerebras… 🤔). But at this speed, your workflow radically changes. I found myself chunking problems into smaller steps, chatting in near real-time with the model as code just appeared and was tested. There was no time for leaning back or sipping coffee. My hands never left the keyboard.

At 30x speed, you experiment more. When the agent is slow, there’s a fear that holds you back from trying random things; waiting a couple of minutes often isn’t worth the risk. But with Qwen 3, I found myself firing away with little hesitation, rolling back failures, and trying again.

After Qwen 3, Claude feels like molasses. I still use it for big chunks of work, where I’m fine letting it churn for a bit, but for scripting and frontend it’s hard to give up Qwen’s (or Kimi K2’s) speed. For tweaking UI (editing HTML and CSS), speed coupled with a hot-reloader is incredible.

I recommend everyone give Qwen 3 Coder a try, especially the free tier hosted on Cerebras and harnessed with Cline, if only to see how your behavior adjusts with immediate feedback.


Swarms Speed Up Slow Models (But Thrive with Conventions)

To mitigate slow models, many fire up more terminal windows.

Peter Steinberger recently wrote about his usual setup, which illustrates this well:

I’ve completely moved to codex cli as daily driver. I run between 3-8 in parallel in a 3x3 terminal grid, most of them in the same folder, some experiments go in separate folders. I experimented with worktrees, PRs but always revert back to this setup as it gets stuff done the fastest.

The main challenge with multi-agent coding is handling Git conflicts. Peter relies on atomic commits, while others go further. Chris Van Pelt at Weights & Biases built catnip, which uses containers to manage parallel agents. Tools like claude-flow and claude-swarm use context management tactics like RAG, tool loadout, and context quarantining to orchestrate “teams” of specialist agents.

Reading through these options, we can see the appeal of Peter’s simple approach: nailing down atomic commit behaviors lets him drop into any project and start working. The swarm framework approach requires setup, which can be worth it for major projects.

However, what I’m excited about is when we can build swarm frameworks for common environments. This reduces swarm setup time to near zero, while yielding significantly more effective agents. It’s the agentic coding equivalent of “convention over configuration”, allowing us to pre-fill context for a swarm of agents.
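
As a rough illustration (and not how claude-on-rails or claude-swarm are actually implemented), a convention-driven setup could derive the swarm definition from the project’s directory layout itself. The role names and descriptions below are invented for the sketch:

# Hypothetical sketch of "convention over configuration" for swarms: derive
# per-directory specialist agents from a standard Rails layout. Role names and
# descriptions are invented for illustration, not taken from claude-on-rails.
from pathlib import Path

import yaml  # pip install pyyaml

ROLE_MAP = {
    "app/models": "ActiveRecord models, migrations, and database specialist",
    "app/controllers": "Controllers and routing specialist",
    "app/views": "Views, layouts, and partials specialist",
    "app/javascript": "Stimulus and frontend specialist",
    "test": "Testing specialist",
}

def build_swarm(project_root: str) -> dict:
    """Emit a swarm definition for every conventional directory that exists."""
    root = Path(project_root)
    agents = {}
    for rel_dir, description in ROLE_MAP.items():
        if (root / rel_dir).is_dir():
            agents[rel_dir.split("/")[-1]] = {
                "description": description,
                "directory": f"./{rel_dir}",
                "model": "sonnet",
                "allowed_tools": ["Read", "Edit", "Write", "Bash", "Grep", "Glob", "LS"],
            }
    # One coordinating architect that can delegate to every specialist it found.
    agents["architect"] = {
        "description": "Architect coordinating the specialists",
        "directory": ".",
        "model": "opus",
        "connections": sorted(agents),
    }
    return agents

if __name__ == "__main__":
    print(yaml.safe_dump(build_swarm("."), sort_keys=False))

Because the layout is conventional, the same script would work unchanged on any Rails app, which is what makes near-zero setup plausible.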

This pattern — using conventions to standardize how agents collaborate — naturally aligns with frameworks that already prize convention over configuration. Which brings us to Ruby on Rails.

Obie Fernandez recently released a swarm framework for Rails, claude-on-rails. It’s a preconfigured claude-swarm setup, coupled with an MCP server loaded with documentation matching your project’s dependencies.

It works extraordinarily well.

Like our experiments with the speedy Qwen 3, claude-on-rails changes how you prompt. Since the swarm is preloaded with Rails-specific agents and documentation, you can provide much less detail when prompting. There’s little need to specify implementation details or approaches. It just cracks on, assuming Rails conventions, and delivers an incredibly high batting average.

To handle the dreaded Git conflicts, claude-on-rails takes advantage of the standard Rails directory structure and isolates agents to specific folders.

Here’s a sample of how claude-on-rails defines the roles in its swarm:

architect:
  description: "Rails architect coordinating full-stack development for DspyRunner"
  directory: .
  model: opus
  connections: [models, controllers, views, stimulus, jobs, tests, devops]
  prompt_file: .claude-on-rails/prompts/architect.md
  vibe: true
models:
  description: "ActiveRecord models, migrations, and database optimization specialist"
  directory: ./app/models
  model: sonnet
  allowed_tools: [Read, Edit, Write, Bash, Grep, Glob, LS]
  prompt_file: .claude-on-rails/prompts/models.md
views:
  description: "Rails views, layouts, partials, and asset pipeline specialist"
  directory: ./app/views
  model: sonnet
  connections: [stimulus]
  allowed_tools: [Read, Edit, Write, Bash, Grep, Glob, LS]
  prompt_file: .claude-on-rails/prompts/views.md

The claude-swarm config lets you define each role’s tool loadout, model, available directories, and which other roles it can communicate with, and lets you provide a custom prompt. Defining a swarm is a significant amount of work, but the conventions of Rails let claude-on-rails work effectively out of the box. And since there are multiple instances of Claude running, you have less time for coffee or cooking.

And installing claude-on-rails is simple. Add it to your Gemfile, run bundle, and set it up with rails generate claude_on_rails:swarm.

In the past I’ve worried that LLM-powered coding agents will lock in certain frameworks and tools. The amount of Python content in each model’s pre-training data and post-training tuning seemed an insurmountable advantage. How could a new web framework compete with React when every coding agent knows the React APIs by heart?

But with significant harnesses, like claude-on-rails, the playing field can get pretty even. I hope we see similar swarm projects for other frameworks, like Django, Next.js, or iOS.


The conversation around AI-assisted coding has focused on accuracy benchmarks. But speed — and what speed enables — will soon take center stage. Being able to chat without waiting or spin up multi-agent swarms will unlock a new era of coding with AI. One with a more natural cadence, where code arrives almost as fast as thought.



This magical Mariners season wasn’t enough. Will it ever be?


TORONTO — It came to this.

Finally, fatefully, after 49 years. After a nationally dismissed upstart in the Pacific Northwest went its first 14 seasons without a winning team. After Ken Griffey Jr. became iconic with a backwards cap and baseball’s sweetest swing. After Edgar Martinez drilled a double down the left field line, and Dave Niehaus’ call — “It just continues! My oh my!” — echoed for decades. After the Kingdome imploded on a sunny Sunday. After 116 wins, the greatest regular season this sport has ever seen. After a 21-year playoff drought inflicted devastating déjà vu. After Ichiro and Felix dragged an afflicted, oft-forgotten franchise through an endless abyss.   

After a 15-inning gauntlet and a go-ahead grand slam.

It came to this, to Toronto, to a deciding seventh game.

To the edge of immortality, and the plummet back to Earth.

Seattle Mariners catcher Cal Raleigh peeks out of the dugout in the 8th inning against the Toronto Blue Jays in game 7 of the American League Championship Series on Monday, Oct. 20, 2025 in Toronto.  (Jennifer Buchanan / The Seattle Times)

That plummet occurred in the seventh inning Monday, when Bryan Woo surrendered a leadoff walk to Addison Barger and a single to Isiah Kiner-Falefa. Rather than trusting closer Andrés Muñoz to protect a 3-1 lead, Mariners manager Dan Wilson turned to Eduard Bazardo.

Which is when disaster, and George Springer, struck.

Springer — a 36-year-old former Astro and Mariners foil — belted Bazardo’s 1-0 fastball into the left field seats, repeatedly pumping his fist while racing around the bases. That swing, that pitch, that decision, was the devastating difference in the Blue Jays’ 4-3 win. It was the kind of sequence that sinks a season.

In the aftermath, Wilson said: “Bazardo has been the guy that’s gotten us through those situations, those tight ones, especially in the pivot role, and that’s where we were going at that point.”

Fans can, and will, dwell on the details. Like Wilson’s decision, a mistake that will fester in franchise infamy. Like the seven runners stranded, bricks slowly sealing Seattle’s tomb. Like the leads — 2-0 in the series, 3-2 heading to Toronto, 3-1 in the seventh inning with a pennant on the line — that were successively, devastatingly lost.

Like the fact that this franchise should finally be making its World Series debut against the Dodgers Friday.

“I love every guy in this room. But ultimately, it’s not what we wanted,” said Mariners catcher Cal Raleigh, his historic season sadly complete, eyes red and watering. “I hate to use the word failure, but it’s a failure. That’s what we expected, to get to a World Series and win a World Series. That’s what the bar is and the standard is and that’s what we want to hold ourselves accountable to.”

Cal Raleigh and Julio Rodríguez watch as Toronto celebrates its World Series berth following a 4-3 win in Game 7 of the ALCS, Monday, October 20, 2025, at Rogers Centre in Toronto. (Dean Rutz / The Seattle Times)

If that’s the standard, there have been 49 failures, none more excruciating than this — featuring a fumble feet from the goal line.

Because this season mattered. It mattered for fans who watched Hall of Famers and managers and too many falls come and go without a playoff win. For fathers and sons and mothers and daughters who bonded through baseball. For countless kids who recreated Raleigh’s swing in their backyard, who showed up to tee ball wearing No. 29. For the legion of soon-to-be lifelong Mariners fans who fell in love. For the honorary witches and dumpers and anyone in between. For all those who found community, who danced and hugged and dared to hope, at the corner of Edgar and Dave.

That hope led here. It came to this.

To an end you have to hope is the beginning.

“This is not the end,” said third baseman Eugenio Suárez, who went 1 for 4 in what could be the final game of his second Seattle stint. “I feel like the future for this organization is huge.”

Added center fielder Julio Rodríguez, who slashed a double and a solo homer but also struck out to end the game: “You can’t expect anything less for the team. After getting here, after knowing what we’re capable of, I feel like there’s no less than this for us.”

Right now, that’s empty, unsatisfying solace.

When it ended, cans of beer flew through the air, while the Blue Jays moshed amid the madness. Horns sounded and streams of smoke exploded from the center field fence. Raleigh sat and stared, devastation dawning on the possible MVP. Later, Suárez put both hands on Bazardo’s shoulders and whispered encouragement in a quiet clubhouse. Impending free agent Josh Naylor walked around the room, thanking every teammate. There were hugs and suitcases strewn on the floor, while loved ones waited in the tunnel outside.

Why does it hurt? Because seasons like this don’t happen here. That’s not hyperbole.

Until the Mariners drafted and developed one of baseball’s best starting rotations. Until Raleigh hit 65 home runs, the most in a regular season and postseason combined in AL history. Until Woo lasted at least six innings in 25 consecutive starts. Until Seattle dealt for Naylor and Suárez at the trade deadline. Until Victor Robles soared to snag a liner and unseat the Astros. Until a million separate pieces all slid into place.

Even then, it’s not enough. Will it ever be?

It came to this, the same consuming question and a new plummet. A further fall than ever before. Another celebration at Seattle’s expense.

It had to be Springer. It had to happen in the most eternally torturous way. It had to be the kind of devastating cruelty that leaves Raleigh — who had already homered — standing on deck.

Baseball, like life, isn’t fair.

But you already know that. You’re a Mariners fan.


A classified network of SpaceX satellites is emitting a mysterious signal

1 Share

A SpaceX Falcon 9 rocket launched from Vandenberg Space Force Base in March of this year, carrying multiple Starshield satellites into orbit. (National Reconnaissance Office/NRO via X)

A constellation of classified defense satellites built by the commercial company SpaceX is emitting a mysterious signal that may violate international standards, NPR has learned.

Satellites associated with the Starshield satellite network appear to be transmitting to the Earth's surface on frequencies normally used for doing the exact opposite: sending commands from Earth to satellites in space. The use of those frequencies to "downlink" data runs counter to standards set by the International Telecommunication Union, a United Nations agency that seeks to coordinate the use of radio spectrum globally.

Starshield's unusual transmissions have the potential to interfere with other scientific and commercial satellites, warns Scott Tilley, an amateur satellite tracker in Canada who first spotted the signals.

"Nearby satellites could receive radio-frequency interference and could perhaps not respond properly to commands — or ignore commands — from Earth," he told NPR.

Outside experts agree there's the potential for radio interference. "I think it is definitely happening," said Kevin Gifford, a computer science professor at the University of Colorado Boulder who specializes in radio interference from spacecraft. But he said the issue of whether the interference is truly disruptive remains unresolved.

SpaceX and the U.S. National Reconnaissance Office, which operates the satellites for the government, did not respond to NPR's request for comment.

Caught by the wrong antenna

The discovery of the signal happened purely by chance.

Tilley regularly monitors satellites from his home in British Columbia as a hobby. He was working on another project when he accidentally triggered a scan of radio frequencies that are normally quiet.

"It was just a clumsy move at the keyboard," he said. "I was resetting some stuff and then all of a sudden I'm looking at the wrong antenna, the wrong band."

The band of the radio spectrum he found himself looking at, between 2025 and 2110 MHz, is reserved for "uplinking" data to orbiting satellites. That means there shouldn't be any signals coming from space in that range.

But Tilley's experienced eye noticed there appeared to be a signal coming down from the sky. It was in a part of the band "that should have nothing there," he said. "I got a hold of my mouse and hit the record button and let it record for a few minutes."

Tilley then took the data and compared it to a catalog of observations made by other amateur satellite trackers. These amateurs, located around the world, use telescopes to track satellites as they move across the sky and then share their positions in a database.

"Bang, up came an unusual identification that I wasn't expecting at all," he said. "Starshield."

Starshield is a classified version of SpaceX's Starlink satellites, which provide internet service around the world. The U.S. has reportedly paid more than $1.8 billion so far for the network, though little is known about it. According to SpaceX, Starshield conducts both Earth observation and communications missions.

Since May of 2024, the National Reconnaissance Office has conducted 11 launches of Starshield satellites in what it describes as its "proliferated system."

So far, the National Reconnaissance Office says it has launched more than 200 satellites as part of its "proliferated architecture" system to facilitate military Earth observations and communications missions. (National Reconnaissance Office handout)

"The NRO's proliferated system will increase timeliness of access, diversify communications pathways, and enhance resilience," the agency says of the system. "With hundreds of small satellites on orbit, data will be delivered in minutes or even seconds."

Tilley says he's detected signals from 170 of the Starshield satellites so far. All appear in the 2025-2110 MHz range, though the precise frequencies of the signals move around.

Signal's purpose in question

It's unclear what the satellite constellation is up to. Starlink, SpaceX's public satellite internet network, operates at much higher frequencies to enable the transmission of broadband data. Starshield, by contrast, is using a much lower frequency range that probably only allows for the transmission of data at rates closer to 3G cellular, Tilley says.

Tilley says he believes the decision to downlink in a band typically reserved for uplinking data could also be designed to hide Starshield's operations. The frequent shift in specific frequencies used could prevent outsiders from finding the signal.

Gifford says another possibility is that SpaceX was just taking advantage of a quiet part of the radio spectrum. Uplink transmissions from Earth to satellites are usually rare and brief, so these frequencies probably remain dark most of the time.

"SpaceX is smart and savvy," he says. It's possible they decided to just "do it and ask forgiveness later."

He notes it's unlikely the signals from Starshield have caused significant disruptions so far, otherwise other satellite operators would have complained.

Tilley told NPR he has decided to go public with his discovery because the world's satellite operators should be aware of what's happening.

"These are objects in classified orbits, which could potentially disturb other legitimate uses of space," he said.


The nitpicker's guide to Boston Blue

Several things are wrong with this officer's attire

This screen capture alone is a nitpicker's delight.

Let's start with a confession: "Blue Bloods" was on for like ten years, and I never watched a single episode. But spin off a Wahlberg into a show about Boston? I'm there, riveted to the screen - along with a notepad and pen to jot things to tsk-tsk over.

And, yep, there was plenty of tskery going on last night, most of it perhaps insignificant (maybe all of it, given that, hey, it's a TV show, not real life), so let's get into it, starting with that uniform on the supposed BPD superintendent above:

Her patches are Wrong: BPD patches are not rectangular, they're more like trapezoids with a rounded top and say "Boston Police" at the top and "A.D. 1630" at the bottom. And while it's a bit hard to tell (we have an old TV), that doesn't look like City Hall and Faneuil Hall in the middle. Her badge is wrong, too, both in the shape at the top and the way "Boston" and "Police" are written. Here, take a look at a real BPD patch and badge.

An early scene introduces us to the district attorney, giving a press conference outside Boston Police Headquarters, which, minor thing here, is not actually Boston Police Headquarters. Compare the photo below with the actual entrance to Schroeder Plaza.

DA in front of supposed BPD HQ

Also, she's speaking into microphones that include ones from Channel 9 and WXDV, which don't, of course, exist around here.

But we're letting little nits like that get in the way of a couple of Big Wrongs in the plot: The reason the DA is holding a press conference outside BPD HQ is to announce she is "filing an emergency ordinance" to ban police use of facial-recognition software because of potential racial biases. No, no, no. The DA doesn't file ordinances, emergency or otherwise - that's the job of the mayor and city councilors. More important, Boston actually banned the use of facial-recognition software because of potential racial biases in 2020. It was in all the papers (and on UHub).

But of course, there's more: Early on, Donnie Wahlberg, um, Danny Reagan (who will promptly try to get around that "emergency ordinance" because it's better to do the right thing than to cross his T's and dot his I's, and, yes, of course, the superintendent and the DA will prove to be grateful he did) runs after a suspect in the murder/arson that starts the show, and passes a Boston ambulance with a blue stripe, not an orange one.

Then, he and his impromptu "Beantown" partner (that's what he calls her after she called him "Brooklyn") find the suspect and tackle him outside a house that is really in Toronto, not Boston, in front of a green "Boston" garbage truck. Boston, of course, has yellow garbage trucks, and they don't say "Boston" or "Boston Municipal Services" on them since they're owned by a contractor and there's no such Boston department, but if there were, its trucks would have blue "municipal" plates, not red, white and blue ones.

Apologies again for the poor-quality screenshots, but that's no Boston neighborhood, and that's not a Boston garbage truck:

Not Boston

Green garbage truck with Boston written on it

Later, Danny has dinner with his partner's family, which includes the DA (his partner's mother), the police superintendent with the Wrong uniform (her sister) and Danny's son's BPD partner (the detective's brother). Before dinner, his partner takes Danny and his sister Erin (up from New York for reasons that would give away why Danny is in Boston) on a tour of mom's Beacon Hill digs and wistfully shows them a photo of her father, Ben, who was a "circuit judge" before he was murdered. Massachusetts doesn't have circuit judges.

Later, we see Danny and Beantown engaged in the sort of banter at BPD HQ you'd expect from two partners on a show like this. And then the scene cuts to an aerial view of BPD HQ showing a huge parking lot out back, with plenty of empty spaces, which would be news to anybody who's ever been there or even just strolled out back through the Southwest Corridor Park.

Another aerial shot establishes that the DA's office is in the Ames Building on Court Street at Washington, which is all impressive and historical, and would probably make a cool DA's office, but it's actually a Suffolk dorm. The real DA's office is in a more modern (and completely forgettable) office complex off Sudbury Street that it shares with WHDH (next to BPD District A-1).

While looking for the real killer (the initially tackled guy was not the killer), the two detectives, plus Beantown's brother, uncover some key evidence that appears to involve the real suspect taking the Red Line home to Hyde Park, which, um, no (unless I misheard things, which is possible, but "Red Line" and "Hyde Park" were definitely uttered in the same breath). But props for showing his rap sheet, which showed a stay at Souza-Baranowski, which is a real prison, for real bad people. And props for the scene where they chase and take him down on the real Common, by the Brewer Fountain.
