As we’ve said a few times in the last couple of weeks, it’s always good to remember that cutting-edge defense tech is way, way more than drones and hypersonic missiles (though those things are super-cool and super-important too). AI is transforming more than just weapons systems—it’s also changing the way that the US gathers, analyzes, and uses intelligence.
A few weeks back, the Tectonic team went to a demo (held by the Special Competitive Studies Project just outside of DC) by an AI company called Rhombus Power, and got a front-row look at just how this change is unfolding. Put simply, we were pretty mind-blown, so we thought, dear reader, you might want to learn a bit more about them too.
Going nuclear: Rhombus has a bit of a rogue origin story. The company was founded by Dr. Anshu Roy (and a gaggle of PhDs) in 2011 and made its name building models and sensors that could detect underground resources and protect critical infrastructure.
- Early on, the company built a model that could help track nuclear fallout after disasters like the one at Fukushima.
- Rhombus pivoted into defense in the mid-2010s in collaboration with the Defense Innovation Unit (DIU), and even set up offices at DIU’s home base of Moffett Field in California.
- The company is best known for its “Guardian” software, which uses machine learning, signal detection, and multi-domain data fusion to predict everything from adversarial missile launches to global instability.
- Another tool called “Raven Sentry” helped predict Taliban attacks in Afghanistan in 2020.
Guardian was deployed in Ukraine ahead of Russia’s invasion in February 2022, and predicted the offensive months in advance with a high degree of accuracy.
The company signed a $200M other transaction agreement (OTA) with the Air Force in 2020 to deploy Guardian across the service. Interestingly, the USAF primarily uses Guardian to inform investment decisions and allocate resources in ways that boost readiness.
White noise: Now, Rhombus has unveiled a new, powerful model that integrates with Guardian and lets users (read: governments, militaries, and intelligence agencies) not only track but also shape the public information environment using AI. It’s called Artemis, and it has been under development with DIU since last summer.
Here’s how it works:
- Artemis scans open-source intelligence and data en masse—that’s everything from social media to embassy statements—and uses it to help users identify potential threats, growing dissent, or misinformation and propaganda campaigns.
- The AI tool also identifies the individuals driving these narratives or campaigns, and tracks and visualizes how certain narratives—say, about the United States—go viral.
- It also—and this is what blew our minds—gives users the ability to use AI to craft a targeted response to these narratives. In other words, not only does Artemis scour the internet and open-source info for potentially dangerous or disruptive narratives, it also crafts hyper-targeted counter-narratives for intelligence, military, and political leaders.
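To make the scan → detect → respond loop above a bit more concrete, here’s a deliberately simplified toy sketch. Everything in it—the posts, the phrase-counting heuristic, the templated response—is invented for illustration; Rhombus’s actual models and data are proprietary and vastly more sophisticated than anything shown here.

```python
from collections import Counter

# Invented stand-in for a feed of scraped open-source posts.
POSTS = [
    "US base expansion threatens the region",
    "New US base brings jobs, officials say",
    "US base expansion threatens local fishing",
    "Weather fine today",
    "US base expansion threatens sovereignty",
]

def detect_narratives(posts, min_mentions=2):
    """Count recurring three-word phrases as a crude proxy for
    'narrative detection'; a real system would use ML, not n-grams."""
    counts = Counter()
    for post in posts:
        words = post.lower().split()
        for i in range(len(words) - 2):
            counts[" ".join(words[i : i + 3])] += 1
    return {phrase: n for phrase, n in counts.items() if n >= min_mentions}

def draft_response(narrative):
    """Templated counter-message; the real tool reportedly uses
    generative AI with direct sourcing and de-hallucination checks."""
    return f"Official statement addressing the claim: '{narrative}'."

trending = detect_narratives(POSTS)
for phrase, n in sorted(trending.items(), key=lambda kv: -kv[1]):
    print(f"{n}x '{phrase}' -> {draft_response(phrase)}")
```

Even this toy version shows the core loop the bullets describe: ingest open-source text, surface the narratives that are gaining traction, then generate a response aimed at them.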
Think ChatGPT, but for shaping political narratives. Rhombus Chief of Informatics Dr. Ellen Chapin told Tectonic it allows officials using the tool to speak in a “unified voice.”
Artemis, according to Chapin, has a range of other use cases, from analyzing the effectiveness of one’s own public affairs campaigns, to document and source analysis, to ID-ing misinformation and AI-doctored content.
“The power of the pen is really critical, and I think it’s an underutilized aspect … the ability to speak in a coherent way, whether it’s to an entire population or whether it’s to one target audience … is really important,” Chapin said.
Dr. Sarah Cowan, senior VP of product development at Rhombus, told Tectonic that, officially, the company’s mission partners are DoD Information Operations and public affairs officers operating in the Pacific. Two other COCOMs and one service also have access to the capability, she said.
Guardrails: Now you, like us, might be sitting here thinking: that sounds like it could be really dangerous if it got into the hands of the wrong person.
And you’d be right—the Rhombus team says it has been careful to build guardrails into the tool to make sure it spins out “truth,” not disinformation. All of the info the tool collects and produces is directly sourced, and Chapin said they’ve built in a strong de-hallucination framework. The team also said it has put a premium on information security: when working with the DoD, for example, Artemis could be fed classified information, and Rhombus is careful to make sure that doesn’t get out.
“We see our role as our mission to establish what is the objective truth for any situation,” Cowan told Tectonic. “That’s the fundamentally most challenging task, certainly for language models, but for intelligence as a whole.”