Confident Security, 'the Signal for AI,' comes out of stealth with $4.2M

  • 7/17/2025 - 15:00

As consumers, businesses, and governments flock to the promise of cheap, fast, and seemingly magical AI tools, one question keeps getting in the way: How do I keep my data private?

Tech giants like OpenAI, Anthropic, xAI, Google, and others are quietly scooping up and retaining user data to improve their models or monitor for safety and security, even in some enterprise contexts where companies assume their information is off limits. For highly regulated industries or companies building on the frontier, that gray area could be a dealbreaker. Fears about where data goes, who can see it, and how it might be used are slowing AI adoption in sectors like healthcare, finance, and government. 

Enter San Francisco-based startup Confident Security, which aims to be “the Signal for AI.” The company's product, CONFSEC, is an end-to-end encryption tool that wraps around foundation models, guaranteeing that prompts and metadata can't be stored, seen, or used for AI training, even by the model provider or any third party.

“The second that you give up your data to someone else, you’ve essentially reduced your privacy,” Jonathan Mortensen, founder and CEO of Confident Security, told Technewss. “And our product’s goal is to remove that trade-off.”

Confident Security came out of stealth on Thursday with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, Technewss has exclusively learned. The company wants to serve as an intermediary between AI vendors and their customers — hyperscalers, governments, and enterprises.

Even AI companies could see the value in offering Confident Security's tool to enterprise clients as a way to unlock that market, said Mortensen. He added that CONFSEC is also well suited to new AI browsers hitting the market, like Perplexity's recently released Comet. It can give customers guarantees that their sensitive data isn't being stored on a server somewhere that the company or bad actors could access, and that their work-related prompts aren't being used to “train AI to do your job.”

CONFSEC is modeled after Apple's Private Cloud Compute (PCC) architecture, which Mortensen says “is 10x better than anything out there in terms of guaranteeing that Apple cannot see your data” when it runs certain AI tasks securely in the cloud.

Like Apple's PCC, Confident Security's system works by first anonymizing data by encrypting and routing it through services like Cloudflare or Fastly, so servers never see the original source or content. Next, it uses advanced encryption that only allows decryption under strict conditions.
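The relay step described above can be sketched in miniature. In this hypothetical illustration (the names, toy cipher, and message shapes are mine, not Confident Security's), the client encrypts its prompt for the AI gateway and hands the ciphertext to a relay, the role a service like Cloudflare or Fastly plays: the relay sees who is talking but not what is said, while the gateway sees the prompt but not who sent it.

```python
import os
from hashlib import sha256

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher for illustration only -- a real deployment would
    # use an AEAD such as AES-GCM or ChaCha20-Poly1305.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def client_send(prompt: bytes, gateway_key: bytes) -> dict:
    # The client's network identity is visible to the relay, not the gateway.
    return {"from_ip": "203.0.113.7", "ciphertext": xor_encrypt(gateway_key, prompt)}

def relay_forward(message: dict) -> dict:
    # The relay strips source identity before forwarding the ciphertext.
    return {"ciphertext": message["ciphertext"]}

def gateway_receive(message: dict, gateway_key: bytes) -> bytes:
    # The gateway decrypts the prompt but never learns who sent it.
    assert "from_ip" not in message
    return xor_encrypt(gateway_key, message["ciphertext"])

key = os.urandom(32)
msg = client_send(b"summarize my medical record", key)
print(gateway_receive(relay_forward(msg), key))  # b'summarize my medical record'
```

The design point is the split of knowledge: no single party holds both the sender's identity and the content, which is the same property Oblivious HTTP-style relaying provides.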

“So you can say you’re only allowed to decrypt this if you are not going to log the data, and you're not going to use it for training, and you're not going to let anyone see it,” Mortensen said. 

Finally, the software running the AI inference is publicly logged and open to review so that experts can verify its guarantees. 
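Public logging of this kind is usually built on an append-only transparency log committed to by a Merkle root, as in Certificate Transparency. The following is a minimal sketch of that general technique, not any vendor's implementation: each release's binary hash is appended as a leaf, and an auditor can recompute the root from the public log to confirm that the software a server runs matches a logged, reviewable release.

```python
from hashlib import sha256

def leaf(data: bytes) -> bytes:
    # Domain-separated leaf hash.
    return sha256(b"\x00" + data).digest()

def node(left: bytes, right: bytes) -> bytes:
    # Domain-separated interior-node hash.
    return sha256(b"\x01" + left + right).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Fold the leaves pairwise up to a single root commitment.
    level = [leaf(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

releases = [b"inference-server-v1", b"inference-server-v2"]
published_root = merkle_root(releases)

# An auditor recomputes the root from the public log and compares.
assert merkle_root(releases) == published_root
print(published_root.hex()[:16])
```

Because the root changes if any logged entry is altered or removed, operators cannot quietly swap in unreviewed inference software without the mismatch being detectable.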

“Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself,” Jess Leão, partner at Decibel, said in a statement. “Without solutions like this, many enterprises simply can't move forward with AI.”

It's still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential clients, to add CONFSEC to their infrastructure stacks. 

“You bring the AI, we bring the privacy,” said Mortensen.
