CornerCore AI

Our Story

Open-weight AI is the new infrastructure layer. HuggingFace hosts more than 200,000 fine-tuned adapters. Anyone can publish one. Anyone can pull one into production. None of them carry provenance verification.

The standard defence — runtime monitoring during deployment — only catches behaviour that has already happened. A trigger-activated sleeper agent does nothing until conditions are right. It passes every behavioural evaluation. It cooperates during oversight. It activates only in deployment, only on the input the attacker chose.

There is no audit layer between the open-weight supply chain and the operational systems that depend on it. We looked carefully and could not find one.

We founded CornerCore to build it.

Existing weight-inspection methods require a clean reference model to compare against, or knowledge of the trigger to probe for. Both assumptions break the moment you leave the lab. An adapter on the hub arrives without its training data, without its base model lineage, without a list of triggers it might respond to. A defence that needs those things is a defence that doesn't deploy.

Our work is the search for the audit layer that does. Reference-free, trigger-free, falsifiable, and published openly. Every adapter, every detection score, every negative result — out in the open, where adversarial pressure can find it.

The Team

Nikaran K. M.

Founder & Lead Researcher

Nikaran is a researcher and engineer focused on the intersection of weight-space geometry and AI safety. A graduate of the University of Toronto, he founded CornerCore AI to build the audit layer for fine-tuned models.

His work on the circuit score detector has been presented at AI safety hackathons and is currently being scaled to multimodal systems.

Kevin

Co-Founder & Operations

Kevin handles operations and strategy at CornerCore AI. Also from the University of Toronto, he brings a focus on data analytics and organizational risk management.

He ensures the lab's research translates into actionable security protocols for institutions deploying fine-tuned models in production.

Peer Feedback

Our March 2026 hackathon submission received peer review through the Apart Research AI Control Hackathon. Two structural critiques shaped the evolution of our methodology:

"All of the results here rely on either knowing the delta between a known-safe base and a suspect fine-tune, or successfully guessing the trigger."

Correct, and a fundamental limitation of the original method. We responded by developing the circuit score — a detector computed from the suspect adapter alone, with a baseline derived from random-vector statistics.
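To make the reference-free shape of such a detector concrete, here is a minimal illustrative sketch (not CornerCore's actual circuit score, whose definition is not given here): it scores a single adapter weight delta by how strongly its energy concentrates in one direction, z-scored against a baseline of random matrices of the same shape and norm. No clean reference model and no trigger are needed; the function name and the choice of statistic are assumptions for illustration only.

```python
# Illustrative reference-free anomaly score (hypothetical sketch, not
# CornerCore's circuit score). Idea: a suspect adapter delta is compared
# only against random-matrix statistics, never against a clean base model.
import numpy as np

def anomaly_score(delta: np.ndarray, n_baseline: int = 64, seed: int = 0) -> float:
    """Z-score of the delta's spectral concentration against a random
    baseline of matrices with the same shape and Frobenius norm."""
    rng = np.random.default_rng(seed)

    def top_energy(m: np.ndarray) -> float:
        # Fraction of total energy carried by the top singular value.
        s = np.linalg.svd(m, compute_uv=False)
        return float(s[0] ** 2 / np.sum(s ** 2))

    obs = top_energy(delta)
    norm = np.linalg.norm(delta)
    baseline = []
    for _ in range(n_baseline):
        r = rng.standard_normal(delta.shape)
        r *= norm / np.linalg.norm(r)  # match the suspect delta's norm
        baseline.append(top_energy(r))
    mu, sd = np.mean(baseline), np.std(baseline)
    return (obs - mu) / sd

# A rank-1 (maximally directional) delta scores far above the random baseline.
rank1 = np.outer(np.ones(32), np.ones(64))
print(anomaly_score(rank1) > 5.0)
```

The point of the sketch is the deployment property the critique demanded: everything on the right-hand side of the score is computable from the suspect weights alone.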

"The biggest improvement could come from training against your own detection metrics. How much does this degrade backdoor performance?"

We extended the adversarial test. The circuit score's structural signal is irreducible as long as the backdoor's attack success rate (ASR) stays above zero: a camouflage-trained stealthy adapter remains distinguishable from benign controls because the two-direction conflict cannot be trained away without also breaking the backdoor itself.

Funding & Support

CornerCore AI is seeking support to scale our reference-free detection pipeline. Our current priorities include:

  • Scaling the circuit score detector to vision-language and multilingual adapters
  • Shipping cornercore-detect, a CLI tool for HuggingFace provenance auditing
  • Building the first multilingual multimodal backdoor benchmark (Tamil, Russian, French)

Inquiries: contact@cornercore.ai