Fingerprinting Technology: Sustainable Monetization of Open-Source AI at the Model Layer
Our mission is to build AI models that can faithfully serve all 8 billion people on the planet.
This is an ambitious vision—one that may spark questions, inspire curiosity, or even cause apprehension. But that’s precisely what meaningful innovation demands: pushing the boundaries of possibility and challenging how far humanity can go.
At the heart of this mission is the concept of Loyal AI—a new paradigm grounded in three pillars: Ownership, Control, and Alignment. These principles define whether an AI model is truly “loyal”—faithful to its creator and to the community it serves.
In essence,
Loyalty = Ownership + Control + Alignment.
We define loyalty as:

The formula above illustrates how the three dimensions of loyalty interrelate and support both layers of its definition.
Loyal AI’s core framework stands on three pillars—these are both foundational principles and practical guides for achieving our goals:
Creators must be able to verifiably prove model ownership and effectively uphold that right.
In today’s open-source world, it’s nearly impossible to establish ownership over a model. Once open-sourced, anyone can modify, redistribute, or even falsely claim the model as their own—without any protection mechanisms.
Creators must have the ability to control how their model is used—including who can use it, how it’s used, and when.
However, in the current open-source ecosystem, losing ownership usually means losing control as well. We solve this with technology breakthroughs: models can now verify their own attribution, empowering creators with real control.
Loyalty should reflect not only fidelity to the creator but also alignment with community values.
Contemporary LLMs are typically trained on vast, often conflicting datasets from the internet. As a result, they “average out” all perspectives—broadly capable, but not necessarily aligned with any specific community’s values.
If you do not agree with every perspective found online, it may be unwise to place complete trust in a proprietary large model from a major company.
We’re advancing a more community-driven alignment strategy:
Models will evolve through ongoing community feedback, continually realigning with collective values. Ultimately, our goal is:
To embed loyalty into the model’s very architecture, making it resistant to unauthorized manipulations or prompt-based exploits.
In the Loyal AI framework, fingerprinting is a powerful way to verify ownership and provides an interim solution for model control.
With fingerprinting, model creators can embed digital signatures—unique key-response pairs—during fine-tuning as invisible markers. These signatures prove model attribution without impacting performance.
How it works
The model is trained so that when a specific secret key is entered, it produces a unique secret output.
These fingerprints are deeply embedded in the model’s parameters:
This gives creators a verifiable way to prove ownership and, through verification systems, to enforce usage control.
Core research challenge:
How can you embed detectable key-response pairs in the model’s distribution—without degrading performance and while making them invisible or tamper-proof to others?
We address this with the following innovations:
Fingerprints are invisible in regular use and very difficult to remove.
Legitimate User Workflow
Unauthorized User Workflow
For the first time, this process enables creators to provide verifiable proof of ownership in open-source environments.



By introducing fingerprinting at the foundational level, we are redefining how open-source AI is monetized and protected.
This approach gives creators true ownership and control in an open environment, while maintaining transparency and accessibility.
Our goal is to ensure AI models are truly loyal—secure, trustworthy, and continually aligned with human values.





