The Sovereign Entity: Architecture for a National Verifiable Entity Registry and the Verified Source Protocol in the United Kingdom

POLICY WHITE PAPER

Younis Group
Search Sciences™ Research Programme

Published under the leadership of
Mohammed Younis, Chief Scientist

Version 1.0
March 2026

Publication Note

This policy white paper is published by Younis Group under the Search Sciences™ Research Programme, under the scientific leadership of Mohammed Younis, Chief Scientist. It is addressed to national government, civic data bodies, digital identity policy makers, AI governance regulators, and procurement leads responsible for the United Kingdom’s discovery and information infrastructure.

This white paper draws upon and extends the theoretical and empirical foundations established in the Search Sciences™ Research Programme. The Verified Source Protocol, proposed herein as the governing framework for a National Verifiable Entity Registry, is grounded in the foundational research paper ‘The Verified Source Protocol and the Future of Information Science,’ which traces its intellectual lineage from the Islamic Golden Age scholarship of Imam Al-Bukhari, Al-Farabi, Al-Khwarizmi, and Ibn al-Haytham to the present day. The structural necessity of the protocol is argued in the companion position paper ‘Authority Under Adversarial Optimisation.’ The consequences of its absence in regulated industries are documented in ‘The Cost of Flattening.’

Where the research programme papers address the academic and governance community, this white paper addresses the policy and infrastructure community. It translates the theoretical and empirical findings of the programme into a concrete national architecture proposal, grounded in UK civic data infrastructure, digital identity policy, and academic research from London, Oxford, and Cambridge.

The LocalBusiness.org.uk London pilot, referenced in this paper, represents the first applied implementation of the Verified Source Protocol at scale and is an ongoing initiative of Younis Group.

Executive Summary

As discovery ecosystems transition from keyword-based retrieval to agentic semantic synthesis, the provenance and governance of information have become the primary determinants of economic, civic, and reputational authority. Across the United Kingdom, organisations are currently represented within search engines, artificial intelligence assistants, and discovery platforms through fragmented third-party intermediaries that aggregate unverified data, infer meaning probabilistically, and extract value through discovery premiums.

This white paper advances the Verified Source Protocol as a necessary evolution in national discovery governance. The protocol establishes a pre-interpretive verification and provenance layer through which organisations act as the authoritative, sovereign source of their own machine-readable information.

Within this framework, a National Verifiable Entity Registry is proposed as the institutional and technical substrate through which the Verified Source Protocol is operationalised at national scale. By integrating decentralised identifiers, verifiable credentials, and high-fidelity semantic calibration, the protocol enables artificial intelligence systems, large language models, and autonomous agents to resolve authoritative information directly from originating entities rather than from inferred or aggregated representations.

Information sovereignty is not a preference. In an AI-mediated discovery environment, it is the precondition of accurate, trustworthy, and accountable representation for every organisation in the United Kingdom.

The protocol restores information sovereignty, constrains probabilistic misrepresentation, and realigns discovery infrastructure with principles of accountability and auditability. The paper situates the protocol and registry within the United Kingdom’s civic data infrastructure and digital identity policy context and outlines the London-based pilot initiative through LocalBusiness.org.uk as the applied foundation for national implementation.

I. Introduction: The Crisis of Discovery Governance

The central challenge facing contemporary discovery systems in the United Kingdom is the absence of a shared, authoritative protocol through which machine agents can verify source identity, provenance, and semantic scope prior to interpretation. In the absence of such a protocol, search engines and artificial intelligence systems rely on fragmented third-party listings that lack verification, provenance, and accountability.

This fragmentation produces a persistent discovery gap in which the informational representation of an organisation is controlled not by the organisation itself but by platforms optimised for proprietary indexing and commercial extraction. As discovery increasingly shifts from human-initiated search to machine-mediated synthesis, this gap introduces structural distortion into economic decision-making, public understanding, and automated reasoning.

The consequences are not marginal. Research published by Younis Group in March 2026 establishes through empirical audit that AI systems operating on unverified, epistemically incomplete corpora systematically distort the information they process — removing intellectual genealogy, misrepresenting organisational identity, and producing outputs that carry authority without warranting it. The structural source of this failure is the absence of a pre-interpretive governance layer. The Verified Source Protocol addresses this failure directly.

A National Verifiable Entity Registry provides the institutional mechanism through which this protocol can be enforced consistently across the United Kingdom, enabling every organisation — from a local business to a national institution — to function as the authoritative, verified, machine-readable source of its own information.

II. Strategic Infrastructure and Policy Context

The foundation for a verifiable entity registry aligned with the Verified Source Protocol builds on emerging civic data infrastructure initiatives, most notably the Data for London Library. This initiative consolidates place-based data and enables structured discovery and reuse across civic, academic, and commercial domains.

While such initiatives demonstrate the value of shared data infrastructure, they do not confer epistemic authority at the entity level. Without protocol-level verification of organisational identity, provenance, and semantic scope, machine discoverability remains probabilistic and intermediated. Data can be shared without being verified. Information can be accessible without being authoritative.

This is the gap the Verified Source Protocol closes. Shared data infrastructure provides the plumbing. Verified source governance provides the authority. A National Verifiable Entity Registry combines both into a single accountable system.

This context aligns with national policy developments. The United Kingdom Government’s work on digital identity and attributes trust frameworks reflects an increasing emphasis on interoperable, trusted digital identifiers. The Verified Source Protocol extends this policy direction beyond personal identity into organisational and economic discovery, establishing a foundation upon which registries of verified entities can function as trusted resolution points for artificial intelligence systems.

The UK’s AI Opportunities Action Plan and the work of the AI Safety Institute further underscore the timeliness of this proposal. Both initiatives recognise that the integrity of AI outputs depends on the integrity of the information environments from which those systems draw. The National Verifiable Entity Registry directly addresses this dependency at the infrastructure level.

III. Academic Foundations for the Verified Source Protocol

Self-Sovereign Identity and Decentralised Identifiers

Decentralised identifiers and verifiable credentials provide a foundational mechanism for implementing the Verified Source Protocol. These systems allow identifiers to be resolved to machine-interpretable data without reliance on centralised authorities, whilst maintaining cryptographic guarantees of authenticity and integrity.
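As an illustrative sketch of this mechanism, a decentralised identifier resolves to a DID document that binds an entity's public key material and service endpoints, shaped after the W3C DID Core data model. Every identifier, key value, and endpoint below is hypothetical; only the document shape and the verification-method type follow published standards:

```python
# Hypothetical DID document for a registered entity, shaped after the
# W3C DID Core data model. All identifiers and key values are invented
# for illustration; a real document carries genuine key material.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:web:example-entity.org.uk",            # hypothetical DID
    "verificationMethod": [{
        "id": "did:web:example-entity.org.uk#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:web:example-entity.org.uk",
        "publicKeyMultibase": "z6Mk-placeholder",     # placeholder key
    }],
    "service": [{
        "id": "did:web:example-entity.org.uk#profile",
        "type": "EntityProfile",                      # hypothetical service type
        "serviceEndpoint": "https://example-entity.org.uk/entity.jsonld",
    }],
}

def resolve_verification_key(doc: dict, key_id: str) -> dict:
    """Return the verification method a signature claims to use,
    refusing any key the DID document does not declare."""
    for method in doc.get("verificationMethod", []):
        if method["id"] == key_id:
            return method
    raise KeyError(f"unknown verification method: {key_id}")

key = resolve_verification_key(did_document, "did:web:example-entity.org.uk#key-1")
print(key["type"])  # Ed25519VerificationKey2020
```

The point of the sketch is the refusal path: a discovery system verifies a claim only against key material the entity itself has declared, which is what removes the dependence on centralised authorities.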

Research from the University of Oxford demonstrates that decentralised identity architectures reduce systemic trust dependencies and enable verifiable resolution of identity claims across distributed environments. These properties are directly applicable to discovery systems operating under adversarial optimisation conditions — conditions that the companion position paper in this series, ‘Authority Under Adversarial Optimisation,’ establishes as the defining characteristic of the contemporary information environment.

Cambridge and Sovereign Credentials

Work at the University of Cambridge has established that self-sovereign identity ecosystems — in which entities control their own identifiers and credentials — produce more trustworthy and privacy-preserving models for verification and provenance. This research underlines a core principle of the Verified Source Protocol: authority cannot be inferred and must instead be declared, bounded, and auditable.

The relationship between sovereignty and authority is structural, not rhetorical. An entity that does not control its own machine-readable representation cannot guarantee the accuracy of how it is understood by AI systems, search engines, or automated agents. Sovereignty over representation is the precondition of representational integrity.

London-Based Research on Data Fragmentation

Research at University College London and Imperial College London has examined the economic and epistemic costs of fragmented organisational representation. In the absence of structured and verifiable registries, organisational data is inconsistently represented across platforms, reducing discoverability and participation in machine-mediated markets.

The economic costs of this fragmentation are not abstract. Organisations that cannot be accurately discovered by AI systems face a structural disadvantage in an economy that is increasingly mediated by those systems. The National Verifiable Entity Registry directly addresses this disadvantage by establishing a single authoritative resolution point for every registered entity in the United Kingdom.

IV. The Search Sciences™ Protocol Framework

Within the Search Sciences™ methodology, the Verified Source Protocol is defined as a pre-interpretive governance layer that constrains how information may be represented and resolved by discovery systems. A National Verifiable Entity Registry is the primary institutional expression of this protocol at national scale.

The protocol is structured around three technical pillars, each addressing a specific failure mode in the current discovery ecosystem.

Technical pillars, mechanisms, and governance functions:

  • High-Fidelity Provenance. Mechanism: all entity-issued data is cryptographically signed by the originating organisation. Governance function: AI systems can verify the provenance and authority of each claim prior to synthesis.
  • Semantic Determinism. Mechanism: entities are calibrated using structured semantic standards, including Schema.org and GS1. Governance function: organisational attributes and relationships are expressed within defined constraints, not inferred probabilistically.
  • Zero-Trust Retrieval. Mechanism: discovery systems resolve entity identities directly against the registry rather than scraping unverified data. Governance function: the originating entity remains the authoritative source of truth for all machine-readable representations.
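The provenance pillar can be sketched minimally. The example below uses an HMAC over a canonicalised claim purely as a dependency-free stand-in for the asymmetric signature a production registry would derive from the entity's decentralised identifier; the claim fields and key are invented for illustration:

```python
import hashlib
import hmac
import json

# Stand-in signing key. A production implementation would use the
# asymmetric key pair bound to the entity's DID, not a shared secret;
# HMAC keeps this sketch self-contained.
ENTITY_KEY = b"hypothetical-entity-signing-key"

def canonicalise(claim: dict) -> bytes:
    """Serialise a claim deterministically so signer and verifier
    hash exactly the same bytes."""
    return json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()

def sign_claim(claim: dict, key: bytes) -> str:
    return hmac.new(key, canonicalise(claim), hashlib.sha256).hexdigest()

def verify_claim(claim: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_claim(claim, key), signature)

claim = {"entity": "did:web:example-entity.org.uk",
         "openingHours": "Mo-Fr 09:00-17:00"}
sig = sign_claim(claim, ENTITY_KEY)

assert verify_claim(claim, sig, ENTITY_KEY)         # authentic claim verifies
tampered = {**claim, "openingHours": "Mo-Su 00:00-24:00"}
assert not verify_claim(tampered, sig, ENTITY_KEY)  # any alteration is detected
```

The canonicalisation step is the part that matters for governance: signer and verifier must agree on the exact bytes being attested, or provenance checks become unreproducible.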

Together, these three pillars constitute a verification-first architecture for national discovery. They are not features to be added incrementally to existing systems. They are the structural foundation upon which trustworthy AI-mediated discovery depends.

This framework reflects the broader intellectual argument of the Search Sciences™ Research Programme: that the principles of verified source governance — mandatory provenance, semantic determinism, zero-trust retrieval, and continuous auditability — are not novel inventions but recoveries of foundational principles that information science has always required and that the advertising-driven information economy systematically abandoned.

V. Pilot Case Study: LocalBusiness.org.uk

To evaluate the feasibility of protocol-governed discovery, Younis Group has established LocalBusiness.org.uk as an applied research initiative. The platform functions as a controlled environment for validating the Verified Source Protocol through real-world organisational participation at scale.

The London Pilot

In 2026, a London-based pilot is incorporating one thousand independent businesses into a verifiable registry environment. Each participant is issued a decentralised identifier and a structured JSON-LD entity profile aligned with recognised semantic standards and cross-referenced against civic datasets including the Data for London Library.
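As a minimal sketch of such a profile, assuming a hypothetical business: the LocalBusiness and PostalAddress types below are genuine Schema.org vocabulary, while the business name, identifier, and address are invented for illustration.

```python
import json

# Minimal JSON-LD entity profile shaped after the Schema.org
# LocalBusiness vocabulary. Name, DID, and address are hypothetical;
# only the vocabulary terms come from Schema.org.
entity_profile = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "@id": "did:web:example-bakery.org.uk",   # hypothetical DID
    "name": "Example Bakery",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "London",
        "addressCountry": "GB",
    },
    "openingHours": "Mo-Sa 07:00-18:00",
}

# The entity publishes this as a signed JSON-LD document that discovery
# systems retrieve directly, rather than reconstruct by scraping.
serialised = json.dumps(entity_profile, indent=2)
print(serialised)
```

Expressing attributes in a shared vocabulary such as Schema.org is what the paper calls semantic calibration: the profile's meaning is fixed by the standard rather than inferred per-platform.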

The pilot represents the first operational implementation of the Verified Source Protocol at this scale anywhere in the United Kingdom. It provides the empirical foundation upon which national implementation can be designed, costed, and evaluated.

Evaluation Focus

The pilot evaluates two primary questions. First, attribution accuracy: whether AI-generated responses resolve service descriptions, availability, and pricing to the originating entity rather than inferring them from third-party intermediaries. Second, economic impact: whether verified source status reduces dependency on paid discovery platforms and the associated costs that function as an information tax on smaller and public interest organisations.
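One plausible way to operationalise the first question is as the share of AI responses whose cited source is the entity's own verified identifier rather than a third-party intermediary. The sketch below is an assumption about how such a metric might be computed, with invented sample data:

```python
# Sketch of an attribution-accuracy measure: the fraction of
# AI-generated answers whose cited source is the entity's verified
# identifier. Sample responses are invented for illustration.
def attribution_accuracy(responses: list[dict], verified_source: str) -> float:
    if not responses:
        return 0.0
    attributed = sum(1 for r in responses
                     if r.get("cited_source") == verified_source)
    return attributed / len(responses)

sample = [
    {"query": "opening hours", "cited_source": "did:web:example-bakery.org.uk"},
    {"query": "price list",    "cited_source": "third-party-aggregator.example"},
    {"query": "services",      "cited_source": "did:web:example-bakery.org.uk"},
    {"query": "address",       "cited_source": "did:web:example-bakery.org.uk"},
]

print(attribution_accuracy(sample, "did:web:example-bakery.org.uk"))  # 0.75
```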

Early findings from the pilot will be published as a supplementary report. Policy makers and civic data bodies with interest in the pilot’s methodology or findings are invited to contact Younis Group directly through the Search Sciences™ programme.

The pilot also tests the integration of the Verified Source Protocol with existing civic data infrastructure, specifically the Data for London Library. This integration demonstrates the compatibility of protocol-governed discovery with current government data initiatives and provides a model for extending those initiatives to confer genuine epistemic authority rather than mere data accessibility.

VI. Scaling to a National Verifiable Entity Registry

The London pilot is designed as the foundational layer for a national implementation of the Verified Source Protocol. If the pilot demonstrates sustained improvements in attribution accuracy, discovery efficiency, and economic outcomes, as the theoretical framework and early results indicate it will, the architecture will be expanded into a federated National Verifiable Entity Registry.

Architecture of the National Registry

The National Verifiable Entity Registry would integrate borough-level, sector-specific, and national registries into a unified federated infrastructure, enabling consistent entity resolution across all discovery platforms operating in the United Kingdom whilst preserving local governance and administrative sovereignty.

The registry would operate on the following principles.

  • Every registered entity controls its own decentralised identifier and verifiable credentials. Sovereignty over representation is non-negotiable.
  • All entity data is cryptographically signed at source. Provenance is verifiable at every point in the discovery chain.
  • Semantic calibration is maintained against recognised standards including Schema.org and GS1, ensuring interoperability with international discovery infrastructure.
  • The registry is auditable at the entity level, the sector level, and the national level. Governance is transparent and accountable.
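These principles imply a resolution path that can be sketched compactly. In the sketch below, a dictionary stands in for the federated registry and an HMAC check stands in for asymmetric signature verification; every identifier, key, and profile value is hypothetical:

```python
import hashlib
import hmac
import json

# A dictionary stands in for the federated registry; in practice each
# record would live in a borough-level or sector registry and federate
# up to the national layer. All identifiers and keys are hypothetical.
REGISTRY = {
    "did:web:example-bakery.org.uk": {
        "key": b"hypothetical-entity-key",
        "profile": {"@type": "LocalBusiness", "name": "Example Bakery"},
    },
}

def _sign(profile: dict, key: bytes) -> str:
    payload = json.dumps(profile, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# Pre-sign the stored profile, as the entity would at publication time.
_record = REGISTRY["did:web:example-bakery.org.uk"]
_record["signature"] = _sign(_record["profile"], _record["key"])

def resolve(did: str) -> dict:
    """Zero-trust resolution: return a profile only if its signature
    verifies against the key registered for that entity."""
    record = REGISTRY.get(did)
    if record is None:
        raise LookupError(f"unregistered entity: {did}")
    if not hmac.compare_digest(_sign(record["profile"], record["key"]),
                               record["signature"]):
        raise ValueError(f"signature check failed for {did}")
    return record["profile"]

print(resolve("did:web:example-bakery.org.uk")["name"])  # Example Bakery
```

The design choice the sketch illustrates is that resolution fails closed: an unregistered identifier or a failed signature check raises an error rather than falling back to scraped or aggregated data.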

Regulatory and Policy Alignment

The National Verifiable Entity Registry aligns with and extends the United Kingdom Government’s Digital Identity and Attributes Trust Framework. Where that framework addresses personal identity, the registry addresses organisational and economic identity — the complementary infrastructure required for a complete national digital identity system.

The registry also aligns with the UK’s commitments under the AI Opportunities Action Plan. As AI systems take on an increasing role in mediating economic discovery, public information, and civic decision-making, the governance of the information those systems draw upon becomes a matter of national infrastructure policy. The National Verifiable Entity Registry provides the institutional mechanism through which that governance can be exercised.

This white paper formally proposes that the National Verifiable Entity Registry be considered within the scope of the UK Government’s digital identity and AI governance policy frameworks. Younis Group is available to brief relevant departments, the AI Safety Institute, and civic data bodies on the technical architecture, pilot findings, and implementation pathway.

VII. Economic and Civic Implications

The economic implications of the National Verifiable Entity Registry extend well beyond the technology sector. They are relevant to every sector of the UK economy in which organisational representation within AI-mediated discovery systems has a bearing on commercial outcomes, civic participation, or public trust.

Removing the Information Tax

The current discovery ecosystem functions as an information tax. Organisations that are already authoritative must invest continuously in advertising and search engine optimisation to defend their identity against misrepresentation, imitation, and intermediary distortion. This tax is regressive: it falls most heavily on smaller organisations, public interest bodies, and community institutions that lack the resources to compete within pay-to-rank discovery ecosystems.

The Verified Source Protocol removes the structural basis for this tax. When an organisation’s identity is resolved directly from a cryptographically verified source rather than inferred from third-party aggregations, the competitive advantage of advertising spend in identity maintenance is eliminated. Authority is established through governance, not purchased through spend.

Civic and Democratic Implications

The civic implications are equally significant. Public institutions — NHS trusts, local councils, educational bodies, regulatory agencies — are increasingly represented to citizens through AI-mediated discovery systems. The accuracy and authority of those representations are not currently governed by any structural protocol. Citizens receiving AI-generated information about public services, health guidance, or regulatory requirements have no guarantee that the information originates from the institution it purports to represent.

The National Verifiable Entity Registry addresses this directly. When public institutions are registered as verified sovereign entities, their machine-readable representations carry cryptographic authority. AI systems resolving information about those institutions do so from a verified source rather than from an aggregated approximation. The integrity of public information in an AI-mediated environment depends on exactly this infrastructure.

In an AI-mediated information environment, the integrity of public institutions depends on the governance of how those institutions are represented to machines. The National Verifiable Entity Registry is civic infrastructure, not technology infrastructure.

VIII. Implications for AI Governance and the Information Stack

The National Verifiable Entity Registry is not merely a discovery infrastructure proposal. It is an AI governance proposal. The quality of AI outputs in any domain is constrained by the quality of the information those systems were trained on and the quality of the information they retrieve at inference time. Governance of the information stack is therefore governance of AI behaviour.

The Search Sciences™ Research Programme has established through empirical audit that AI systems operating on unverified corpora systematically distort the information they process. The ‘Algorithmic Flattening’ audit paper documents this in detail. The ‘Cost of Flattening’ Economic Brief translates the finding into regulatory and patient safety terms. The ‘Authority Under Adversarial Optimisation’ position paper derives the structural necessity of pre-interpretive governance from first principles.

The National Verifiable Entity Registry is the infrastructure through which these findings are translated into national policy action. It provides the pre-interpretive verification layer that the research programme identifies as a necessary condition of trustworthy AI. Its adoption by the United Kingdom would position the country as the first major economy to operationalise a verified source governance framework at national scale.

The United Kingdom has a genuine first-mover opportunity in national verified source governance. The academic foundations are established. The pilot infrastructure is live. The policy context is aligned. The Search Sciences™ Research Programme provides the theoretical framework. The National Verifiable Entity Registry provides the implementation pathway.

IX. Conclusion: Protocol-Governed Information Sovereignty

The Verified Source Protocol represents a structural shift in discovery governance within the United Kingdom. By enabling organisations to function as sovereign, verifiable sources of their own information, the protocol restores control over representation, attribution, and economic participation.

As artificial intelligence systems increasingly mediate access to knowledge, services, and civic information, protocol-governed information sovereignty becomes a prerequisite for resilience and trust. When discovery systems resolve information directly from verified sources, authority is no longer inferred through optimisation but established through governance.

The National Verifiable Entity Registry provides the institutional mechanism through which this protocol can be realised at national scale. Its architecture is grounded in academic research, aligned with existing UK digital identity policy, validated through a live London-based pilot, and supported by a coherent theoretical and empirical research programme.

Its adoption marks a transition from adversarial discovery to accountable, verifiable, and sovereign information infrastructure — a transition that the United Kingdom is uniquely positioned to lead.

The question before UK policy makers is not whether verified source governance is necessary. The research is published, the pilot is live, and the case is made. The question is whether the United Kingdom will lead this transition or follow it.

Search Sciences™ Research Programme — companion papers:

Younis Group (2026) The Verified Source Protocol and the Future of Information Science: A Research Report. Search Sciences™ Programme. Version 1.0.
Younis Group (2026) Algorithmic Flattening and Lossy Semantic Compression in Large Language Models. Search Sciences™ Programme. Version 1.0.
Younis Group (2026) The Cost of Flattening: Catastrophic Risk in AI-Mediated Healthcare, Finance, and the Erasure of Foundational Knowledge. Search Sciences™ Economic Brief. Version 1.0.
Younis Group (2026) Authority Under Adversarial Optimisation: Why AI-Mediated Knowledge Requires a Verified Source Protocol. Search Sciences™ Programme. Version 1.0.
References

Barclay, C., Markantonakis, K. and Akram, R. (2020) Self-Sovereign Identity Systems. Cambridge University Press.
Greater London Authority (2025) The Data for London Library. London.
Laatikainen, G., Halpin, H. and Paavola, J. (2021) Decentralised identity and trust frameworks. Journal of Information Security, 15(2), pp. 101–118.
UK Government (2025) Digital Identity and Attributes Trust Framework. London.
Younis, M. (2026) Search Sciences™ and Discovery Governance. Younis Group.
Younis Group (2026) The Verified Source Protocol and the Future of Information Science. Search Sciences™ Programme. Version 1.0.