TL;DR
AI-mediated search has changed the unit of competition for Canadian businesses. The question is no longer which pages rank for a given query; it is which sources are reliable enough to support an AI-generated answer. This shift, visible in Google’s AI Overviews and Microsoft Copilot’s explicit citation reporting, compresses user attention and raises the stakes of every site visit. For Canadian small and medium-sized enterprises, which employ nearly half the private-sector workforce but remain in a partial state of digital integration, the gap between being visible and being recommendable has become a direct competitiveness problem. Trust architecture, the integrated system of machine-readable and human-verifiable signals that reduce uncertainty across access, interpretation, evaluation, citation, and conversion, is the practitioner framework that closes this gap. This article explains what trust architecture is, why it matters for Canadian SMEs, what the evidence says about its eight constituent signals, and how organizations can build it systematically using the same integrated discipline that governs every effective digital transformation.
Written by Sarjun Gharib | Founder & Principal Consultant, Knowledge Based Consulting Incorporated (KBC)

Trust Architecture Defined
Trust architecture is the integrated system of technical, evidentiary, and governance signals that reduce uncertainty for both humans and AI retrieval systems when evaluating whether a digital source should be cited or recommended.
In traditional search environments the objective was ranking. In AI-mediated discovery environments the objective is recommendation. Trust architecture describes the conditions that make a source credible enough for an AI system to reference when generating answers.
Organizations that structure their digital presence around verifiable claims, transparent authorship, reliable performance, and documented governance produce signals that retrieval systems can evaluate with lower uncertainty.
The Unit of Competition Has Changed
Something fundamental shifted in search over the past eighteen months, and most Canadian business owners have not yet updated their operating model to account for it.
Traditional search asked a straightforward question: which pages should a user visit to find what they are looking for? The competitive game was about ranking, appearing prominently in a list of links so that a percentage of searchers would click through to your website. Traffic was the currency. Optimize the page, build the authority, rank for the right terms, and the visits will follow.
AI-mediated search asks a different question: which sources are reliable and specific enough to support an answer that the system can generate and present directly to the user? In Google’s AI Overviews and AI Mode, the system uses a technique described in its documentation as “query fan-out,” decomposing a user’s question into multiple related sub-queries across subtopics and data sources, then synthesizing a response with supporting links [1]. Microsoft’s Copilot, Bing AI, and AI-powered search features across major platforms operate on similar retrieval-and-synthesis principles. Google reports that more than 1.5 billion people globally use AI Overviews [3]. That is not a beta program. It is the new default architecture of information discovery.
The economic consequence of this shift is significant and measurable. A Pew Research Center study of real Google browsing behaviour in March 2025 found that when an AI summary appeared, users clicked on traditional search results 8% of the time, compared with 15% when no AI summary appeared. Clicks on links inside the AI summary itself occurred in approximately 1% of visits. Users were also more likely to end their browsing session after seeing a page with an AI summary, at 26% versus 16% without one [2]. This is the attention compression effect: fewer clicks, higher intent, and compressed evaluation. The click that does happen carries more weight than it ever did.
For Canadian SMEs, this is not a niche marketing concern. It is a competitiveness question. Innovation, Science and Economic Development Canada reports in Key Small Business Statistics 2025 that, as of 2024, small businesses employed 5.8 million Canadians, representing 46.6% of the private labour force [5]. These businesses compete in markets increasingly mediated by AI-powered discovery. The strategic question is no longer “How do I rank?” It is “How do I become recommendable?” The framework that answers it is trust architecture.

What Trust Architecture Means
Trust architecture, as I define it in the context of AI search, is the integrated system of machine-readable and human-verifiable signals that reduce uncertainty across five stages: access, interpretation, evaluation, citation, and conversion.
Access is whether your content can be retrieved by crawlers and returned in search. This is not binary. Different AI systems use different crawlers with different functions, which means organizations must treat bot governance as a deliberate policy, not a one-time technical edit. Google’s platform documentation states that there are no additional technical requirements, no special schema, and no unique optimization required specifically for AI Overviews or AI Mode beyond established SEO fundamentals [1]. But that “established” baseline is more demanding than many Canadian SME websites currently meet.
Interpretation is whether your content is structurally legible and semantically precise once retrieved. Retrieval-augmented generation, the academic framework underpinning most modern AI search systems, combines parametric memory from model weights with non-parametric memory from retrieved documents to improve performance on knowledge-intensive tasks and support provenance tracking [22]. When source content is ambiguous, internally inconsistent, or structurally obscure, the system must infer. Inference in language models increases hallucination risk, the generation of fluent but factually unsupported text [23]. Sources that force inference are sources that generate errors. Sources that generate errors get deprioritized.
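The retrieval step described above can be sketched in a few lines. This toy scorer ranks candidate passages by term overlap with the query, standing in for the retrieval half of a RAG pipeline; real systems use dense embeddings rather than keyword matching, and the passages here are invented for illustration. The point it makes is the one in the paragraph: vague content offers few matchable terms, so precise content wins retrieval.

```python
# Toy sketch of the retrieval step in retrieval-augmented generation:
# candidate passages are scored by term overlap with the query and the
# generator would ground its answer in the top passage. Real systems
# use dense embeddings; this keyword version only illustrates the shape.
def score(query: str, passage: str) -> float:
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q)

passages = [
    "trust architecture reduces uncertainty for retrieval systems",
    "our firm delivers world-class synergy",  # vague: few matchable terms
]
query = "what reduces uncertainty for retrieval systems"
best = max(passages, key=lambda p: score(query, p))
print(best)  # the specific, precise passage wins retrieval
```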
Evaluation is the quality assessment that both humans and systems perform. Google’s Search Quality Rater Guidelines describe trust as the most important element within the E-E-A-T framework: Experience, Expertise, Authoritativeness, and Trustworthiness [16]. The experience dimension was added in 2022 to capture firsthand, real-world grounding as a distinct quality signal [17]. Stanford’s Web Credibility research recommends that organizations make it easy to verify accuracy, show a real organization behind the site, and highlight expertise clearly [18]. These are not stylistic suggestions. They are evidence-management requirements.
Citation is whether the source is selected to support AI-generated answers. This is now directly measurable in at least one major ecosystem. Microsoft introduced an AI Performance capability in Bing Webmaster Tools in February 2026 that reports total citations, cited pages, and “grounding queries” used to retrieve content for AI-generated answers across Copilot, Bing’s AI summaries, and select partner settings [4]. This is not speculation or proxy measurement. It is direct citation telemetry, an observable feedback loop that turns trust architecture from a theoretical posture into a managed discipline.
Conversion is whether visitors who do click through can quickly validate your claims and act. In the attention compression environment of AI search, a site visit from a user who has already read an AI summary is a high-intent visit. That person is evaluating, not exploring. Technical failure, slow load times, or content that buries the answer wastes the most valuable visit your organization is likely to receive from that query.
This is not SEO rebranded. Trust architecture integrates communications, governance, technical performance, and evidence quality into a single discipline. The information asymmetry framing from economics is instructive here. Akerlof’s “market for lemons” demonstrates that when buyers cannot reliably distinguish quality, markets degrade as good offerings are crowded out by low-quality ones, and that counteracting institutions and signals can restore market function by reducing uncertainty [21]. AI search is precisely such an institution, operating at web scale, and trust architecture is how organizations provide the signals that allow both humans and machines to discriminate reliably between sources worth recommending and sources that introduce risk.
The Trust Architecture Model
The Trust Architecture Model describes the sequence of signals AI systems evaluate when determining whether a digital source is credible enough to support generated answers.
Layer 1: Access
Content must be retrievable by search crawlers and AI retrieval systems. This includes crawl permissions, indexable pages, and clear site architecture.
Layer 2: Interpretation
Content must be structurally legible and semantically precise so AI systems can understand meaning without relying on inference.
Layer 3: Evaluation
Systems assess credibility through signals such as authorship transparency, external references, and evidence supporting claims.
Layer 4: Citation
Sources that pass credibility thresholds may be selected to support AI-generated answers across platforms such as Google’s AI Overviews, Microsoft Copilot, and other retrieval systems.
Layer 5: Conversion
When users click through after reading an AI answer, the website must quickly confirm credibility through performance, clarity, and transparency.

The Canadian Context: A National Competitiveness Question
Canadian businesses are in a pivotal and underappreciated position relative to this shift, and the data is specific enough to be actionable.
The Canadian Federation of Independent Business reported in September 2025 that 92% of small businesses use digital tools, but only 10% have fully integrated them across operations [7]. In trust architecture terms, that 82-point gap is the difference between being online and being recommendable. A business can have a website, a Google Business Profile, and active social media accounts and still present an incoherent, unverifiable picture to a retrieval-and-synthesis system deciding whether this organization's information is safe to cite in a user’s response. Incoherence is not a content problem. It is an integration problem, and integration is precisely where Canadian SMEs are weakest.
AI adoption is rising, but unevenly and with persistent structural barriers. Statistics Canada reports that in the second quarter of 2025, 12.2% of Canadian businesses used AI to produce goods or deliver services over the preceding year, up from 6.1% in the same quarter of 2024 [8]. That near-doubling in twelve months reflects genuine momentum. But among businesses not planning AI adoption, common barriers included lack of knowledge about AI capabilities, irrelevance to their current operations, and privacy and security concerns [9]. These barriers are not irrational. They reflect a real gap in knowledge infrastructure, and they map directly onto the trust architecture problem: businesses that do not understand AI systems cannot systematically manage how those systems perceive and cite them.
The OECD’s discussion paper on AI adoption by SMEs, prepared for the G7 and published in December 2025, documents persistent gaps between SMEs and larger firms in AI diffusion and introduces a taxonomy of SME AI adopters by maturity and scope [10]. This taxonomy matters for trust architecture because the integration gap that limits internal AI adoption also limits external AI-mediated discoverability. Both problems have the same structural root: tools without system thinking, visibility without coherence, and digital presence without digital governance.
The Canadian legal landscape reinforces why trust architecture cannot be treated as optional. The Personal Information Protection and Electronic Documents Act applies to private-sector organizations across Canada engaged in commercial activity and includes requirements for how personal information is collected, used, and disclosed [36]. The Office of the Privacy Commissioner of Canada has confirmed that mandatory breach reporting obligations apply to organizations of all sizes [37]. In Quebec, the Act respecting the protection of personal information in the private sector requires privacy impact assessments for projects involving information systems or electronic service delivery that process personal information [38]. Canada’s Anti-Spam Legislation requires consent, clear sender identification, and an unsubscribe mechanism in commercial electronic messages [39]. These obligations are not separate from trust architecture. They are its legal expression, the governance layer that makes an organization credible not just to human buyers, but to the systems those buyers increasingly rely on to filter and recommend.
I will state this as directly as I can, because it is a claim I am prepared to defend: Canadian SMEs that fail to build trust architecture by 2027 will be functionally invisible to AI-mediated discovery, not because they lack a website, but because their digital presence does not reduce uncertainty for retrieval systems. Being online and being recommendable are no longer the same condition. And in a country where small businesses employ nearly half the private-sector workforce, this gap has national productivity consequences, not just marketing consequences.

The Eight Signals That Constitute Trust Architecture
Based on a synthesis of primary platform documentation, peer-reviewed retrieval and hallucination research, and governance standards applicable in Canada, I identify eight signals that constitute trust architecture in the AI search era. None of these signals is novel in isolation. Their integration into a deliberate, governed system is what distinguishes organizations that are recommended from organizations that are merely indexed.
The first signal is the verifiability of claims. AI retrieval systems prefer sources that make it easy to confirm that what they say is accurate. Google’s people-first content guidance frames this as content created primarily to benefit people, correlating with E-E-A-T signals [15]. Stanford’s credibility research recommends providing citations and references and showing a real organization behind the site [18]. In economic terms, this addresses information asymmetry directly: when buyers cannot distinguish quality, markets degrade [21]. For a Canadian consultancy or professional services firm, verifiability means that every high-intent page, service, outcome, credential, and methodology answers four questions explicitly: who is responsible for this claim, what is being asserted, what evidence supports it, and what assumptions or limitations apply? These are not copywriting considerations. They are evidence of architectural decisions.
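The four questions above can be treated as a data structure rather than a copywriting prompt. The sketch below models them as a structured claim record; the class and field names are hypothetical, invented for illustration, not any standard or platform requirement.

```python
from dataclasses import dataclass, field

# Illustrative sketch: the four verifiability questions modeled as a
# structured claim record. Names are hypothetical, not a standard.
@dataclass
class ClaimRecord:
    responsible_party: str                           # who is responsible
    assertion: str                                   # what is asserted
    evidence: list = field(default_factory=list)     # what supports it
    limitations: list = field(default_factory=list)  # assumptions/limits

    def is_verifiable(self) -> bool:
        # A claim is auditable only when all four questions are answered.
        return bool(self.responsible_party and self.assertion
                    and self.evidence and self.limitations)

claim = ClaimRecord(
    responsible_party="Jane Doe, CPA",  # hypothetical example
    assertion="Client onboarding time reduced from 14 days to 3",
    evidence=["Internal CRM report, Q3 2025"],
    limitations=["Single engagement; results vary by process maturity"],
)
print(claim.is_verifiable())  # True
```

A page whose claims could populate such a record without blank fields is a page a retrieval system can cite with low uncertainty.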
The second signal is structural legibility. Retrieval-augmented generation works better when source content is unambiguous. The RAG framework highlights provenance and accuracy as core capabilities [22], and the hallucination literature documents that models generate fluent but inaccurate text when they must infer content not present in their sources [23]. The practitioner implication is to write to be extracted, not to impress. Definitions should be explicit and standalone. Claims should precede evidence, not follow it. Conclusions should appear in the first paragraph, not the last.
The third signal is authorship and expertise. The E-E-A-T framework places trust at the centre and demands evidence of experience and expertise as supporting signals [16][17]. Information systems research finds that trust shapes online behavioural intent alongside perceived usefulness and ease of use [19]. In electronic markets, trust-building mechanisms produce measurable price premiums [20]. For Canadian professional services firms, this means named authors with verifiable credentials on every piece of substantive content, author biographies that are consistent across platforms, and content that demonstrates firsthand experience rather than general assertion.
The fourth signal is technical performance. In the AI search era, Core Web Vitals are more than a ranking factor; they determine whether the rare, high-intent visit succeeds. Google’s PageSpeed Insights assesses pages at the 75th percentile of field data, and all three metrics must meet the “Good” threshold to pass [28]. Recommended targets as documented on web.dev are an LCP of 2.5 seconds or less, an INP of 200 milliseconds or less, and a CLS of 0.1 or less [29]. Search Console’s Core Web Vitals report uses actual user data and groups URLs by status and metric type, enabling prioritized remediation [30]. The logic in the attention compression environment is straightforward: fewer users click through when AI summaries appear, so the users who do click are later-stage evaluators. They have already read the AI’s synthesis and are now confirming, comparing, or converting. Technical friction at that stage wastes the highest-value visit your organization will receive from that query.
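The assessment logic just described can be sketched directly: judge each metric at the 75th percentile of per-visit field measurements against the documented "Good" thresholds. The sample data below is invented; real assessment uses CrUX field data, not this helper.

```python
# Sketch of Core Web Vitals assessment: each metric is judged at the
# 75th percentile of field samples against the documented "Good"
# thresholds (LCP <= 2.5 s, INP <= 200 ms, CLS <= 0.1).
from statistics import quantiles

GOOD_THRESHOLDS = {"lcp_s": 2.5, "inp_ms": 200.0, "cls": 0.1}

def p75(samples):
    # 75th percentile, matching how field data is summarized.
    return quantiles(sorted(samples), n=4)[2]

def assess(field_data):
    # field_data: metric name -> list of per-visit measurements.
    return {m: p75(v) <= GOOD_THRESHOLDS[m] for m, v in field_data.items()}

report = assess({
    "lcp_s": [1.8, 2.1, 2.2, 2.3],    # invented sample data
    "inp_ms": [120, 150, 90, 400],    # one slow interaction drags p75 up
    "cls": [0.02, 0.03, 0.05, 0.08],
})
print(report)  # {'lcp_s': True, 'inp_ms': False, 'cls': True}
```

Note how a single slow interaction can fail the INP check at the 75th percentile: the standard is deliberately sensitive to the experience of slower visits, not the average.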
The fifth signal is accessibility as machine interpretability. Web Content Accessibility Guidelines 2.2 is now both a W3C Recommendation and an ISO standard (ISO/IEC 40500:2025), expanding its formal authority across jurisdictions [31][32]. In Canada, the Accessible Canada Act establishes a barrier-free Canada goal by January 1, 2040, with information and communication technologies explicitly among the priority areas [33][34]. The Canadian Human Rights Commission specifies that regulated entities must publish accessibility plans, feedback process descriptions, and progress reports on their primary digital platform [35]. The underappreciated insight is that accessibility is also machine interpretability. Content that is properly labelled, logically structured, and semantically hierarchical is exactly the kind of content that AI retrieval systems can parse reliably. The same attributes that make a website usable by a screen reader make it more extractable by a language model. Accessibility investment therefore carries dual returns: legal compliance and improved AI retrieval performance.
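The dual-return point is easy to demonstrate. The minimal extractor below, standing in for a retrieval system's parser, recovers a page outline from semantic heading tags but nothing from visually equivalent unlabeled divs. The HTML snippets are invented examples.

```python
# Sketch: the same page structure expressed semantically vs. as
# unlabeled divs. A minimal extractor (standing in for a retrieval
# system's parser) recovers the outline only from the semantic version.
from html.parser import HTMLParser

SEMANTIC = "<h1>Services</h1><h2>Tax Planning</h2><h2>Bookkeeping</h2>"
DIV_SOUP = "<div class='big'>Services</div><div>Tax Planning</div>"

class OutlineExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.outline, self._tag = [], None
    def handle_starttag(self, tag, attrs):
        if tag in {"h1", "h2", "h3"}:
            self._tag = tag
    def handle_data(self, data):
        if self._tag:
            self.outline.append((self._tag, data))
            self._tag = None

def outline(html):
    parser = OutlineExtractor()
    parser.feed(html)
    return parser.outline

print(outline(SEMANTIC))  # [('h1', 'Services'), ('h2', 'Tax Planning'), ('h2', 'Bookkeeping')]
print(outline(DIV_SOUP))  # [] -- structure exists visually but not machine-readably
```

The same hierarchy a screen reader navigates is the hierarchy a language model extracts; styling alone carries no machine-readable structure.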
The sixth signal is bot governance as deliberate policy. One of the most operationally important findings in the current AI search landscape is that “AI bots” are not a single category requiring a single response. OpenAI operates distinct crawlers for distinct purposes: OAI-SearchBot for search inclusion, GPTBot for training data, and ChatGPT-User for user-initiated actions [24]. Separate robots.txt controls exist for OAI-SearchBot and GPTBot, meaning an organization can permit search inclusion while restricting training use. OpenAI’s publisher documentation provides guidance on tracking ChatGPT-originated referrals via the utm_source=chatgpt.com parameter, enabling direct measurement of AI-assisted recommendation traffic [25]. Anthropic states that its bots respect standard robots.txt directives and anti-circumvention technologies [26]. Perplexity states that PerplexityBot respects robots.txt but may still index metadata, including domain, headline, and a brief factual summary, regardless of directives [27]. Google’s AI features documentation confirms that standard SEO technical requirements apply [1]. Organizations must therefore define, in plain language, their position on three distinct questions: do we consent to AI search inclusion, do we consent to training data use, and do we consent to user-initiated retrieval? The technical directives in robots.txt should reflect deliberate policy, not default CMS configuration.
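As an illustration of policy-as-configuration, the sketch below uses Python's standard robots.txt parser to verify a hypothetical policy that permits search inclusion while refusing training use. The crawler names are OpenAI's documented user agents [24]; the policy shown is an example of differentiated consent, not a recommendation.

```python
# Sketch: a robots.txt expressing two of the three consent decisions
# discussed above -- search inclusion allowed, training use refused.
# The policy itself is an example, not a recommendation.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("OAI-SearchBot", "/services"))  # True: search inclusion
print(parser.can_fetch("GPTBot", "/services"))         # False: no training use
```

Running this kind of check against your live robots.txt, for each crawler you have a position on, turns the annual governance review into a repeatable test rather than a manual inspection.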
The seventh signal is measurement as credibility infrastructure. Trust architecture is not a one-time build. It is a compounding system that improves through observable feedback. Microsoft’s AI Performance capability in Bing Webmaster Tools creates a directly observable feedback loop: organizations can now see which pages are cited, for which grounding queries, and at what frequency [4]. Google’s branded queries filter in Search Console, introduced in November 2025, enables separation of brand-driven from discovery-driven demand [40], which matters because brand familiarity reduces uncertainty in AI-mediated selection, potentially enabling direct navigation that bypasses some intermediation effects of AI summaries. Where AI assistant referrals are trackable through available parameters, they should be treated as a distinct channel in analytics, not lumped into generic referral traffic [25]. The measurement objective in the AI search era is not volume. It is quality: increasing the proportion of sessions that result in citation, trust confirmation, and conversion.
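Segmenting AI-assistant referrals as a distinct channel can be as simple as inspecting the documented utm_source parameter on landing URLs [25]. The sketch below is a minimal classifier; the channel labels and example URLs are illustrative, not an analytics standard.

```python
# Sketch: classifying AI-assisted visits as a distinct channel by
# inspecting the documented utm_source=chatgpt.com parameter.
# Channel labels here are illustrative, not an analytics standard.
from urllib.parse import urlparse, parse_qs

AI_SOURCES = {"chatgpt.com"}  # extend as other platforms document parameters

def classify(landing_url: str) -> str:
    qs = parse_qs(urlparse(landing_url).query)
    source = qs.get("utm_source", [""])[0]
    return "ai_assistant" if source in AI_SOURCES else "other"

print(classify("https://example.ca/services?utm_source=chatgpt.com"))  # ai_assistant
print(classify("https://example.ca/services?utm_source=newsletter"))   # other
```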
The eighth signal is governance transparency. ISO/IEC 42001:2023 describes an AI management system providing integrated guidance for responsible AI use across the project lifecycle [14]. NIST’s AI Risk Management Framework provides a structured approach to managing AI-related risks and promoting trustworthy development and use [12]. Its Generative AI Profile extends this to generative systems, describing risk categories and mitigation actions [13]. The OECD AI Principles, updated in 2024, frame trustworthy AI as a values-based and practical standard for innovation [11]. Canadian organizations that publish explicit governance artifacts, current privacy policies, AI use disclosures where applicable, accessibility plans where required, documented methodologies, and clearly attributed content create signals that both people and systems can verify. These artifacts signal that the organization operates to a governable standard. And “operates to a governable standard” is precisely what retrieval-and-synthesis systems are attempting to assess when they decide which sources to support.

What Being Recommendable Looks Like in Practice
Across more than 50 digital maturity assessments conducted with Canadian SMEs through the Canada Digital Adoption Program, a consistent pattern emerges. Businesses that score highest on digital integration share four properties that map directly onto trust architecture: their claims are specific and attributed, their processes are documented and accessible, their evidence is verifiable and current, and their governance artifacts are published and maintained.
In practical terms for a Canadian professional services firm (an accounting practice, a consulting firm, an immigration legal service, a real estate brokerage), being recommendable in AI search looks like this. The website’s about page names specific individuals with specific credentials, not a generic team bio. The services pages describe actual methods, not aspirational outcomes. The case studies or client results include sufficient specificity that a retrieval system can summarize them accurately, not just “we improved their operations” but “we reduced their client onboarding time from fourteen days to three through process documentation and CRM integration.” The blog content is authored by named individuals with verifiable expertise, cites external sources, and is dated with clear version discipline. The privacy policy is current, accurate, and explicitly describes data handling. The technical performance meets Core Web Vitals thresholds. The robots.txt file reflects a deliberate governance decision, not a default configuration.
None of these properties requires a large budget. They require a clear system and the discipline to maintain it. The gap CFIB documents between 92% tool adoption and 10% full integration is a discipline gap, not a resource gap [7]. The businesses that close it will not just rank differently. They will be recommended differently. And in an AI-mediated discovery environment, that distinction is the one that determines whether your ideal client finds you or finds your competitor first.
The Eight Signals of Trust Architecture
Across platform documentation, retrieval research, and governance standards, eight signals consistently determine whether a source is credible enough to support AI-generated answers.
Verifiable claims
Structural clarity
Named authorship and expertise
Technical performance
Accessibility and machine interpretability
Bot governance policies
Measurement and citation tracking
Governance transparency
These signals reduce uncertainty for both users and AI systems evaluating digital sources.

Recommendations: Building Trust Architecture Systematically
The recommendations below are sequenced as a system, not a checklist, because the returns from trust architecture compound. Each element strengthens the others, and the sequence matters.
Start by making your organization verifiable. Every high-intent page should answer, within the first two paragraphs, who is responsible, what is being claimed, what evidence supports it, and what limitations apply. This is an evidence architecture decision, and it is the foundation on which every other trust signal rests. Google’s E-E-A-T framing and Stanford’s credibility research converge on this point from different directions [15][18].
Then publish content that survives extraction. Write definitions that are standalone. Lead with conclusions. Cite external sources where feasible and link to those sources explicitly. Date all time-sensitive material and maintain version discipline so that AI systems are never retrieving superseded content. This is compatible with Google’s E-E-A-T guidance, and it directly addresses the hallucination risk identified in the AI research literature [22][23].
Define and implement your bot governance policy before the next platform update makes your current default a liability. Determine your position on search inclusion, training inclusion, and user-initiated retrieval. Implement the appropriate technical directives. Review them at least annually [24][25][26][27]. This is not a developer task. It is a governance task that happens to have technical implementation.
Achieve and maintain Core Web Vitals performance. Use Search Console’s field data reports, prioritize remediation based on actual user impact, and treat performance as a trust signal [28][29][30]. In the attention compression environment of AI search, each high-intent click is worth more than it was in a traditional search environment. Do not waste it on load time.
Run measurement as a credibility feedback loop. Implement Bing Webmaster Tools and activate AI Performance reporting [4]. Segment AI-originated traffic using available tracking parameters [25]. Apply Google’s branded queries filter to understand whether non-branded discovery demand is growing [40]. Treat the data not as a traffic dashboard but as a credibility audit. If you are not being cited for the queries that define your core expertise, the trust architecture signals in that domain are insufficient.
Align your digital operations with Canadian privacy and communications obligations. PIPEDA applies to your business regardless of size [36]. Breach reporting obligations are real and apply to small businesses [37]. Quebec’s privacy impact assessment requirements apply to relevant information system projects [38]. CASL’s consent requirements apply to your outbound communications [39]. These are not compliance burdens that compete with trust architecture work. They are the legal expression of the same discipline.
Finally, and most importantly: integrate rather than accumulate. The CFIB’s finding that 92% of Canadian SMEs use digital tools but only 10% have fully integrated them [7] is the root cause of most trust architecture failures. Individual tools produce inconsistent signals. An integrated system produces coherent evidence. An organization that integrates governance, measurement, performance, and evidence publishing will consistently outperform an organization that buys tools without transforming the system.

A Prediction for the Canadian Market
By 2027, the primary competitive differentiator for Canadian professional services firms in AI-mediated search will not be domain authority, keyword density, or content volume. It will be the coherence and verifiability of their evidence systems.
The organizations that win will be those whose websites function as evidence bases rather than promotional brochures. Their claims will be specific, attributed, and supported. Their team pages will link to verifiable credentials. Their methodology pages will describe actual methods. Their client outcomes will be documented with sufficient specificity that a retrieval system can summarize them accurately. Their technical performance will be reliable enough that the high-intent visitor who clicks through after reading an AI summary finds what they were promised.
For Canadian SMEs, this is both a challenge and a genuine opportunity. The trust architecture gap is large, as the CFIB, Statistics Canada, and ISED data confirm, but it is a structural gap, not a talent gap. The businesses that address it systematically will not just improve their search performance. They will build digital infrastructure that compounds: better evidence produces better citations, better citations produce higher-intent visits, higher-intent visits produce better outcomes to document, and better-documented outcomes produce more verifiable evidence to publish. That compounding loop is the architecture of trust in the AI search era.
It is, at its core, a digital transformation problem. And digital transformation, executed with precision and governed with accountability, is what makes Canadian businesses not just visible but worth recommending.
Sarjun Gharib is the Founder and Principal Consultant at Knowledge Based Consulting Incorporated (KBC), a digital business consulting firm headquartered in Ottawa, Ontario. He has led digital maturity assessments and transformation engagements for more than 50 Canadian SMEs through the Canada Digital Adoption Program and serves as a senior project consultant with the Government of Canada. His consulting frameworks, including the Digital Business Playbook and the Digital Business Operating System (DBOS), are designed to help Canadian organizations build the integrated digital infrastructure required for performance in complex, AI-mediated environments. He holds a Bachelor of Arts in Economics from Carleton University.
#HITL: This article was researched and written by Sarjun Gharib with AI-assisted synthesis. All claims, frameworks, and interpretations are human-authored and editorially accountable.
Key Takeaways
AI search systems increasingly synthesize answers rather than directing users to lists of links.
This shifts the competitive objective from ranking pages to becoming sources credible enough for AI systems to cite.
Organizations that publish verifiable evidence, maintain strong technical performance, and operate with transparent governance create digital environments that retrieval systems are more likely to trust.
Trust architecture provides the framework for achieving this.
Frequently Asked Questions
What is AI-mediated discovery?
AI-mediated discovery refers to search systems in which large language models synthesize answers from multiple retrieved sources rather than presenting a list of links. Examples include Google AI Overviews and Microsoft Copilot.
What makes a website trustworthy for AI search?
AI systems prefer sources that provide clear authorship, verifiable evidence, structured content, and reliable technical performance. These signals collectively create trust architecture, which reduces uncertainty when AI systems select sources to cite.
References
[1] Google Search Central. AI features and your website: AI Overviews and AI Mode, query fan-out, eligibility, and guidance. https://developers.google.com/search/docs/appearance/ai-features
[2] Pew Research Center. Google users are less likely to click on links when an AI summary appears in the results. July 22, 2025 (analysis of March 2025 browsing activity). https://www.pewresearch.org
[3] Google. AI Overviews and AI Mode in Search. PDF. Scale and quality framing. Published 2025. https://search.google/pdf/google-about-AI-overviews-AI-Mode.pdf
[4] Microsoft. Introducing AI Performance in Bing Webmaster Tools (Public Preview). February 10, 2026. https://blogs.bing.com/webmaster/February-2026/Introducing-AI-Performance-in-Bing-Webmaster-Tools-Public-Preview
[5] Innovation, Science and Economic Development Canada. Key Small Business Statistics 2025. Published January 29, 2026. https://ised-isde.canada.ca
[6] Statistics Canada. Analysis on small businesses in Canada, fourth quarter of 2025. Published December 11, 2025. https://www150.statcan.gc.ca
[7] Canadian Federation of Independent Business. Digital adoption including AI paying off for SMEs, but gaps remain. September 29, 2025. https://www.cfib-fcei.ca
[8] Statistics Canada. Analysis on artificial intelligence use by businesses in Canada, second quarter of 2025. Published June 16, 2025. https://www150.statcan.gc.ca
[9] Statistics Canada. Analysis on expected use of artificial intelligence by businesses in Canada, third quarter of 2025. Published September 11, 2025. https://www150.statcan.gc.ca
[10] OECD. AI adoption by small and medium-sized enterprises: OECD discussion paper for the G7. Published December 9, 2025. https://www.oecd.org
[11] OECD. OECD AI Principles. Adopted 2019, updated 2024. https://www.oecd.org/en/topics/sub-issues/ai-principles.html
[12] National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. 2023. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
[13] National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. NIST AI 600-1. 2024. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
[14] International Organization for Standardization. ISO/IEC 42001:2023 Artificial intelligence management system. https://www.iso.org/standard/42001
[15] Google Search Central. Creating helpful, reliable, people-first content. https://developers.google.com/search/docs/fundamentals/creating-helpful-content
[16] Google. Search Quality Evaluator Guidelines. September 2025. https://guidelines.raterhub.com/searchqualityevaluatorguidelines.pdf
[17] Google Search Central Blog. Our latest update to the quality rater guidelines: E-E-A-T gets an extra E for Experience. December 15, 2022. https://developers.google.com/search/blog/2022/12/google-raters-guidelines-e-e-a-t
[18] Stanford Web Credibility Project. Stanford Guidelines for Web Credibility. https://credibility.stanford.edu/guidelines/index.html
[19] Gefen, D., Karahanna, E., Straub, D. Trust and TAM in online shopping: An integrated model. MIS Quarterly. 2003.
[20] Ba, S., Pavlou, P. Evidence of the Effect of Trust Building Technology in Electronic Markets: Price Premiums and Buyer Behaviour. MIS Quarterly. 2002.
[21] Akerlof, G. The Market for "Lemons": Quality Uncertainty and the Market Mechanism. The Quarterly Journal of Economics. 1970.
[22] Lewis, P., et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. arXiv:2005.11401. 2020.
[23] Ji, Z., et al. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys. 2023. DOI: 10.1145/3571730.
[24] OpenAI. Overview of OpenAI Crawlers (OAI-SearchBot, GPTBot, ChatGPT-User; robots.txt controls). https://developers.openai.com/api/docs/bots/
[25] OpenAI. Publishers and Developers FAQ. Referral tracking via utm_source=chatgpt.com. https://help.openai.com/en/articles/12627856-publishers-and-developers-faq
[26] Anthropic. Does Anthropic crawl data from the web, and how can site owners block the crawler? https://privacy.claude.com/en/articles/8896518
[27] Perplexity. How does Perplexity follow robots.txt? https://www.perplexity.ai/help-center/en/articles/10354969
[28] Google. PageSpeed Insights: About. Core Web Vitals assessment logic and 75th percentile methodology. https://developers.google.com/speed/docs/insights/v5/about
[29] web.dev (Google). Web Vitals. LCP, INP, CLS definitions and recommended thresholds. https://web.dev/articles/vitals
[30] Google Search Console Help. Core Web Vitals report. Field data basis and URL grouping logic. https://support.google.com/webmasters/answer/9205520
[31] World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.2. W3C Recommendation. https://www.w3.org/TR/WCAG22/
[32] World Wide Web Consortium. WCAG 2.2 Approved as an ISO Standard (ISO/IEC 40500:2025). October 21, 2025.
[33] Justice Laws Website (Canada). Accessible Canada Act. Purpose section. Barrier-free Canada by January 1, 2040. https://laws-lois.justice.gc.ca/eng/acts/A-0.6/section-5.html
[34] Government of Canada. About an Accessible Canada. Overview of ACA requirements. Updated January 7, 2026. https://www.canada.ca/en/employment-social-development/programs/accessible-canada.html
[35] Canadian Human Rights Commission. Accessibility publication requirements. https://www.chrc-ccdp.gc.ca/organizations/accessible-canada-act-responsibilities
[36] Office of the Privacy Commissioner of Canada. PIPEDA requirements in brief. https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda_brief/
[37] Office of the Privacy Commissioner of Canada. Mandatory reporting of breaches of security safeguards. Applies to organizations of all sizes. https://www.priv.gc.ca
[38] Québec. Act respecting the protection of personal information in the private sector (CQLR c P-39.1). Section 3.3. Privacy impact assessment requirement. https://www.legisquebec.gouv.qc.ca
[39] Canadian Radio-television and Telecommunications Commission. Frequently Asked Questions about Canada’s Anti-Spam Legislation (CASL). Updated February 26, 2026. https://crtc.gc.ca/eng/com500/faq500.htm
[40] Google Search Central Blog. Introducing the branded queries filter in Search Console. November 20, 2025. https://developers.google.com/search/blog/2025/11/search-console-branded-filter