### 6.2 AI & Cloud Processing

AI is part of how I think, draft, and refine architectures. It is also a live attack surface. This section refines the general “no raw cloud processing” rule in this policy and defines **when and how** AI tools (including Maple, PPQ, and Routstr) may be used with client-derived material.

---

#### 6.2.1 Baseline principle

* **No raw uploads.** I do **not** upload full, raw client artifacts (Dossiers, legal documents, spreadsheets, ID scans, etc.) to external AI services.
* **Abstraction only.** Client-derived information is processed by external AI tools only in **abstracted form**, under the constraints in this section.
* **Configurable scope.** Especially for high-OPSEC / high-threat clients, the allowed AI usage (e.g. **local only**, or **local + Maple**) is explicitly agreed in the SOW / Dossier notes and treated as a hard engagement boundary, not a soft preference.

---

#### 6.2.2 AI provider classes

To avoid confusion with client **threat tiers**, AI tools are grouped into **provider classes**:

**Class 0 – Local / Self-Hosted Models**

* Models running on my own hardware or on client-controlled infrastructure.
* Prompts and outputs **do not leave** the local or explicitly agreed environment.
* Used for:
  * Highly sensitive or uniquely identifying scenarios.
  * Anything where a leak would be catastrophic, not merely embarrassing.
  * First-pass structuring and pattern exploration when we cannot risk external exposure.

---

**Class 1 – Privacy-Oriented Enclave / Encrypted Services (e.g. Maple)**

* Tools like **Maple AI**, which *state* that:
  * Conversations are end-to-end encrypted,
  * Queries are encrypted locally with a user key and only decrypted inside a secure enclave in the cloud, and
  * Data is not used for training, and the service is designed around confidentiality. ([Maple AI][1])
* Independent reviews describe Maple as a privacy-first assistant built on E2EE, secure enclaves, and open-source transparency.
([GreyCoder][2])
* These are **their claims**, not my audit. I treat them as **stronger-than-average privacy guarantees**, not as an absolute shield.
* Secure enclaves (e.g. Nitro-class confidential computing) still depend on:
  * Cloud hardware,
  * Correct attestation, configuration, and patching,
  * Absence of implementation bugs or side channels. ([OneUptime][3])
* Used for:
  * Semi-abstracted client work where we need high-quality reasoning,
  * Only after stripping names, precise identifiers, and exact balances.

---

**Class 2 – Routing / Marketplace Services (e.g. PPQ, Routstr)**

* **PPQ (PayPerQ)**:
  * Pay-per-prompt access to **hundreds of models** (OpenAI, Anthropic, Mistral, Gemini, Perplexity, Meta, etc.). ([PayPerQ][4])
  * No subscription or registration required; supports Bitcoin / Lightning / Monero and other cryptocurrencies for pay-as-you-go use. ([PayPerQ][4])
  * Provides an OpenAI-compatible API and features like **Deep Research**, which combines web search, web scraping (Firecrawl), and AI synthesis to create reports. ([PayPerQ][5])
  * Functionally: a wrapper and broker in front of many upstream LLM providers. ([DEVONtechnologies Community][6])
* **Routstr**:
  * A decentralized LLM routing marketplace built on **Nostr + Bitcoin**, designed for permissionless, censorship-resistant AI inference. ([Routstr Documentation][7])
  * Uses a reverse proxy (Routstr Core) that sits in front of OpenAI-compatible APIs and handles pay-per-request billing, combining Nostr for discovery with Cashu/Bitcoin for private micropayments. ([GitHub][8])
* In both cases:
  * Prompts can be visible to the **broker layer** (PPQ, Routstr node operators) *and* to **upstream model providers** (OpenAI, Anthropic, etc.).
  * Some Routstr nodes and upstreams may be run by unknown or adversarial actors; I assume **no guarantee** of non-logging or benevolence.

**Policy:** Class 2 tools are treated as **untrusted with sensitive content** and used only for **fully abstracted, non-identifying prompts**.
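The class boundaries above reduce to a simple gate: a tool may be used only if its class does not exceed the ceiling agreed for the engagement. A minimal sketch in Python — the `ProviderClass` names and the `call_allowed` helper are illustrative, not part of any real tool:

```python
from enum import IntEnum

class ProviderClass(IntEnum):
    """Provider classes from 6.2.2 (illustrative labels)."""
    LOCAL = 0    # Class 0: local / self-hosted models
    ENCLAVE = 1  # Class 1: privacy-oriented enclave services (e.g. Maple)
    BROKER = 2   # Class 2: routing / marketplace services (e.g. PPQ, Routstr)

def call_allowed(engagement_ceiling: ProviderClass, tool: ProviderClass) -> bool:
    """A tool may be used only if its class does not exceed the
    ceiling recorded in the SOW / Dossier for this engagement."""
    return tool <= engagement_ceiling

# A "Class 0 + Class 1 only (no Class 2)" engagement:
ceiling = ProviderClass.ENCLAVE
assert call_allowed(ceiling, ProviderClass.LOCAL)       # Class 0: allowed
assert call_allowed(ceiling, ProviderClass.ENCLAVE)     # Class 1: allowed
assert not call_allowed(ceiling, ProviderClass.BROKER)  # Class 2: blocked
```

Because the classes are ordered, tightening a client's posture mid-engagement is just lowering the ceiling; every subsequent call re-checks against it.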
---

#### 6.2.3 Data that is **never** sent to external AI

The following **never** leave local / self-hosted environments, regardless of provider class:

* Seed phrases, private keys, hardware wallet recovery material, or any direct recovery secrets.
* Wallet, exchange, system, or password-manager credentials.
* Full combinations of:
  * Legal name,
  * Exact physical address,
  * Exact asset figures or specific KYC trails.
* Scans or photos of passports, government IDs, tax returns, or detailed legal filings.
* Unredacted banking or card details (account numbers, IBAN/SWIFT, PANs, etc.).
* **Full client documents** (Dossiers, spreadsheets, internal memos, contracts, org charts, medical records, etc.) in their original, unredacted form. Only abstracted excerpts or synthetic examples are allowed.
* Private or access-controlled URLs:
  * Internal dashboards, private Notion/docs, or anything behind auth,
  * Especially in **AI web-research features** such as PPQ’s **Deep Research**, which combines SERP exploration, Firecrawl web scraping, and AI synthesis. ([PayPerQ][5])
* Any narrowly unique fact pattern that would clearly identify you, your dependents, or your organization **without heavy abstraction**.

If such data appears in drafts or notes prepared for AI use, it is removed, generalized, or replaced with placeholders **before** any external AI call.

---

#### 6.2.4 Preparing client content for AI tools

When using Maple (Class 1) or PPQ / Routstr (Class 2), *and only where allowed for the engagement*:

**Abstraction first**

* Real-world entities → roles and labels:
  * “Founder A”, “Partner 2”, “Guardian”, “Child 1”, “Org Alpha”, “Local Cell A”.
* Jurisdictions → categories/regions:
  * “EU member state”, “US West Coast”, “Latin America (high-risk)”, “GCC state”, not named street addresses or small towns.
* Balances → tiers:
  * “low 7-figure BTC treasury”, “mid 6-figure fiat exposure”, “high 5-figure annual local budget”, not exact BTC or USD numbers.
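Both substitutions above can be mechanized so nothing reaches a Class 1/2 tool by hand. A minimal sketch, assuming a per-engagement alias table; the names and the `balance_tier` helper are hypothetical examples, not real client data or a real tool:

```python
import math

# Hypothetical per-engagement alias table (real entities → roles/labels).
ALIASES = {
    "Alice Example": "Founder A",
    "Bob Example": "Partner 2",
    "Example Holdings LLC": "Org Alpha",
}

def abstract_entities(text: str) -> str:
    """Replace real-world entities with role labels before any external AI call."""
    for real, label in ALIASES.items():
        text = text.replace(real, label)
    return text

def balance_tier(amount: float) -> str:
    """Map an exact figure to a coarse tier label ('balances → tiers')."""
    digits = int(math.log10(amount)) + 1   # e.g. 1_300_000 → 7 digits
    leading = amount / 10 ** (digits - 1)  # leading digit(s), e.g. 1.3
    band = "low" if leading < 2 else "mid" if leading < 7 else "high"
    return f"{band} {digits}-figure"

prompt = abstract_entities("Alice Example controls the treasury of Example Holdings LLC.")
print(prompt)                   # → Founder A controls the treasury of Org Alpha.
print(balance_tier(1_300_000))  # → low 7-figure
print(balance_tier(450_000))    # → mid 6-figure
```

The exact tier boundaries are a judgment call per engagement; the point is that the mapping runs locally and is deterministic, so the same entity always gets the same label across prompts.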
**Structure over detail**

* Prompts are framed around architecture and trade-offs, not narrative biography:
  * Example: “Design a governance and treasury split for a BTC-anchored family office across two medium-risk jurisdictions with an inheritance guard and a dedicated charitable slice.”
  * **Not**: “X in [city] and Y in [city] hold N BTC on these named exchanges with this exact entity tree.”

**Uniqueness & signature-pattern awareness**

* Rare structures (unusual jurisdiction combos, odd asset mixes, very specific legal/medical scenarios) can remain identifying even after abstraction.
* In addition, some architectures may strongly resemble **public patterns associated with my work**; in principle, an AI provider could correlate those with public writing and infer the relationship.
* For highly unusual or easily fingerprinted cases:
  * I bias toward **Class 0 only**, or **Class 0 + carefully abstracted Class 1**.
  * I skip external AI entirely for the most critical parts of the design if abstraction cannot meaningfully de-risk identification.

**Threat-tier gating**

* For **high-threat clients** (e.g. serious political exposure, prior state/media targeting, or extremely sensitive work as defined on the Threat Model & Scope page):
  * Class 2 (PPQ / Routstr) tools are **not used by default**.
  * If there is a compelling reason to use them, this requires:
    * Explicit, documented consent, and
    * Strong additional abstraction and fictionalization of non-critical details.
* Any client may request:
  * “Class 0 only” (local/self-hosted only), or
  * “Class 0 + Class 1 only” (no Class 2).

This is then recorded in the SOW / Dossier and treated as binding.

---

#### 6.2.5 Logging, metadata, and Synthetic Stack assumptions

* I assume **any external service may log prompts and metadata**, regardless of marketing language.
* For PPQ:
  * Prompts may transit its infrastructure and the upstream model providers it brokers, including web-search and Deep Research features.
([PayPerQ][4])
* For Routstr:
  * Prompts may pass through one or more nodes plus the final OpenAI-compatible backend it routes to. ([Routstr Documentation][7])
* For Maple and similar services:
  * I accept that they use E2EE and secure enclaves and claim zero data retention and no training on user data, based on their documentation and external reviews. ([Maple AI][1])
  * I still design as if:
    * Enclave or hosting bugs, misconfiguration, or lawful orders **could** expose prompts, and
    * A motivated adversary might eventually reach data in transit or at rest.
* From a threat-model standpoint:
  * All external AI providers are treated as **Synthetic Stack nodes**: they can receive lawful orders, be compromised, or change policies.
  * This is why catastrophic data never leaves Class 0 and why only **abstracted structure** can flow to Class 1/2.
* Network-level metadata is also a surface:
  * Where practical, SovStack AI calls use **segregated network contexts** (e.g. dedicated VPN profiles or distinct exit IPs) to reduce correlation with unrelated personal or business traffic.
  * This is an additional layer; it does not change the fundamental rule that abstractions must stand on their own.

---

#### 6.2.6 Local residues and hygiene

AI tools also leave traces on local machines (history, caches, configs):

* Local conversation histories and caches in Maple/PPQ/Routstr frontends are treated as part of the **client data surface**.
* For high-OPSEC / high-threat engagements:
  * Local histories are disabled where the tool allows it, **or**
  * Cleared on a regular schedule alongside OS and application caches.
* Tools are selected and configured to minimize background syncing and unnecessary telemetry; where possible, telemetry is disabled or blocked at the firewall level.

---

#### 6.2.7 Engagement-level controls & documentation

* If AI materially shapes a design (e.g. by helping generate a pattern that ends up in your Blueprint), the **artifact** is what is retained, not the raw prompts or responses.
* For high-OPSEC / high-threat clients:
  * The SOW or Dossier notes explicitly state the allowed AI classes:
    * e.g. “AI scope: Class 0 only”, or “AI scope: Class 0 + Class 1 (no Class 2)”.
  * If your threat posture changes mid-engagement (new public exposure, legal action, state attention), AI usage is re-evaluated and, if necessary, restricted further.
* You may request a short summary of:
  * Which AI provider classes were used (if any), and
  * The types of content (e.g. highly abstracted patterns only) that were processed externally, consistent with the constraints above.

This keeps AI integrated into the work **without** turning it into an unbounded data exhaust or a hidden second attack surface.

[1]: https://trymaple.ai/ "Maple AI"
[2]: https://greycoder.com/private-ai-comparison-maple-proton-lumo-and-perplexity-incognito-mode/ "Maple, Proton Lumo, Kagi Assistant and Perplexity - AI"
[3]: https://oneuptime.com/blog/post/2026-02-12-use-nitro-enclaves-for-confidential-computing-on-ec2/view "How to Use Nitro Enclaves for Confidential Computing on ..."
[4]: https://ppq.ai/ "PayPerQ | Pay-Per-Prompt AI Service"
[5]: https://ppq.ai/blog/introducing-deep-research "Introducing Deep Research"
[6]: https://discourse.devontechnologies.com/t/ai-provider-ppq-ai-supported/84503 "AI provider ppq.ai supported? - Artificial Intelligence"
[7]: https://router.mintlify.app/ "A decentralized LLM routing marketplace powered by Nostr ..."
[8]: https://github.com/Routstr/routstr-core/blob/main/README.md "routstr-core/README.md at main"