Service locality & governance Β· v1.0 Β· 2026

E-UBI Service Matrix

Not every service should live in the same place, replicate the same way, or be governed by the same rules. Community infrastructure gets safer when service placement, sync scope, and operator authority are documented as intentionally as the hardware.

🏠 local-only lanes · 🔄 federated sync · 🪞 regional mirrors · 🔐 secret custody · 👥 change authority
Community Networking / Local Infrastructure Nodes expansion and future network systems work: Caleb Bott.

This layer makes the previous infrastructure pages more operationally honest by defining which services stay local, which replicate outward, which can be mirrored publicly, and who is authorized to change each class of system.

Service placement should follow trust, failure mode, and social cost

A community node is not improved by pushing every function upstream or by insisting everything remain isolated forever. The useful middle ground is to classify services by what breaks if connectivity disappears, what information can safely travel, and what authority should remain local even when other systems federate.

🏠 Local-only (secrets, admin, sensitive records) → 🔄 Local primary + sync (posts, queues, package exchange) → 🪞 Regional mirror (docs, images, build artifacts) → 🌍 Public edge (announcements, open docs, outreach)
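
In operator terms, each lane can be written down as a small policy stanza in the same style as the change-class example later on this page. A minimal sketch; the lane names and keys here are illustrative, not a fixed schema:

# Example lane policy (illustrative keys)
[lane:local-only]
may_contain = secrets, admin interfaces, sensitive records
sync = never
mirror = never

[lane:local-primary-sync]
may_contain = posts, queues, package exchange
sync = reconcile with peers when links return
mirror = never

[lane:regional-mirror]
may_contain = docs, images, build artifacts
sync = push to regional mirrors
mirror = signed content only

[lane:public-edge]
may_contain = announcements, open docs, outreach
sync = publish read-only
mirror = anyone may copy
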
Local authority: admin rights stay at the site. A federation can share formats and recovery lanes without collapsing node administration into one remote operator domain.

Local utility: core services should degrade gracefully. If upstream vanishes, people should still be able to read docs, inspect dashboards, and use the local social/community layer where policy allows.

Mirrorability: only copy what improves resilience. Public docs, packages, base images, and notices are strong mirror candidates. Secrets and sensitive local records are not.

Governance scope: match approvals to impact. A splash page copy edit should not require the same approvals as a secret rotation, identity schema change, or peer trust-policy update.
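
The mirrorability rule in particular benefits from being machine-checkable rather than remembered. A sketch of an allow/deny list a mirror job could consult before copying anything outward; the paths and fields are examples, not a prescribed layout:

# Example mirror policy (illustrative)
[mirror:allow]
paths = /docs/public, /packages, /images/base, /notices
require = valid signature, public classification

[mirror:deny]
paths = /vault, /admin, /records/members
reason = secrets and sensitive local records never leave the node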

Make the service map visible before people depend on it

The matrix below is not a product roadmap. It is an operator reference: where the source of truth lives, whether a service syncs or mirrors, who can change it, and what should happen when a node loses upstream connectivity.

πŸ—ΊοΈ Service locality map

[Diagram: service locality map for E-UBI infrastructure. Service classes flow from local-only (credentials vault, admin dashboards, sensitive member records) to local primary + sync (TheEtherNet relay/cache, event queues, package and snapshot exchange) to regional mirror (build library mirror, base images, signed backups + packages) to public edge (open announcements, public docs and outreach pages, read-only publication).]

The right placement question is not “can this be cloud-hosted?” It is “where should the authoritative copy live, who can change it, and what remains useful if connectivity drops?”

Service | Placement | Federation / mirror policy | Governance note
Build library mirror | regional mirror | Mirror broadly; keep signed artifacts and docs in multiple regions. | Content updates can be delegated; integrity policy should be steward-controlled.
Community status dashboard | local primary | Local truth first; optionally publish read-only summaries outward. | Thresholds and public wording should be documented locally.
TheEtherNet relay / cache | local primary + sync | Queue and reconcile with peers when links return. | Moderation and trust policies should stay legible per node.
Identity / session broker | local-only or tightly scoped sync | Do not replicate secrets casually; define explicit federation rules. | Secret rotation and schema changes require named stewards.
Package / image cache | regional mirror | Mirror aggressively to improve rebuild speed and reduce WAN cost. | Integrity verification matters more than broad write access.
Public announcements | public edge | Safe to publish widely if sourced from documented local approval. | High-trust publication should still have a clear sign-off path.
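
In file form, each row of this matrix can become one stanza kept under version control beside the runbooks. A sketch of two entries in the same INI-style convention used elsewhere on this page; the field names are illustrative:

# Example service matrix entries (illustrative fields)
[service:identity-broker]
placement = local-only
federation = explicit rules only; no casual secret replication
change_authority = named stewards
offline_behavior = keep serving local sessions

[service:package-cache]
placement = regional-mirror
federation = mirror aggressively; verify signatures on ingest
change_authority = content updates delegated; integrity policy steward-controlled
offline_behavior = serve cached packages read-only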

Shared infrastructure still needs clear local authority

Governance is not bureaucracy for its own sake. It is how communities prevent accidental centralization, silent credential drift, and ambiguous responsibility. The useful version is lightweight but explicit: who can approve what, who holds secrets, who can declare an incident, and where the operator log lives.

Role split: maintainer · steward · publisher. Separate routine operators from people authorized to change trust relationships, rotate core secrets, or publish sensitive announcements.

Secret custody: two-person recovery path. No single volunteer should be the only route back into the node after turnover, burnout, or emergency absence.

Change classes: routine, material, trust-affecting. Classify changes by impact so people know when a simple log entry is enough and when explicit approval is required.

Escalation lane: named contacts + bounded authority. Outages, security concerns, and partner-node disputes should not depend on whoever notices chat first.
# Example change classes
[routine]
examples = docs copy, dashboard wording, scheduled package mirror refresh
approval = operator log entry

[material]
examples = VLAN changes, service placement move, backup target change
approval = local steward review

[trust-affecting]
examples = credential rotation, federation peer policy, identity schema change
approval = named stewards + documented rollback plan
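
The role split and the two-person recovery path can live in the same file, so a new steward can see who holds what without asking around. A sketch in the same style; every name here is a placeholder:

# Example role and custody map (names are placeholders)
[roles]
maintainer = routine operations, log-entry changes
steward = trust relationships, core secret rotation
publisher = sensitive announcements, public-edge sign-off

[secret-custody]
vault_recovery = steward-a, steward-b
# two named people, so no single volunteer is the only route back in
rotation_approval = named stewards + documented rollback plan
escalation_contact = steward-a primary, steward-b backup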

The best change process is small, written, and reversible

Communities do not need corporate ceremony. They do need a repeatable path for changing service placement, federation policy, or credentials without losing track of what changed, why, and how to roll it back.

1. Classify the proposed change
Decide whether it is routine, material, or trust-affecting before touching the system. This sets the approval path and rollback expectations.

2. State the service impact plainly
Write down which services move, who loses or gains authority, what sync behavior changes, and what users will notice if the change goes badly.

3. Prepare rollback and credential implications
Any placement or governance change that affects secrets, trust, or backups should include a rollback path before implementation starts.

4. Make the change in a maintenance window
Prefer bounded windows with a named operator present, current snapshots available, and explicit verification steps listed in advance.

5. Log the outcome and update the matrix
If the service placement or governance rule changed, the documentation should change the same day. A stale matrix is how communities accidentally centralize.
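
A completed change record can stay as small as the process itself. A sketch of what steps 1 through 5 might produce for a material placement move; every value is illustrative:

# Example change record (illustrative values)
[change:docs-to-regional-mirror]
class = material
impact = docs move from local primary to regional mirror; readers keep read access
rollback = repoint docs service to the local copy; snapshot taken before the move
window = bounded maintenance window, named operator present
outcome = completed; matrix and this record updated the same day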

Continuity means a second person can understand the map and act

The service matrix becomes valuable when it shortens restore and escalation time. During an outage or dispute, operators should immediately know where the authoritative copy lives, whether a peer can help, which steward is allowed to authorize changes, and which runbook applies to the affected service class.

Local outage

Use the matrix to identify which services must remain on-node, which can be temporarily served from mirror copies, and which public publications should be paused.

Peer disagreement

Trust-policy disputes should route through named stewards rather than ad hoc operator edits. Shared formats do not require shared unquestioned authority.

Credential event

If secrets are rotated or suspected compromised, the matrix should already show which services are affected and which mirrored or public lanes can remain up safely.

Volunteer turnover

The matrix and operator packet together should let a new steward see service ownership, sync scope, and approval boundaries without tribal knowledge.
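
These scenarios can also be indexed in the operator packet, mapping each event type to the matrix fields worth reading first. A sketch in the same convention; the keys are illustrative:

# Example continuity index (illustrative)
[scenario:local-outage]
read_first = placement, offline_behavior
action = keep local-only services up; pause public-edge publication

[scenario:peer-disagreement]
read_first = change_authority, federation
action = route to named stewards; no ad hoc trust-policy edits

[scenario:credential-event]
read_first = secret-custody, affected service stanzas
action = rotate via named stewards; keep unaffected mirrored lanes up

[scenario:volunteer-turnover]
read_first = roles, change_authority
action = hand over via the matrix and operator packet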