🛰️ Regional / Multi-Node Guide · v1.0 · 2026

E-UBI Regional Federation

Once a single community node is stable, the next engineering question is how several of them cooperate. A federation layer should preserve local control, replicate only what needs to move, survive bad links, and make recovery easier instead of introducing a fragile mini-cloud.

🏘️ Local-first writes 🔄 Deferred sync 🧭 Explicit trust boundaries 🗄️ Regional mirrors 🛠️ Mutual recovery capacity
Community Networking / Local Infrastructure Nodes expansion and future network systems work: Caleb Bott. This page extends the single-node network specification into the regional layer: peer replication, mirror roles, sync discipline, operational boundaries, and the practical path from one cabinet to a resilient mesh of community-run infrastructure.

From isolated sites to a cooperative regional layer

A regional federation is not a central platform with decorative edge nodes. Each site should remain capable of serving its own residents locally, while sharing selected data, software artifacts, and recovery assets with peers. The regional layer exists to widen resilience, not to erase autonomy.

🏠 Local node
docs, identity, relay, backups
🔄 Sync policy
queue, filter, sign, retry
🗺️ Regional peers
mirrors, failover targets, alerts
🌍 Public edge / TheEtherNet
selective outward publication

🧭 Regional peer topology

[Diagram: Regional federation topology for E-UBI community nodes] Four local community nodes exchange selective replication with each other and with a regional mirror, while only chosen public outputs move to internet-facing services and TheEtherNet.
Node A: neighborhood services. Local writes stay local first.
Node B: clinic / mutual-aid site. Independent ops + local cache.
Node C: workshop / lab site. Runs if the WAN disappears.
Node D: archive / education site. Secondary backup target.
Regional Mirror: docs, packages, snapshots, telemetry rollups. Not the origin for every transaction.
Federation Policy: filters, signatures, retry queues, alerts. Determines what moves and when.
Public Edge: web mirrors / TheEtherNet relay. Selected publication only.
Links: selected replication · mirror + restore path · policy / queue / audit.
Legend: community node lanes · delayed peer sync · regional mirror / restore layer · public-facing publication.

A useful rule: the regional mirror should be able to restore a damaged node, but it should not be the only place where truth exists. Keep the origin of community activity close to the community that generated it.

Node role
Serve local traffic first
Each site keeps docs, dashboards, and critical coordination tools useful even when the upstream is weak or absent.
Regional mirror
Cache + recovery target
Mirror documentation, packages, media, and signed snapshots so rebuilding a failed node is a recovery event, not a catastrophe.
Peer relations
Selective, not universal
Some content replicates everywhere, some stays inside a site, and some is aggregated only as counts, summaries, or signed exports.
Failure model
Graceful degradation
If inter-site links fail, local services continue. If one site fails, peers help restore it. If the public internet fails, the federation shrinks inward instead of disappearing.

Replication should be boring, inspectable, and restartable

The wrong way to federate is to let every service improvise its own sync behavior. The better approach is to classify data, define its allowed destinations, sign or checksum important artifacts, and treat queue replay as a normal operating condition rather than an exception.

# Example federation policy manifest

[public-docs]
origin       = local node
replicate_to = peer nodes, regional mirror, public edge
consistency  = eventual

[community-posts]
origin       = local node
replicate_to = approved peers / TheEtherNet relay
consistency  = queued + retried

[member-records]
origin       = local node
replicate_to = encrypted backup target only
consistency  = snapshot / restore, not universal live fan-out

[telemetry-rollups]
origin       = local node
replicate_to = regional mirror
consistency  = aggregated summaries, not raw unrestricted exhaust
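As a minimal illustration of how a node might enforce this manifest, the sketch below loads the same INI-style layout with Python's standard configparser and answers the one question every sync lane should ask before moving anything: which destinations is this data class allowed to reach? The file name federation-policy.ini and the helper names are assumptions for this sketch, not part of any existing E-UBI tooling, and it simply treats the replicate_to value as a comma-separated list.

# Minimal sketch: load the federation policy manifest (INI layout as above)
# and report where a given data class is allowed to replicate.
import configparser

def load_policy(path: str) -> configparser.ConfigParser:
    policy = configparser.ConfigParser()
    with open(path, encoding="utf-8") as handle:
        policy.read_file(handle)
    return policy

def allowed_destinations(policy: configparser.ConfigParser, data_class: str) -> list[str]:
    """Return the allowed destinations for a data class, or [] if it is not listed."""
    if not policy.has_section(data_class):
        return []  # unknown data classes stay local by default
    raw = policy.get(data_class, "replicate_to", fallback="")
    return [dest.strip() for dest in raw.split(",") if dest.strip()]

if __name__ == "__main__":
    policy = load_policy("federation-policy.ini")  # hypothetical file name
    print(allowed_destinations(policy, "public-docs"))
    print(allowed_destinations(policy, "member-records"))

Treating an unlisted data class as local-only makes the default failure mode "nothing moved" rather than "something leaked", which matches the replicate-the-minimum rule below.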
1
Define data classes before enabling sync
Separate public docs, local operational data, community posts, private records, software artifacts, and telemetry. Replication policy becomes clearer once the data is named honestly.
2
Queue writes and replay them safely
Assume some links are intermittent. Every sync lane should survive restarts, power loss, duplicate delivery attempts, and long retry windows without corrupting the target; a minimal outbox sketch follows this list.
3
Sign and checksum important artifacts
Mirrored docs, application bundles, configuration snapshots, and backup archives should be verifiable. Recovery is faster when peers can prove they are holding the right file (see the checksum handling in the sketch after this list).
4
Replicate the minimum that improves resilience
Regional federation is not a license to over-collect or centralize. Move what helps continuity, accountability, or public reach; keep everything else local by default.
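To make steps 2 and 3 concrete, here is a minimal sketch of a restartable sync outbox: items persist in SQLite so they survive restarts and power loss, each carries a SHA-256 checksum the receiving peer can verify, and delivery is keyed on a stable item id so duplicate attempts overwrite rather than multiply. The table layout and the caller-supplied send function are assumptions for this example, not an existing E-UBI interface.

# Minimal sketch of a restartable sync outbox: items persist in SQLite,
# carry a SHA-256 checksum, and are marked sent only after the peer accepts
# them, so replaying after a crash or a link outage is safe by design.
import hashlib
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS outbox (
    item_id TEXT PRIMARY KEY,  -- stable id: duplicates overwrite, not multiply
    payload BLOB NOT NULL,
    sha256  TEXT NOT NULL,
    sent    INTEGER NOT NULL DEFAULT 0
)
"""

def open_outbox(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    return conn

def enqueue(conn: sqlite3.Connection, item_id: str, payload: bytes) -> None:
    digest = hashlib.sha256(payload).hexdigest()
    conn.execute(
        "INSERT OR REPLACE INTO outbox (item_id, payload, sha256, sent) VALUES (?, ?, ?, 0)",
        (item_id, payload, digest),
    )
    conn.commit()

def replay(conn: sqlite3.Connection, send) -> None:
    """Retry every unsent item; send() is a caller-supplied delivery function."""
    rows = conn.execute(
        "SELECT item_id, payload, sha256 FROM outbox WHERE sent = 0"
    ).fetchall()
    for item_id, payload, digest in rows:
        if send(item_id, payload, digest):  # peer re-hashes the payload and compares
            conn.execute("UPDATE outbox SET sent = 1 WHERE item_id = ?", (item_id,))
            conn.commit()  # commit per item, so it is safe to stop at any point

Because an item is marked sent only after the peer acknowledges it, a crash mid-replay re-sends a few items instead of losing them; the receiving side therefore has to treat duplicates as harmless.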

A federation works only if the boundaries stay legible

Nodes should cooperate, but not share everything. Administration, identity, secrets, and sensitive records need explicit boundaries. Communities can align on formats and protocols without collapsing into one administrative domain.

🔐 Trust and data boundary map

[Diagram: Trust boundary map for the E-UBI federation] Public content, shared federation services, and private local records are separated by explicit trust boundaries, with a signed policy engine approving destinations and scope.
Public / Mirrorable: docs · build guides · public notices · package caches. Safe for wide replication when signed.
Federated / Shared: posts · relay queues · software releases · rollup telemetry. Policy-controlled, queued, auditable.
Local / Restricted: member records · secrets · raw logs · privileged admin state. Back up securely; do not casually fan out.
Boundaries: publish / sync boundary · backup / access boundary.

The strongest architectural move is to make data classes visible to operators. If volunteers cannot explain what is public, what is shared, and what is local-only, the federation boundary is already too blurry.
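One low-effort way to keep those classes visible is to render the same policy manifest into a plain-language summary that volunteers can read during a handoff. The sketch below reuses the INI layout from the earlier example; the exact wording and the file name remain assumptions.

# Minimal sketch: turn the federation policy manifest into a one-screen summary
# so any operator can answer "what leaves this node, and where does it go?"
import configparser

def print_policy_summary(path: str) -> None:
    policy = configparser.ConfigParser()
    policy.read(path, encoding="utf-8")
    for data_class in policy.sections():
        destinations = policy.get(data_class, "replicate_to", fallback="(none listed)")
        consistency = policy.get(data_class, "consistency", fallback="(unspecified)")
        print(f"{data_class}:")
        print(f"  goes to : {destinations}")
        print(f"  behavior: {consistency}")

if __name__ == "__main__":
    print_policy_summary("federation-policy.ini")  # hypothetical file name from the earlier sketch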

Identity is federation-aware, not universally merged

Use interoperable identities or relay mappings where needed, but avoid assuming that admin privileges on one site automatically carry over to another. Shared protocol is not the same as shared sovereignty.

Secrets should stay closest to the site that needs them

API keys, admin tokens, restore credentials, VPN material, and device enrollment secrets should not be copied into every peer by convenience. Distribute the minimum required to perform recovery and maintenance.

Backups are not a permission model

A peer may be allowed to hold encrypted archives without being allowed to browse the underlying contents. Restore capability and everyday read access are different rights and should stay different.
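One way to keep restore capability and everyday read access separate is to encrypt archives before they ever leave the origin node, with the key held only in the local recovery kit. The sketch below uses Fernet from the third-party Python cryptography package purely as an illustration; the key-file arrangement and file names are assumptions, it reads whole archives into memory for simplicity, and it is not a prescribed E-UBI mechanism.

# Minimal sketch: encrypt a backup archive locally before shipping it to a peer.
# The peer stores ciphertext it cannot read; only key holders can restore.
# Requires the "cryptography" package (pip install cryptography).
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_archive(archive: Path, key_file: Path) -> Path:
    key = key_file.read_bytes()  # key lives in the local recovery kit, not on peers
    ciphertext = Fernet(key).encrypt(archive.read_bytes())
    out = archive.parent / (archive.name + ".enc")
    out.write_bytes(ciphertext)
    return out  # ship this file to the peer; keep the key out of every sync lane

def restore_archive(encrypted: Path, key_file: Path) -> bytes:
    key = key_file.read_bytes()
    return Fernet(key).decrypt(encrypted.read_bytes())

# One-time setup on the origin node (store a copy of the key offline as well):
#     Path("backup.key").write_bytes(Fernet.generate_key())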

Aggregation beats unrestricted exhaust

Regional observability is valuable, but prefer summaries, health signals, and coarse metrics over indiscriminate raw telemetry. Communities need visibility without building a surveillance pipeline by accident.
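As an example of what summaries-over-exhaust can look like in practice, the sketch below reduces a raw service log to a handful of counters before anything leaves the node. The one-JSON-object-per-line log format, the field names, and the rollup shape are assumptions for illustration.

# Minimal sketch: roll raw log lines up into coarse counters so the regional
# mirror sees health signals, not individual requests or identities.
import json
from collections import Counter
from datetime import datetime, timezone

def rollup(log_path: str) -> dict:
    status_counts = Counter()
    total = 0
    with open(log_path, encoding="utf-8") as handle:
        for line in handle:
            try:
                event = json.loads(line)  # assumes one JSON object per line
            except json.JSONDecodeError:
                continue  # skip malformed lines instead of exporting them
            status_counts[str(event.get("status", "unknown"))] += 1
            total += 1
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "total_events": total,
        "by_status": dict(status_counts),  # counts only; no addresses, no user ids
    }

if __name__ == "__main__":
    print(json.dumps(rollup("service.log"), indent=2))  # hypothetical log path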

Regional infrastructure only matters if people can operate and recover it

The mature regional layer is mostly operational discipline: restore drills, rotating credentials, package mirrors, replacement inventories, alerting thresholds, peer contacts, and documentation that survives turnover. The dedicated operations runbook picks up that maintenance layer, while the operator handbook and service runbooks make authority and service-specific response legible across sites.

Health signals
Sync lag · queue depth · backup age
Track the signals that reveal drift before users notice. A green homepage is less useful than a queue-depth graph and a last-good-snapshot timestamp; a minimal check script follows these cards.
Recovery target
Rebuild from mirror + snapshot
Every node should have a documented path to re-image hardware, restore a config set, reload services, and rejoin peers without heroics.
Spare strategy
Hold cheap, failure-prone parts locally
Power supplies, SSDs, cables, APs, and a ready boot image prevent more downtime than an elaborate theory of perfect uptime.
Governance layer
Named stewards + written handoff
A federation is social infrastructure too. Make sure there is an operator roster, escalation path, change log, and documented decision authority.
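A minimal version of that discipline can be a script run from cron that compares the three headline signals against thresholds and prints anything out of bounds. The sketch below assumes the outbox table from the earlier sync sketch, a marker file touched after each successful peer exchange, and snapshot files ending in .snap; the thresholds and paths are placeholders, not recommendations.

# Minimal sketch: compare sync lag, queue depth, and backup age against
# thresholds so drift surfaces before residents notice anything broken.
import sqlite3
import time
from pathlib import Path

MAX_QUEUE_DEPTH  = 500        # unsent items waiting to replicate
MAX_BACKUP_AGE_H = 26         # hours since the last good snapshot
MAX_SYNC_LAG_S   = 6 * 3600   # seconds since the last successful peer exchange

def check(outbox_db: str, last_sync_marker: str, backup_dir: str) -> list[str]:
    problems = []

    depth = sqlite3.connect(outbox_db).execute(
        "SELECT COUNT(*) FROM outbox WHERE sent = 0").fetchone()[0]
    if depth > MAX_QUEUE_DEPTH:
        problems.append(f"queue depth {depth} exceeds {MAX_QUEUE_DEPTH}")

    lag = time.time() - Path(last_sync_marker).stat().st_mtime
    if lag > MAX_SYNC_LAG_S:
        problems.append(f"sync lag {lag / 3600:.1f}h exceeds {MAX_SYNC_LAG_S / 3600:.0f}h")

    snapshots = sorted(Path(backup_dir).glob("*.snap"), key=lambda p: p.stat().st_mtime)
    age_h = (time.time() - snapshots[-1].stat().st_mtime) / 3600 if snapshots else float("inf")
    if age_h > MAX_BACKUP_AGE_H:
        problems.append(f"newest backup is {age_h:.1f}h old (limit {MAX_BACKUP_AGE_H}h)")

    return problems

if __name__ == "__main__":
    for line in check("outbox.db", "last-sync.ok", "/srv/backups") or ["all health signals within limits"]:
        print(line)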

Build the federation in small honest stages

The safest expansion path is incremental but intentional: prove a single node, add restore discipline, add a second peer, then promote selected mirror and synchronization lanes as the operating team becomes competent enough to maintain them.

1
Stabilize one node first
No federation before the first node has labeled hardware, backups, metrics, and a maintainer who can restore it.
2
Add a second site as a real peer, not a passive consumer
Use the second node to prove snapshot exchange, docs mirroring, config export/import, and queue replay across an imperfect link.
3
Promote regional mirrors for static artifacts
Mirror what improves recovery first: build documentation, package bundles, base images, and public service announcements; a checksum-verification sketch follows this list.
4
Expand social and data federation selectively
Only after the operators trust the recovery model should more complex post, identity, or application sync lanes be introduced.
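When static artifacts start moving to a regional mirror, a checksum manifest is the simplest way for a peer to prove it is holding the right files before anyone relies on them for a rebuild. The sketch below assumes a manifest with one "checksum filename" line per artifact, similar to sha256sum output; the paths are placeholders.

# Minimal sketch: verify a mirrored artifact directory against a checksum
# manifest before trusting it as a restore source.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_mirror(mirror_dir: str, manifest: str) -> list[str]:
    """Return a list of problems; an empty list means the mirror matches."""
    problems = []
    for line in Path(manifest).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        target = Path(mirror_dir) / name
        if not target.exists():
            problems.append(f"missing: {name}")
        elif sha256_of(target) != expected:
            problems.append(f"checksum mismatch: {name}")
    return problems

if __name__ == "__main__":
    for issue in verify_mirror("/srv/mirror", "/srv/mirror/SHA256SUMS") or ["mirror verified"]:
        print(issue)

Signing the manifest itself, rather than every file, keeps the verification habit cheap enough that operators actually run it before a restore.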