A practical technical specification for the community networking layer that sits between E-UBI terminals, shared local services, and TheEtherNet — designed for local-first operation, modest power budgets, and maintenance by an actual community team. This page covers the single-node layer; the regional federation guide picks up where this leaves off.
A network node is not just “internet access.” In the E-UBI model it is the local infrastructure layer that keeps documentation, status, identity, coordination tools, and selected social functions available close to the people using them. If the upstream is slow, expensive, or temporarily down, the neighborhood should still be able to read, post, and coordinate on the local network.
The objective is not enterprise complexity for its own sake. It is to make sure a public kiosk browser session or guest phone never lands on the same unrestricted network as management interfaces, storage shares, or backup targets.
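That boundary is easier to keep honest if it can be tested from the guest side. Below is a minimal probe sketch in Python; the management addresses and ports are placeholders, not a project standard. Run it from a kiosk or guest device and treat any successful connection as a segmentation failure.

```python
#!/usr/bin/env python3
"""Probe that the management plane is unreachable from a guest/kiosk VLAN.

Run from a device on the guest network. Every attempt below should time
out or be refused; any success means the VLAN boundary has a hole.
"""
import socket

# Hypothetical management targets; substitute the node's real addresses.
MANAGEMENT_TARGETS = [
    ("10.10.0.1", 443),   # router admin interface (example address)
    ("10.10.0.2", 443),   # managed switch UI (example address)
    ("10.10.0.10", 22),   # compute node SSH (example address)
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    leaks = [(h, p) for h, p in MANAGEMENT_TARGETS if is_reachable(h, p)]
    for host, port in leaks:
        print(f"FAIL: {host}:{port} is reachable from this VLAN")
    if leaks:
        raise SystemExit(1)
    print("OK: no management targets reachable from this network")
```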
Hardware selection is not about chasing enterprise gear. It is about assembling a reliable local node from widely available parts that can be serviced, replaced, and powered responsibly. Use the lower-power starter configuration first, then scale into more radios, more storage, or a stronger compute platform only after measured demand justifies it.
| Component | Example model / spec | Role | Typical draw / capacity | Status |
|---|---|---|---|---|
| Compute node | Raspberry Pi 5 (8 GB) or fanless Intel N100 mini-PC | Runs local web apps, sync workers, dashboard, auth broker, and maintenance tools | 8–20 W | Required |
| Primary SSD | 1 TB NVMe or SATA SSD, endurance-oriented if logs are heavy | Local databases, docs mirror, media cache, and snapshots | 1–4 W | Required |
| Backup SSD | 2 TB USB 3.2 external SSD | Nightly local backup target kept separate from the primary drive | 1–3 W | Recommended |
| Router / firewall | MikroTik hAP ax3, GL.iNet Flint 2, or small OPNsense box | DHCP, VLANs, WAN failover, VPN, and admin boundary enforcement | 8–15 W | Required |
| Managed switch | 8-port managed switch; PoE if powering APs or cameras | Structured LAN, infrastructure uplinks, future kiosk growth | 5–10 W base | Required |
| Access point | Wi-Fi 6 AP such as Ubiquiti U6+ or MikroTik cAP ax | Primary resident and kiosk access for indoor coverage | 9–13 W | Required |
| Backhaul fallback | LTE/5G modem or gateway with external antenna option | Failover path for alerts, remote admin, and deferred sync | 3–8 W average | Optional |
| Battery buffer / UPS | 12 V LiFePO₄ + regulated DC path or small online UPS | Ride through short outages and prevent abrupt filesystem loss | 20–40 Ah target | Recommended |
| Outdoor bridge radio | Ubiquiti NanoBeam / MikroTik Wireless Wire class link | Point-to-point building links or rooftop extension | 6–11 W each | Optional |
| Cabling + spares | Cat6 patch leads, labeled power cords, spare SFPs or injectors | Physical resilience; the cheapest parts often cause the most downtime | n/a | Recommended |
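To sanity-check the battery buffer against the required baseline, the table's typical draws can be turned into a rough runtime estimate. A minimal sketch, assuming an 80% usable depth of discharge and 85% conversion efficiency (both assumptions, not measured values):

```python
#!/usr/bin/env python3
"""Rough runtime estimate for the battery buffer against the table above.

Draw figures come from the table's typical ranges; the depth-of-discharge
and conversion-efficiency values are assumptions, not measurements.
"""

# (component, low watts, high watts) for the required baseline stack.
BASELINE = [
    ("compute node", 8, 20),
    ("primary SSD", 1, 4),
    ("router/firewall", 8, 15),
    ("managed switch", 5, 10),
    ("access point", 9, 13),
]

BATTERY_VOLTS = 12.0
BATTERY_AH = (20, 40)         # the table's 20-40 Ah target
USABLE_FRACTION = 0.8         # assumed LiFePO4 depth-of-discharge limit
CONVERSION_EFFICIENCY = 0.85  # assumed regulator/inverter losses

low_draw = sum(lo for _, lo, _ in BASELINE)
high_draw = sum(hi for _, _, hi in BASELINE)
print(f"Baseline draw: {low_draw}-{high_draw} W")

for ah in BATTERY_AH:
    usable_wh = BATTERY_VOLTS * ah * USABLE_FRACTION * CONVERSION_EFFICIENCY
    print(f"{ah} Ah pack (~{usable_wh:.0f} Wh usable): "
          f"{usable_wh / high_draw:.1f}-{usable_wh / low_draw:.1f} h at baseline draw")
```

Under those assumptions the 20 Ah floor rides the required baseline through roughly 2.5 to 5 hours of outage; doubling the pack roughly doubles that.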
A first cabinet should optimize for reachability, labeling, airflow, and easy swaps. “Looks clean” matters less than whether the next maintainer can replace a router, trace a port, and find the backup drive without guesswork.
Most users will never care what hardware sits in the cabinet. They will notice whether the docs load instantly, whether status pages stay up during a bad ISP day, and whether posts or records survive intermittent connectivity. The service stack should therefore bias toward low-complexity, high-visibility tools.
There is no prize for exotic orchestration on the first node. PM2 or systemd is enough if the process list is explicit, logs are collected, and the restore procedure is written down.
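With systemd as the supervisor, an explicit process list can literally be a short, checked-in script that compares the expected units against what is actually running. A sketch with hypothetical unit names:

```python
#!/usr/bin/env python3
"""Explicit process-list check for a systemd-supervised node.

Unit names are placeholders for whatever the node actually runs; the
point is that the expected list lives in one reviewable place.
"""
import subprocess

# Hypothetical service names; replace with the node's real unit files.
EXPECTED_UNITS = [
    "node-docs.service",
    "node-status.service",
    "node-sync.service",
    "node-backup.timer",
]

def unit_state(unit: str) -> str:
    """Return systemd's one-word state for a unit (e.g. 'active', 'failed')."""
    result = subprocess.run(
        ["systemctl", "is-active", unit],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or "unknown"

if __name__ == "__main__":
    failures = 0
    for unit in EXPECTED_UNITS:
        state = unit_state(unit)
        print(f"{unit}: {state}")
        if state != "active":
            failures += 1
    raise SystemExit(1 if failures else 0)
```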
Build the node on a bench first, verify every service on the local network, then move it into the cabinet or site only after backup, restore, and degraded-mode tests are complete. Field debugging is expensive; staged deployment is cheap.
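A bench-stage smoke test can be a single script that hits every local endpoint and fails loudly. The sketch below uses placeholder URLs; run it once on the bench, and again with the WAN cable pulled to confirm the degraded-mode story.

```python
#!/usr/bin/env python3
"""Bench-stage smoke test: confirm every local service answers over HTTP.

URLs are placeholders for the node's actual service endpoints.
"""
import urllib.error
import urllib.request

# Hypothetical local endpoints; substitute the node's real addresses.
ENDPOINTS = {
    "docs mirror": "http://192.168.10.10:8080/",
    "status page": "http://192.168.10.10:8081/health",
    "dashboard": "http://192.168.10.10:8082/",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        ok = False
    print(f"{'OK  ' if ok else 'FAIL'} {name}: {url}")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    raise SystemExit(0 if all(results) else 1)
```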
Keep the primary data and the backup target on clearly separated, consistently named mounts, for example /srv/node-data and /srv/node-backup. Do not let backups depend on an operator remembering which USB drive is which.

The hardest part of community infrastructure is not the first install. It is the quiet third month when a disk starts filling, a radio drifts, or a volunteer hands off the keys. A good operations checklist lowers the skill floor and makes maintenance predictable enough to share.
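One checklist item worth automating first is the nightly backup between those two fixed paths. A minimal sketch, assuming dated snapshot directories and a two-week retention (both assumptions):

```python
#!/usr/bin/env python3
"""Nightly snapshot from /srv/node-data to /srv/node-backup via rsync.

A minimal sketch: one dated snapshot directory per night, plus simple
pruning. Retention count is an assumption; adjust to the node's needs.
"""
import datetime
import pathlib
import shutil
import subprocess

SOURCE = pathlib.Path("/srv/node-data")
BACKUP_ROOT = pathlib.Path("/srv/node-backup")
KEEP = 14  # assumed retention: two weeks of nightly snapshots

def run_backup() -> pathlib.Path:
    """Copy the data tree into a fresh dated snapshot directory."""
    stamp = datetime.date.today().isoformat()
    target = BACKUP_ROOT / f"snapshot-{stamp}"
    target.mkdir(parents=True, exist_ok=True)
    # -a preserves permissions/times; --delete mirrors removals into the snapshot.
    subprocess.run(
        ["rsync", "-a", "--delete", f"{SOURCE}/", f"{target}/"],
        check=True,
    )
    return target

def prune() -> None:
    """Drop the oldest snapshots beyond the retention count."""
    snapshots = sorted(BACKUP_ROOT.glob("snapshot-*"))
    for old in snapshots[:-KEEP]:
        shutil.rmtree(old)

if __name__ == "__main__":
    print(f"wrote {run_backup()}")
    prune()
```

These are full copies each night; where backup space is tight, rsync's --link-dest option against the previous snapshot turns them into hardlinked incrementals.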
When something feels wrong, start with the small signals that reveal whether the problem is power, storage, routing, or app supervision.
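A triage script can cover three of those four signals directly; power usually needs eyes on the cabinet. The sketch below checks storage, routing, and supervision in one pass, with assumed thresholds, probe address, and unit names:

```python
#!/usr/bin/env python3
"""First-pass triage: storage, routing, and supervision signals in one run.

The threshold, WAN probe address, and unit list are assumptions; the aim
is a single command that narrows down "something feels wrong".
"""
import shutil
import subprocess

DATA_PATH = "/srv/node-data"
DISK_ALERT = 0.85            # assumed: warn above 85% full
WAN_PROBE = "9.9.9.9"        # any stable external address works
UNITS = ["node-docs.service", "node-sync.service"]  # hypothetical names

def check_storage() -> None:
    """Warn when the data volume crosses the fill threshold."""
    usage = shutil.disk_usage(DATA_PATH)
    frac = usage.used / usage.total
    flag = "WARN" if frac > DISK_ALERT else "ok  "
    print(f"{flag} storage: {frac:.0%} used on {DATA_PATH}")

def check_routing() -> None:
    """One ping with a short deadline tells WAN-up from WAN-down."""
    up = subprocess.run(
        ["ping", "-c", "1", "-W", "2", WAN_PROBE],
        capture_output=True,
    ).returncode == 0
    print(f"{'ok  ' if up else 'WARN'} routing: WAN probe {'reachable' if up else 'unreachable'}")

def check_supervision() -> None:
    """Report any expected unit that systemd does not show as active."""
    for unit in UNITS:
        state = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        print(f"{'ok  ' if state == 'active' else 'WARN'} supervision: {unit} is {state}")

if __name__ == "__main__":
    check_storage()
    check_routing()
    check_supervision()
```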
The community node sits between physical terminals and broader social coordination. It should be capable of serving local traffic on its own, but also ready to exchange selected data with peer nodes, remote backup targets, and TheEtherNet when the wider path is healthy. Once one node is stable, the next design question is no longer “can it run?” but “how do several of these cooperate without collapsing into one opaque central system?”
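The simplest shape for that readiness is a local outbox flushed only when the wider path looks healthy. A skeleton sketch, where the queue location, peer name, and transfer command are all placeholders for what the federation guide will define:

```python
#!/usr/bin/env python3
"""Deferred-sync skeleton: queue outbound files locally, flush when the
wider path is healthy. Queue directory, peer, and transport below are
placeholders, not a defined protocol.
"""
import pathlib
import subprocess

QUEUE_DIR = pathlib.Path("/srv/node-data/outbox")   # hypothetical queue location
PEER = "peer-node.example.org"                      # hypothetical peer/mirror

def wan_healthy() -> bool:
    """Cheap health gate: one ping to the peer with a short deadline."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", PEER],
        capture_output=True,
    ).returncode == 0

def flush_queue() -> int:
    """Push queued files to the peer, removing each on confirmed transfer."""
    sent = 0
    for item in sorted(QUEUE_DIR.glob("*")):
        done = subprocess.run(
            ["rsync", "-a", "--remove-source-files", str(item), f"{PEER}:inbox/"],
            capture_output=True,
        ).returncode == 0
        if not done:
            break  # stop on first failure; retry on the next healthy window
        sent += 1
    return sent

if __name__ == "__main__":
    if wan_healthy():
        print(f"flushed {flush_queue()} queued item(s)")
    else:
        print("upstream unhealthy; keeping queue local")
```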
Related pages:

- Homepage: the full framing, manifesto, and the integrated network layer overview that places this spec in the larger project.
- Device Blueprint: the terminal specification for the physical endpoint; this document covers the local infrastructure that makes those endpoints coordinate, cache, and endure.
- Federation Guide: extends this single-node spec into regional mirrors, trust boundaries, delayed replication, and recovery capacity shared across multiple communities.
- Operations Runbook: turns this architecture into maintainable infrastructure with backup cadence, restore drills, incident response lanes, and steward handoff.
- Service Matrix: classifies local-only services, federated sync lanes, public mirrors, and the approval boundaries that keep growth from becoming accidental centralization.
- Identity & Trust Guide: turns the stack into a governed system by documenting local authority, peer trust scope, secret custody, and key rotation instead of leaving them implicit.
- Operator Handbook: defines steward roles, escalation lanes, custody pairs, and handoff expectations so the stack does not depend on one person's memory.
- Service Runbooks: convert this node design into service-class procedures for identity, mirrors, relay, and backup recovery tied to the current TheEtherNet implementation.
- TheEtherNet: community nodes give TheEtherNet a place to live closer to the people using it, locally cached, locally accountable, and not entirely dependent on a distant server.