📡 Local Infrastructure Node Spec · v1.0 · 2026

E-UBI Network Stack

A practical technical specification for the community networking layer that sits between E-UBI terminals, shared local services, and TheEtherNet — designed for local-first operation, modest power budgets, and maintenance by an actual community team. This page covers the single-node layer; the regional federation guide picks up where this leaves off.

🍓 Pi 5 / N100 🔌 Managed Switch + PoE 📶 Wi-Fi 6 Access ☁️ Tunnel / VPN Egress 🗄️ Local Docs + Postgres 🔄 PM2 / systemd
Community networking, local infrastructure node expansion, and future network systems work: Caleb Bott. This document turns the networking layer into something concrete: topology, BOM, deployment path, and the maintenance rhythm required to keep a node useful over time.

What a local node actually does

A network node is not just “internet access.” In the E-UBI model it is the local infrastructure layer that keeps documentation, status, identity, coordination tools, and selected social functions available close to the people using them. If the upstream is slow, expensive, or temporarily down, the neighborhood should still be able to read, post, and coordinate on the local network.

🏠
Node Class
Indoor starter node
One room or one building, 10–50 regular users, low-noise hardware, shared access point, and battery-backed router.
👥
Concurrency Target
20–80 active sessions
Enough for kiosks, staff devices, neighborhood users, and a handful of always-on dashboards without demanding data-center hardware.
⚡
Power Envelope
30–60 W typical
Compute, switch, AP, and storage combined. Spikes above that are acceptable, but the steady-state budget should stay realistic for solar or UPS support.
🗄️
Local Storage
1–2 TB SSD
Large enough for docs, mirrored media, logs, database snapshots, and room for backups without forcing spinning disks into small enclosures.
🔒
Security Boundary
Segmented LAN + outbound admin
Public devices never share a flat network with admin services. Remote access should default to outbound tunnel or VPN, not exposed inbound ports.
🔄
Sync Model
Local-first, deferred federation
The node accepts writes locally, then replicates upstream or laterally to peer nodes once backhaul and policy allow (a minimal sync sketch appears below).
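A minimal sketch of what deferred federation can look like in practice: a periodic job that pushes queued snapshots upstream only when the backhaul answers. The peer hostname, queue path, and probe target below are illustrative assumptions, not fixed parts of the spec.

# Sketch: push queued data upstream only when the backhaul is healthy.
# Peer hostname, queue path, and probe target are illustrative assumptions.
# --- /usr/local/sbin/node-sync.sh ---
#!/usr/bin/env bash
set -euo pipefail
# skip this run entirely if the upstream peer is unreachable
ping -c 1 -W 3 peer.eubi.example >/dev/null 2>&1 || exit 0
# queue semantics: files leave the local queue only after a successful transfer
rsync -a --partial --remove-source-files \
  /srv/node-data/sync-queue/ sync@peer.eubi.example:/srv/ingest/eubi-node-01/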

🧭 Service path — from resident device to wider federation

📱 Phones + kiosks
users reading docs, posting updates, checking status
📶 AP + switch + router
segmentation, DHCP, rate limits, local routing
🗄️ Service node
docs, dashboards, auth, storage, relay, backups
🌍 Peer nodes + cloud edge
replication, remote admin, federation, off-site backup
🔋 Battery-backed access layer
keep LAN alive through short power events
📚 Offline docs mirror
BOMs, manuals, procedures, public notices
🧠 Graceful degradation
local services survive even when WAN does not

🧬 VLAN / logical network layout

Logical network layout for an E-UBI local node: backhaul (fiber, cable, or LTE, with a primary and a failover path) lands on the router/firewall, which provides DHCP, VLANs, and the VPN/tunnel and enforces the admin boundary and policy. An 802.1Q trunk with PoE runs to the managed switch (labeled patch leads, spare ports), which breaks out the segmented access networks:

  • VLAN 10 · Public: guest Wi-Fi, kiosks, splash page
  • VLAN 20 · Staff: admin laptops, maintenance tools
  • VLAN 30 · Infrastructure: APs, switches, cameras, radios
  • VLAN 40 · Services: docs, Postgres, relay, backups

The service node (docs mirror, dashboard, backups, and sync queue) sits on the service LAN behind VLAN 40, kept separate from the public/guest path, the routing and admin boundary, and infrastructure management.

The objective is not enterprise complexity for its own sake. It is to make sure a public kiosk browser session or guest phone never lands on the same unrestricted network as management interfaces, storage shares, or backup targets.
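As a minimal sketch, this is how the four segments might be declared on a Linux-based router; the trunk interface name (eth1), VLAN IDs, and subnets below are illustrative assumptions, and most off-the-shelf routers expose the same layout through their own configuration UI.

# Sketch only: VLAN subinterfaces on a Linux-based router.
# Trunk interface (eth1) and addressing are illustrative assumptions.
$ sudo ip link add link eth1 name eth1.10 type vlan id 10
$ sudo ip link add link eth1 name eth1.20 type vlan id 20
$ sudo ip link add link eth1 name eth1.30 type vlan id 30
$ sudo ip link add link eth1 name eth1.40 type vlan id 40
$ sudo ip link set eth1.10 up && sudo ip addr add 10.0.10.1/24 dev eth1.10   # Public
$ sudo ip link set eth1.20 up && sudo ip addr add 10.0.20.1/24 dev eth1.20   # Staff
$ sudo ip link set eth1.30 up && sudo ip addr add 10.0.30.1/24 dev eth1.30   # Infrastructure
$ sudo ip link set eth1.40 up && sudo ip addr add 10.0.40.1/24 dev eth1.40   # Services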

Design rules worth being strict about

  • Keep the first node boring. Quiet hardware, one documented topology, one backup path, and services that a second maintainer can understand in an afternoon.
  • Use wired where it matters. The service node, switch uplinks, and fixed kiosks should prefer Ethernet so Wi-Fi capacity is reserved for mobile users.
  • Treat cached documentation as critical infrastructure. If the network can host posts but not the instructions needed to repair itself, it has failed the autonomy test.
  • Document the physical layout. Port maps, VLAN names, radio orientation, and power paths must live in the local docs mirror, not in one volunteer’s memory.

A realistic node bill of materials

The goal is not to chase enterprise hardware. It is to assemble a reliable local node from widely available parts that can be serviced, replaced, and powered responsibly. Use the lower-power starter configuration first, then scale into more radios, more storage, or a stronger compute platform only after measured demand justifies it.

Component | Example model / spec | Role | Typical draw / capacity | Status
Compute node | Raspberry Pi 5 (8 GB) or fanless Intel N100 mini-PC | Runs local web apps, sync workers, dashboard, auth broker, and maintenance tools | 8–20 W | Required
Primary SSD | 1 TB NVMe or SATA SSD, endurance-oriented if logs are heavy | Local databases, docs mirror, media cache, and snapshots | 1–4 W | Required
Backup SSD | 2 TB USB 3.2 external SSD | Nightly local backup target kept separate from the primary drive | 1–3 W | Recommended
Router / firewall | MikroTik hAP ax3, GL.iNet Flint 2, or small OPNsense box | DHCP, VLANs, WAN failover, VPN, and admin boundary enforcement | 8–15 W | Required
Managed switch | 8-port managed switch; PoE if powering APs or cameras | Structured LAN, infrastructure uplinks, future kiosk growth | 5–10 W base | Required
Access point | Wi-Fi 6 AP such as Ubiquiti U6+ or MikroTik cAP ax | Primary resident and kiosk access for indoor coverage | 9–13 W | Required
Backhaul fallback | LTE/5G modem or gateway with external antenna option | Failover path for alerts, remote admin, and deferred sync | 3–8 W average | Optional
Battery buffer / UPS | 12 V LiFePO₄ + regulated DC path or small online UPS | Ride through short outages and prevent abrupt filesystem loss | 20–40 Ah target | Recommended
Outdoor bridge radio | Ubiquiti NanoBeam / MikroTik Wireless Wire class link | Point-to-point building links or rooftop extension | 6–11 W each | Optional
Cabling + spares | Cat6 patch leads, labeled power cords, spare SFPs or injectors | Physical resilience; the cheapest parts often cause the most downtime | n/a | Recommended
Starter node power budget
~36–52 W steady-state
Pi 5 or N100 + one AP + one managed switch + router + SSD + light backup target. Add radio links and PoE devices only when usage proves the need.
Minimum battery target
4–8 h of graceful runtime
Enough time to keep docs, coordination tools, and status dashboards alive through short interruptions while shutting down non-critical services first.
Preferred enclosure pattern
Ventilated indoor rack shelf or wall cabinet
Short cable runs, dust control, labeled ports, and access for swaps matter more than making the first node look futuristic.
Storage policy
Primary + local backup + off-site sync
Do not confuse “SSD installed” with “backups exist.” A node should survive one dead disk and one bad day.
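One way to make "primary + local backup" concrete from day one is a small nightly snapshot job that runs before anything leaves the node. The script path, schedule, and retention below are assumptions to adapt locally; the mount points match the deployment steps later in this document.

# Sketch: nightly snapshot from the primary SSD to the backup SSD using
# hard-linked rsync snapshots. Path, schedule, and retention are assumptions.
# --- /usr/local/sbin/node-backup.sh ---
#!/usr/bin/env bash
set -euo pipefail
stamp=$(date +%F)
dest="/srv/node-backup/daily-${stamp}"
prev=$(ls -1d /srv/node-backup/daily-* 2>/dev/null | sort | tail -n 1 || true)
if [ -n "$prev" ]; then
  rsync -a --delete --link-dest="$prev" /srv/node-data/ "$dest/"
else
  rsync -a --delete /srv/node-data/ "$dest/"
fi
# keep the seven most recent snapshots
ls -1d /srv/node-backup/daily-* | sort | head -n -7 | xargs -r rm -rf
# --- schedule it ---
$ sudo chmod +x /usr/local/sbin/node-backup.sh
$ echo '30 2 * * * root /usr/local/sbin/node-backup.sh' | sudo tee /etc/cron.d/node-backup

Off-site sync then becomes a separate job that copies the newest snapshot upstream only when connectivity and policy allow, mirroring the deferred-federation model above.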

🗄️ Reference rack / cabinet layout

Rack layout for an E-UBI local node: front view of a compact indoor rack or wall cabinet, top to bottom:

  • 1U · Patch panel / cable landing
  • 1U · Router / firewall
  • 1U · Managed switch / PoE
  • 2U · Shelf · Service node (Pi 5 / N100, SSD, mirrored docs)
  • 1U · Shelf · Backup SSD + spares
  • Base · UPS / DC buffer / battery feed

Label every drop and uplink, leave spare PoE and LAN capacity, keep the service node reachable from the front, and keep the protected power path lowest in the cabinet.

A first cabinet should optimize for reachability, labeling, airflow, and easy swaps. “Looks clean” matters less than whether the next maintainer can replace a router, trace a port, and find the backup drive without guesswork.

The service stack people actually notice

Most users will never care what hardware sits in the cabinet. They will notice whether the docs load instantly, whether status pages stay up during a bad ISP day, and whether posts or records survive intermittent connectivity. The service stack should therefore bias toward low-complexity, high-visibility tools.

📚
Docs mirror
Blueprints, BOMs, and procedures
Offline-capable copy of manuals, node diagrams, maintenance forms, and public notices. This is the first service to protect.
📊
Status dashboard
Power, uptime, storage, WAN health
Simple dashboards reduce guesswork and make it obvious when a battery, disk, or uplink is drifting toward failure. A minimal status-snapshot sketch follows this list.
🧾
Forms + shared files
Local records and operational documents
Shared storage for inventory, work orders, event schedules, volunteer notes, and recovery runbooks.
🛰️
Sync worker
Queued federation and backup jobs
Pushes snapshots, cached content, and selected state to peer nodes or cloud edge only when connectivity and policy allow.
⚡
TheEtherNet relay
Local cache / relay for social layer
Keeps community interactions close to the users and reduces dependence on a single distant runtime or unstable WAN path.
🔐
Identity boundary
Session broker + admin controls
Small, audited auth layer for node operators and community services. Keep privileged access narrow and documented.
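As a minimal sketch of the signals the status dashboard card above refers to, a small script can publish a machine-readable snapshot that the dashboard or a peer node can poll. The output path, WAN probe target, and field names are illustrative assumptions.

# Sketch: write a status snapshot the dashboard can read.
# Output path, probe target, and field names are illustrative assumptions.
# --- /usr/local/sbin/node-status.sh ---
#!/usr/bin/env bash
set -u
mkdir -p /srv/node-data/status
wan="down"
ping -c 1 -W 2 1.1.1.1 >/dev/null 2>&1 && wan="up"
disk_used=$(df --output=pcent /srv/node-data | tail -n 1 | tr -dc '0-9')
load=$(cut -d ' ' -f 1 /proc/loadavg)
printf '{"time":"%s","wan":"%s","disk_used_pct":%s,"load1":%s}\n' \
  "$(date -Is)" "$wan" "${disk_used:-0}" "$load" > /srv/node-data/status/node.json
# --- schedule it every five minutes ---
$ echo '*/5 * * * * root /usr/local/sbin/node-status.sh' | sudo tee /etc/cron.d/node-status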

🛠️ Example supervision layout

There is no prize for exotic orchestration on the first node. PM2 or systemd is enough if the process list is explicit, logs are collected, and the restore procedure is written down.

// ecosystem.config.cjs — example local node processes
module.exports = {
  apps: [
    { name: 'eubi-web', script: 'server.js' },
    { name: 'docs-mirror', script: 'npm', args: 'run serve-docs' },
    { name: 'status-worker', script: 'workers/status.js' },
    { name: 'sync-worker', script: 'workers/sync.js' }
  ]
};
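If systemd is preferred over PM2, the same processes can be declared as units. The unit name, service user, and paths below are illustrative assumptions; repeat the pattern for each worker.

# Sketch: equivalent supervision with systemd instead of PM2.
# Unit name, service user, and working directory are illustrative assumptions.
$ sudo tee /etc/systemd/system/eubi-web.service <<'EOF'
[Unit]
Description=E-UBI local web app
After=network-online.target

[Service]
User=eubi
WorkingDirectory=/srv/node-data/app
ExecStart=/usr/bin/node server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now eubi-web.service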

Bench-to-field deployment sequence

Build the node on a bench first, verify every service on the local network, then move it into the cabinet or site only after backup, restore, and degraded-mode tests are complete. Field debugging is expensive; staged deployment is cheap.

1
Provision the base OS and pin the hostname
Start with a current Debian-based image or a small stable Linux distribution. Apply updates, enable SSH, and give the node an unambiguous name before any app work begins.
$ sudo apt update && sudo apt full-upgrade -y
$ sudo hostnamectl set-hostname eubi-node-01
$ sudo timedatectl set-ntp true
$ sudo systemctl enable ssh
$ sudo apt install -y curl git rsync smartmontools
2
Mount primary and backup storage with persistent paths
Use fixed mount points such as /srv/node-data and /srv/node-backup. Do not let backups depend on an operator remembering which USB drive is which.
$ sudo mkdir -p /srv/node-data /srv/node-backup
# /etc/fstab entries should use UUIDs, not /dev/sdX names
UUID=PRIMARY-SSD /srv/node-data ext4 defaults,noatime 0 2
UUID=BACKUP-SSD /srv/node-backup ext4 defaults,noatime,nofail 0 2
$ sudo mount -a && df -h /srv/node-data /srv/node-backup
3
Define network segments before adding public devices
Create at least three logical boundaries: public/client access, staff/admin access, and infrastructure services. Kiosks and guest phones should never sit on the same unrestricted network as the node’s management plane.
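A minimal sketch of that boundary as firewall policy on the router, assuming nftables and the VLAN subinterfaces from the layout section; interface names and the published service ports are illustrative assumptions.

# Sketch: the public VLAN may reach only the published web ports on the
# services VLAN; everything else toward staff/infra/services is dropped.
# Interface names and ports are illustrative assumptions.
$ sudo nft add table inet eubi
$ sudo nft add chain inet eubi forward '{ type filter hook forward priority 0; policy accept; }'
$ sudo nft add rule inet eubi forward 'iifname "eth1.10" oifname "eth1.40" tcp dport { 80, 443 } accept'
$ sudo nft add rule inet eubi forward 'iifname "eth1.10" oifname { "eth1.20", "eth1.30", "eth1.40" } drop'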
4
Start services locally and persist supervision state
Bring up the apps on the LAN first. Verify the docs mirror, dashboard, and database locally before introducing remote access, DNS, or public traffic.
$ npm ci
$ pm2 start ecosystem.config.cjs
$ pm2 save
$ pm2 status
$ curl -I http://127.0.0.1:3000
5
Add outbound remote administration and delayed sync
Use Cloudflare Tunnel or a documented VPN path for remote maintenance. Keep public exposure narrow and make sure the node still works if the admin tunnel is absent.
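A minimal Cloudflare Tunnel sketch follows; the tunnel name and hostname are illustrative assumptions, and a WireGuard VPN to a documented rendezvous host is an equally valid pattern. Either way, confirm the node's services answer locally with the tunnel stopped.

# Sketch: outbound-only admin path via Cloudflare Tunnel.
# Tunnel name and hostname are illustrative assumptions.
$ cloudflared tunnel login
$ cloudflared tunnel create eubi-node-01
$ cloudflared tunnel route dns eubi-node-01 node01.example.org
# map node01.example.org to the local service in ~/.cloudflared/config.yml (ingress rules)
$ cloudflared tunnel run eubi-node-01   # foreground; run under systemd or PM2 for production
# with the tunnel stopped, local services must still answer:
$ curl -I http://127.0.0.1:3000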
6
Test degraded mode before field handoff
Unplug WAN, reboot the node, disconnect the backup drive temporarily, and confirm that operators can still reach the dashboard, open the docs mirror, inspect local logs, and recover to a known-good state without improvising.
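A minimal sketch of the checks to run from a staff laptop on the LAN while the WAN is deliberately unplugged; the node address and service ports are illustrative assumptions matching the earlier examples.

# Sketch: degraded-mode verification from a LAN client, WAN unplugged.
# Node address and ports are illustrative assumptions.
$ ping -c 3 10.0.40.10                        # service node still reachable on the LAN
$ curl -I http://10.0.40.10:3000              # dashboard answers without WAN
$ curl -I http://10.0.40.10:8080              # docs mirror answers without WAN
$ ssh admin@10.0.40.10 'pm2 status && df -h /srv/node-data'   # supervision and storage look sane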

Keep the node alive with a real operations checklist

The hardest part of community infrastructure is not the first install. It is the quiet third month when a disk starts filling, a radio drifts, or a volunteer hands off the keys. A good operations checklist lowers the skill floor and makes maintenance predictable enough to share.

Daily / every shift

  • Check node uptime, WAN state, and battery or UPS status.
  • Confirm the docs mirror and dashboard load from a local client device.
  • Review open alerts: low disk, failed backup, tunnel down, or AP offline.
  • Log any unusual restarts, packet loss, or public complaints in the site notebook.

Weekly

  • Verify that one recent backup can actually be mounted or restored (a restore-check sketch follows this list).
  • Check storage growth, PM2 process health, and log rotation behavior.
  • Inspect client counts, AP saturation, and unusual top talkers on the LAN.
  • Confirm that public notices and maintenance docs are current in the mirror.
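A minimal sketch of what the weekly restore check can mean in practice, assuming the hard-linked snapshot layout from the backup sketch earlier; paths are illustrative.

# Sketch: prove the newest snapshot is readable and restorable, not just present.
# Paths follow the earlier backup sketch and are illustrative assumptions.
$ latest=$(ls -1d /srv/node-backup/daily-* | sort | tail -n 1)
$ ls "$latest" | head                                   # expected top-level directories present
$ diff -rq "$latest/docs" /srv/node-data/docs | head    # spot-check a known directory against live data
$ rsync -an "$latest/" /tmp/restore-drill/              # dry-run restore: no writes, full file walk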

Monthly

  • Apply firmware and OS security updates during a planned maintenance window.
  • Inspect cabling, port labels, strain relief, and cabinet airflow.
  • Test failover or degraded-mode operation with WAN intentionally removed.
  • Rotate documented credentials or recovery tokens according to local policy.

Quarterly / seasonal

  • Run a full restore drill from backup media to spare hardware if possible.
  • Check battery runtime against the original estimate and record drift.
  • Review whether the node needs more radios, more storage, or stronger compute.
  • Update the topology diagram so future volunteers inherit the real network, not the old one.

🚨 First-response command set

When something feels wrong, start with the small signals that reveal whether the problem is power, storage, routing, or app supervision.

$ uptime && free -h && df -h
$ ping -c 4 192.168.1.1
$ curl -I http://127.0.0.1:3000
$ pm2 status && pm2 logs --lines 40
$ sudo smartctl -H /dev/sda
$ journalctl -n 60 --no-pager

How the node connects to the wider E-UBI ecosystem

The community node sits between physical terminals and broader social coordination. It should be capable of serving local traffic on its own, but also ready to exchange selected data with peer nodes, remote backup targets, and TheEtherNet when the wider path is healthy. Once one node is stable, the next design question is no longer “can it run?” but “how do several of these cooperate without collapsing into one opaque central system?”

🔗 Federation path

📐 Device Blueprint layer
E-UBI terminals, kiosks, sensors, displays
📡 Local node
docs, dashboard, auth, storage, relay, sync queue
🧩 Peer nodes / regional mirrors
redundancy, collaboration, shared archives
⚡ TheEtherNet + remote edge
wider social layer, publication, off-site backup