open source · rust · mit · v0.9.2

the tunnel that
respects your domain.

self-hostable local-to-public tunnel in rust. one static binary on your vps, one on your laptop. real https://abc.nowhere.yourdomain.com in front of localhost:3000. no account wall, no ads, no middle tier.

binary 6 MB · external services 0 · latency <1 ms · license MIT
farkhad@mbp · ~/work/api · nowhere 3000
connected
$ nowhere 3000 --subdomain api
connecting to nowhere.example.com:7000 ...
control channel up, token ok
assigned https://api.nowhere.example.com
forwarding to localhost:3000

09:41:02  GET   /               200  12ms  24.73 KB
09:41:02  GET   /style.css      200   4ms   3.12 KB
09:41:04  POST  /api/auth       200  38ms    412 B
09:41:06  GET   /api/users/42   200  11ms   1.09 KB
09:41:09  GET   /missing        404   2ms    118 B

requests: 47 · in-flight: 0 · uptime: 00:12:34
the problem

A tunnel shouldn’t be a SaaS.

Showing something to a client shouldn’t require signing up for anything, paying anyone, or accepting a random subdomain that changes on every restart.

ngrok today

  • Account wall on the free tier: signup, verify email, keep a dashboard tab open.
  • Random subdomains on free: rotates on restart; breaks every webhook you set up.
  • Warning interstitial on public URLs: a click-through page before your app even loads.
  • From $10/month for a real domain: and the plan scales up fast past that.
  • Bandwidth caps / request caps: mid-demo throttling, 429s, connection drops.
  • Regional latency you don’t control: you route through their servers, wherever those are.

nowhere

  • You run the server. You are the account. Token in a toml file; that’s the whole auth story.
  • Pick any subdomain on your domain: --subdomain api, and it sticks across restarts.
  • Your URL, your response: no interstitial, no injected scripts, no branding.
  • $6/month for a box that handles thousands: the DigitalOcean droplet it was benched on. Fixed cost.
  • Bandwidth is whatever your VPS gives you: no artificial ceiling. Hit it, then pick a bigger box.
  • Pick your region, one SSH away: move to Frankfurt, Singapore, whatever. One binary.
how it works

Three parties, one persistent TCP connection.

A control channel between your laptop and your VPS. Inbound public HTTP gets framed down the tunnel, response gets framed back. Everything multiplexed over a single socket.

client
your laptop
nowhere 3000
server
your vps
:7000 :80 :443
control: handshake + keepalive
request: public HTTP pushed in
response: streamed back as frames
public HTTPS → *.nowhere.yourdomain.com → routed by Host header
architecture

The protocol. The server. The client.

Three small pieces. Each does one thing. Nothing leaves the machines you control.

the protocol

Length-prefixed binary frames.

Single persistent TCP socket. Each frame carries one byte of frame type, a big-endian u64 request id, and a big-endian u32 length, followed by the body. 16 MiB max body per frame; larger payloads are split.

Hello 0x00 · Welcome 0x01 · Reject 0x02 · Ping 0x03 · Pong 0x04 · NewRequest 0x10 · RequestBody 0x11 · RequestEnd 0x12 · ResponseStart 0x20 · ResponseBody 0x21 · ResponseEnd 0x22 · Close 0x30
the server

One binary. Systemd. Done.

Listens on three ports: control, public HTTP, public HTTPS. Parses the Host header, routes by wildcard subdomain, hands the request down the right client socket.

listen   :7000 control
listen   :80 :443 public
route    *.nowhere.example.com
auth     shared-secret tokens
runtime  tokio + rustls
deploy   systemd unit
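Routing by wildcard subdomain boils down to one string operation on the Host header. A hypothetical sketch, assuming the server compares against a configured base domain (`subdomain_for_host` is an illustrative name, not the actual code):

```rust
// Illustrative sketch of Host-header routing: given the configured
// wildcard base domain, extract the client-owned subdomain label.

fn subdomain_for_host(host: &str, base_domain: &str) -> Option<String> {
    // strip an optional :port, lowercase for a case-insensitive match
    let host = host.split(':').next()?.to_ascii_lowercase();
    let suffix = format!(".{}", base_domain.to_ascii_lowercase());
    let label = host.strip_suffix(&suffix)?;
    // exactly one label: "api" routes, "a.b" and the bare domain do not
    if label.is_empty() || label.contains('.') {
        return None;
    }
    Some(label.to_string())
}

fn main() {
    assert_eq!(
        subdomain_for_host("api.nowhere.example.com:443", "nowhere.example.com"),
        Some("api".to_string())
    );
    // the bare base domain maps to no client
    assert_eq!(
        subdomain_for_host("nowhere.example.com", "nowhere.example.com"),
        None
    );
    println!("routing ok");
}
```

A `None` here would correspond to the server having no client socket to hand the request to.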
the client

A single command. Optional TUI.

Dials the server, authenticates, claims a subdomain, opens local TCP connections per incoming request. Has a live ratatui dashboard if you want to watch frames fly.

dial       server:7000
handshake  token + subdomain
forward    localhost:<port>
dashboard  ratatui --tui
binary     ~6 MB stripped
os         linux, macos, bsd
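The handshake step above sends a small JSON payload down the control channel. The real client serializes it with serde_json; this dependency-free sketch builds the same shape by hand and assumes the token contains no characters that need JSON escaping:

```rust
// Hand-built 0x00 Hello payload, matching the fields shown in the
// protocol section. Illustrative only: the actual client uses
// serde_json, which also handles escaping.

fn hello_payload(token: &str, subdomain: &str) -> String {
    format!(
        r#"{{"version": 1, "token": "{}", "subdomain": "{}"}}"#,
        token, subdomain
    )
}

fn main() {
    let body = hello_payload("your-secret", "api");
    assert!(body.starts_with(r#"{"version": 1"#));
    assert!(body.contains(r#""subdomain": "api""#));
    println!("{}", body);
}
```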
the stack

Every crate in the binary, on one screen.

A small dependency footprint is a feature. Two crates are doing the heavy lifting; the rest is glue you could read on a plane.

runtime core

tokio + rustls

The load-bearing dependencies. tokio handles I/O, timers, task scheduling. rustls terminates TLS on the public side without pulling in OpenSSL.

tokio           1.40, full features
rustls          0.23, aws-lc-rs backend
tokio-rustls    0.26, async glue
rustls-pemfile  2, cert loading
bytes           1.7, zero-copy buffers
cli + config

clap + serde + toml

The same three crates every Rust CLI uses for a reason. Flags, config files, JSON for the handshake payload. Nothing fancy.

clap                4.5, derive macros
serde               1, derive
serde_json          1, handshake
toml                0.8, server config
anyhow / thiserror  error plumbing
observability

tracing + ratatui

Structured logs via tracing, which journalctl renders nicely out of the box. The --tui dashboard is ratatui + crossterm.

tracing             0.1, structured
tracing-subscriber  0.3, env-filter
ratatui             0.28, --tui
crossterm           0.28, terminal I/O
rand                0.8, subdomain gen
15 direct dependencies · ~180 transitive · ~4k lines of first-party Rust · no build-script surprises
protocol

The wire, byte for byte.

A complete handshake and request, annotated. If you ever wanted to write a client in Go, Python, or Zig, this is all you need.

# one frame on the wire

 offset   size   field       notes
 0        1      ty          frame type, see tables below
 1        8      req_id      big-endian u64; 0 for control frames
 9        4      length      big-endian u32; body byte count
 13       len    body        variable payload

# max body per frame: 16 MiB
# larger payloads split into multiple body frames
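The layout above is small enough to hand-roll. A minimal sketch in plain Rust, using only the standard library (function names are illustrative, not the crate's actual API):

```rust
// Frame layout from the table above: 1-byte type, big-endian u64
// request id, big-endian u32 length, then the body.

const HEADER_LEN: usize = 13;

fn encode_frame(ty: u8, req_id: u64, body: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(HEADER_LEN + body.len());
    buf.push(ty);                                              // offset 0: ty
    buf.extend_from_slice(&req_id.to_be_bytes());              // offset 1: req_id
    buf.extend_from_slice(&(body.len() as u32).to_be_bytes()); // offset 9: length
    buf.extend_from_slice(body);                               // offset 13: body
    buf
}

fn decode_header(buf: &[u8]) -> Option<(u8, u64, u32)> {
    if buf.len() < HEADER_LEN {
        return None; // need more bytes before the header can be parsed
    }
    let ty = buf[0];
    let req_id = u64::from_be_bytes(buf[1..9].try_into().ok()?);
    let len = u32::from_be_bytes(buf[9..13].try_into().ok()?);
    Some((ty, req_id, len))
}

fn main() {
    // a 0x00 Hello with req_id 0, as in the annotated handshake below
    let frame = encode_frame(0x00, 0, br#"{"version": 1}"#);
    let (ty, req_id, len) = decode_header(&frame).unwrap();
    assert_eq!((ty, req_id, len), (0x00, 0, 14));
    println!("ok: {} byte frame", frame.len());
}
```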

control          per-request
 0x00 Hello      0x10 NewRequest
 0x01 Welcome    0x11 RequestBody
 0x02 Reject     0x12 RequestEnd
 0x03 Ping       0x20 ResponseStart
 0x04 Pong       0x21 ResponseBody
                 0x22 ResponseEnd
                 0x30 Close
# client -> server (0x00 Hello)

0x00                                            # ty
00 00 00 00 00 00 00 00                        # req_id = 0
00 00 00 48                                    # length = 72
{"version": 1, "token": "...", "subdomain": "api"}

# server -> client (0x01 Welcome)

0x01                                            # ty
00 00 00 00 00 00 00 00                        # req_id = 0
00 00 00 5c                                    # length = 92
{"subdomain": "api", "public_url": "https://api.nowhere.example.com"}

# on reject the server sends 0x02 Reject with a reason body
# and closes the socket.
# server -> client, request head (0x10 NewRequest)

0x10                                            # ty
00 00 00 00 00 00 4e 21                        # req_id = 0x4e21
00 00 00 ba                                    # length = 186
GET /api/users/42 HTTP/1.1\r\n
Host: api.nowhere.example.com\r\n
Accept: application/json\r\n
X-Forwarded-For: 93.184.216.34\r\n
\r\n

# zero or more 0x11 RequestBody frames with same req_id
# then 0x12 RequestEnd (length = 0)
# client -> server, response start (0x20)

0x20                                            # ty
00 00 00 00 00 00 4e 21                        # req_id = 0x4e21
00 00 00 5c                                    # length = 92
HTTP/1.1 200 OK\r\n
Content-Type: application/json\r\n
Content-Length: 1124\r\n
\r\n

# 0x21 ResponseBody frames with the JSON payload,
# split if the body is larger than 16 MiB per frame.
# finally 0x22 ResponseEnd with length = 0.

# either side can send 0x30 Close(req_id) at any time.
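The 16 MiB split rule is the only non-obvious part of the walkthrough. A sketch of how a sender might chunk a large body into per-frame pieces (illustrative, not the crate's code; `MAX_BODY` matches the documented cap):

```rust
// Split a payload into body-frame-sized chunks, per the documented
// 16 MiB-per-frame cap. Each chunk would go out as one 0x11 or 0x21
// frame sharing the same req_id.

const MAX_BODY: usize = 16 * 1024 * 1024; // 16 MiB

fn split_into_frames(body: &[u8]) -> Vec<&[u8]> {
    body.chunks(MAX_BODY).collect()
}

fn main() {
    // a small response fits in one frame
    let small = vec![0u8; 1124];
    assert_eq!(split_into_frames(&small).len(), 1);

    // one byte over the cap spills into a second frame
    let big = vec![0u8; MAX_BODY + 1];
    let frames = split_into_frames(&big);
    assert_eq!(frames.len(), 2);
    assert_eq!(frames[1].len(), 1);
    println!("split ok");
}
```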
one request, three views

The same GET /api/users/42, seen from three places.

What your laptop sees. What the wire sees. What the public URL visitor sees. All one logical request.

on your laptop

localhost:3000

09:41:06.018 GET /api/users/42
host:      localhost:3000
accept:    application/json
x-fwd-for: 93.184.216.34

09:41:06.029 200 OK
content-type: application/json
content-length: 1124

09:41:06.030 handled in 11.2 ms
in the tunnel

frame-by-frame

+0.0   ms 0x10 NewRequest     req_id=0x4e21
              body_len=186 B
+0.3   ms 0x12 RequestEnd     req_id=0x4e21

+11.5  ms 0x20 ResponseStart  req_id=0x4e21
              body_len=92 B
+11.7  ms 0x21 ResponseBody   body_len=1124 B
+11.8  ms 0x22 ResponseEnd    req_id=0x4e21

total   12.0 ms (overhead ~0.8 ms)
public URL

api.nowhere.example.com

09:41:06.015 GET /api/users/42
host:       api.nowhere.example.com
user-agent: curl/8.4.0

09:41:06.028 200 OK
content-type: application/json
server:       nowhere/0.9.2

{"id": 42, "name": "ada", ...}

total 13.1 ms (tls + tunnel)
benchmarks

Numbers from a real box.

Not a lab. A $6/month DigitalOcean droplet running Debian 12, measured with wrk from a second box in the same region.

median latency overhead · <1 ms
Extra time the tunnel adds vs hitting the origin directly. P99 under 3 ms on the same box.

sustained throughput
Streaming a large response through the tunnel. NIC-bound on anything smaller than a 10 Gbit box.

concurrent tunnels per server
Soft limit configured via max_clients. Memory scales linearly at roughly 6 KB per idle client.

static binary · ~6 MB
Release build, LTO thin, symbols stripped. Ships as a single file, no runtime, no shared libs.

cold start
From exec to first accepted connection. Useful when you run the client in a hot-reload loop.

external services · 0
No database. No broker. No cloud. Just your VPS and your laptop. The whole thing is tokio + rustls.
latency histogram
1,000,000 requests through a hot tunnel
buckets in ms · y-axis log-ish
[bar chart: latency buckets from <0.5 ms to >64 ms, bar heights on a log-ish scale]

p50 0.8 ms · p90 2.1 ms · p99 6.4 ms · p99.9 24 ms · max 214 ms

measured on a $6/mo DO droplet (1 vCPU, 1 GB, Debian 12) with wrk from a peer box in the same region. your mileage will vary; these aren’t lab numbers, but they’re also not the kind of load test a marketing team would write.

changelog

What shipped recently.

A working changelog in the repo is underrated. Here are the last four releases, most recent first.

0.9.2 latest
2026-04-12
  • Request-body streaming no longer buffers the full body on the client before forwarding.
  • Fixed a wakeup loop on idle control connections that kept one core at ~3% forever.
  • TUI now renders per-route p50/p99 histograms, not just counters.
  • Configurable idle_timeout_seconds, defaults to 300.
0.9.1 2026-03-18
  • New --log-format json flag for shipping structured logs off-box.
  • Better error messages on cert load failures (path, permission, format).
  • Dropped a stray println! from the accept loop. Sorry.
0.9.0 2026-02-02
  • Ratatui dashboard landed behind --tui.
  • Breaking: config file moved from /etc/nowhere.toml to /etc/nowhere/server.toml.
  • Shipped a hardened systemd unit (CAP_NET_BIND_SERVICE only, unprivileged user).
  • Binary size down ~22% after enabling LTO thin + strip symbols.
0.8.0 2025-11-11
  • Rewrote the frame codec on top of bytes::BytesMut. ~18% throughput improvement.
  • Per-token subdomain reservation across reconnects.
  • First public release with documented wire protocol.
features

Everything in the box.

What’s shipping in 0.9.2 today, what’s coming in 1.0, and what’s on the list after that.

shipped

HTTP tunneling

Full HTTP/1.1 request and response forwarding through a framed binary channel.

shipped

HTTPS on the public side

Bring your own wildcard cert. Rustls on the server, zero OpenSSL.

shipped

Wildcard subdomain routing

Any *.nowhere.yourdomain.com lands on the right client via Host header.

shipped

Custom subdomains

Pick your own slug with --subdomain api. Sticks across restarts.

shipped

Token auth

Shared-secret tokens in a toml file. Rotate by editing and sending SIGHUP.

shipped

Ratatui dashboard

Live TUI with --tui. Watch frames, in-flight requests, per-route counts.

shipped

Systemd unit

Ships a hardened unit file. CAP_NET_BIND_SERVICE for :80 :443, unprivileged user.

shipped

Request streaming

Bodies stream chunk-by-chunk. Works fine for 4 GB uploads over slow networks.

next · 1.0

Automatic Let’s Encrypt

DNS-01 challenge loop built in. One less cron job on the VPS.

next · 1.0

WebSocket tunneling

Full upgrade pass-through. Dev servers with HMR just work.

next · 1.0

Prometheus metrics

Per-client, per-subdomain counters and histograms on a configurable port.

next · 1.0

Request replay

Click any past request in the TUI, replay it against localhost. Webhook debugging gold.

next · 1.0

Per-token subdomain ACLs

Whitelist which subdomains a given token can claim. Multi-user ready.

next · 1.0

Rate limiting

Token bucket per client, per subdomain, or per source IP. Configurable.

soon · 1.x

Control-channel TLS

Wrap the client-to-server socket in rustls too. Plaintext on the control channel goes away once this lands.

soon · 1.x

Replay-inspector UI

Browser-based dashboard, opens on localhost only, renders requests as nice JSON.

later

TCP and raw-socket tunneling

Not just HTTP. Pipe arbitrary TCP (ssh, postgres, redis) through named ports.

later

Cluster mode

Run the server on multiple VPSes behind a round-robin DNS. For when one box isn’t enough.

later

QUIC transport

Swap TCP for QUIC on the control channel. Better reconnection, lower head-of-line blocking.

soon · 1.x

Per-route header rewrites

Strip or inject headers on the way in or out. Useful for bots and preview environments.

shipped

Config reload on SIGHUP

Edit the toml, send a signal, new tokens live in milliseconds. No restart, no drops.
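The rate-limiting card above names a token bucket per client, subdomain, or source IP. As a hedged illustration of the mechanism only (made-up names, not nowhere's implementation; elapsed time is passed in explicitly so the logic stays clock-free and testable):

```rust
// Classic token bucket: a burst capacity that refills at a fixed
// rate. One token is spent per accepted request.

struct TokenBucket {
    capacity: f64,       // maximum burst size, in tokens
    tokens: f64,         // tokens currently available
    refill_per_sec: f64, // steady-state refill rate
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    // `elapsed_secs` is the time since the last call; injecting it
    // keeps the example deterministic without a real clock.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec)
            .min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // burst of 2 requests, refilling 1 token per second
    let mut bucket = TokenBucket::new(2.0, 1.0);
    assert!(bucket.try_acquire(0.0));
    assert!(bucket.try_acquire(0.0));
    assert!(!bucket.try_acquire(0.0)); // burst exhausted
    assert!(bucket.try_acquire(1.0));  // one second later: refilled
    println!("rate limit ok");
}
```

Keying a map of these buckets by token, subdomain, or source IP would give the three scopes the card describes.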

security & trust

You are the trust boundary.

No middle tier means no third-party telemetry, no feature flags pulled from a remote config, no silent SDK updates. What you install is what runs.

End-to-end on your domain

Your certs, your TLS termination. Nowhere else sees plaintext bytes.

Token auth, rotatable

Shared secrets in a toml file. Edit, reload, done. No OAuth dance.

Zero telemetry. Ever.

No analytics beacons. No crash reporter. No feature flags calling home.

Source audited by you

Small codebase. ~4k lines of Rust. Read it in a lunch break.

MIT licensed, forkable

Use it, fork it, strip the name off, ship it internally. That’s the deal.

Reproducible builds

Pinned Cargo.lock, pinned rustc version in rust-toolchain.toml. Two boxes, same SHA256.

Hardened systemd unit

Unprivileged user, CAP_NET_BIND_SERVICE only, no setuid, no new privileges.

Small attack surface

No template engine, no dynamic linking, no plugin loader. What runs is what you compiled.

Graceful shutdown

SIGTERM drains inflight requests up to 10 seconds. No dropped connections on deploys.

report vulnerabilities to security@frkhd.com · pgp key on the repo
compared to

How it stacks up.

Honest comparison with the tools in the same lane. Each has a different sweet spot; this is what nowhere trades for.

feature nowhere ngrok bore rathole frp cloudflared localtunnel
self-hostable ~
wildcard http routing ~
custom persistent subdomain ~ ~ ~
https public side ~ ~
byo wildcard cert ~
no account wall ~
no interstitial page ~
zero telemetry
single static binary
rust, memory safe
live TUI dashboard ~
mit license ~
quickstart

Three files, six minutes.

Point a wildcard DNS record at your VPS, drop in a config, start the systemd unit, run the client on your laptop.

# on your vps, one-time install
# (you need rustup and a recent rustc)

git clone https://github.com/f4rkh4d/nowhere.git
cd nowhere
cargo install --path .

# point *.nowhere.yourdomain.com at this box, then:
sudo mkdir -p /etc/nowhere
sudo cp deploy/nowhere-server.toml /etc/nowhere/server.toml
sudo $EDITOR /etc/nowhere/server.toml   # set domain and token

sudo cp deploy/systemd.service /etc/systemd/system/nowhere.service
sudo systemctl enable --now nowhere
sudo journalctl -fu nowhere
# on your laptop (from the same checkout), once
cargo install --path .

# one-time login, writes ~/.config/nowhere/config.toml
nowhere login nowhere.example.com:7000 --token "your-secret"

# expose something
nowhere 3000
# -> https://abc123.nowhere.example.com

# sticky subdomain
nowhere 3000 --subdomain api

# live TUI dashboard (press q to quit)
nowhere 3000 --tui
; /etc/systemd/system/nowhere.service

[Unit]
Description=nowhere tunnel server
After=network.target

[Service]
Type=simple
User=nowhere
Group=nowhere
ExecStart=/usr/local/bin/nowhere-server --config /etc/nowhere/server.toml
Restart=on-failure
RestartSec=5s
LimitNOFILE=65536
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
# /etc/nowhere/server.toml

[server]
domain         = "nowhere.example.com"
control_port   = 7000
http_port      = 80
https_port     = 443
bind           = "0.0.0.0"
# uncomment once you have a wildcard cert:
# cert_path = "/etc/letsencrypt/live/nowhere.example.com/fullchain.pem"
# key_path  = "/etc/letsencrypt/live/nowhere.example.com/privkey.pem"

[auth]
tokens = ["replace-me-with-a-real-secret"]

[limits]
max_clients              = 100
max_concurrent_requests  = 10000
idle_timeout_seconds     = 300
use cases

Five ways people actually use it.

Less theoretical than “self-host a tunnel”. Here’s what the traffic looks like on my own VPS.

Show a client your dev server

Demo the branch you’re on in 30 seconds. Share a real URL, not a screen share. They click, it loads, you breathe.

demos

Webhook testing for Stripe / GitHub

Pick a subdomain per branch. The URL stays the same across restarts. Stripe test events keep hitting the right place.

webhooks

Bypass corporate NAT for a Pi

Your Raspberry Pi is in a coffee shop somewhere. nowhere dials out over TCP, no port forwarding, no router config.

iot

Pop-up service, no deploy

A tiny tool you wrote this afternoon. Share it with three friends. Kill it when you’re done. No container image, no CI.

experiments

Telegram / Discord bots during dev

Bot frameworks love a public webhook. Route Telegram, Slack, Discord through your laptop; no polling loops, no tunneling hacks.

bots

Remote pair-programming via HTTP

Collaborative editors and live preview tools that assume a real URL. One subdomain per session. Tears down when you close the window.

collab
cli reference

The whole flag surface on one page.

Small enough to fit in a table, which is how you know it’s the right size. Subcommands, flags, defaults, and what each one actually does.

command / flag                    default      does
nowhere login <host:port>                      Writes credentials to ~/.config/nowhere/config.toml. One time.
  --token <string>                required     Shared secret to authenticate with the server.
  --insecure                      false        Skip TLS verification on the control channel. For dev only.
nowhere <port>                                 Exposes localhost:<port> on a random subdomain.
  --subdomain <slug>              random       Pick a sticky subdomain. Reclaimed on reconnect with the same token.
  --tui                           off          Launch the live dashboard instead of text logs.
  --log-format <human|json>       human        Switch to JSON for log shippers.
  --host <server>                 from config  Override the server from config (useful for multiple VPSes).
  --idle-timeout <secs>           inherited    Disconnect if no activity for this long.
nowhere-server --config <path>    required     Starts the server with a toml config at the given path.
  --log-format <human|json>       human        Same switch, server-side.
  --version                                    Prints the version string and exits.
SIGHUP                                         Reload the config file without dropping connections.
SIGTERM                                        Graceful shutdown. Existing requests drain up to 10s.
what the logs look like

Structured, greppable, out of the box.

nowhere uses tracing with a default human format. Pipe into --log-format json if you want to ship to Loki / Vector / whatever.

server · journalctl -fu nowhere
apr 12 09:40:58 vps nowhere-server[2141]: INFO  server listening on 0.0.0.0:7000
apr 12 09:40:58 vps nowhere-server[2141]: INFO  public listener on :80 and :443
apr 12 09:40:58 vps nowhere-server[2141]: INFO  loaded wildcard cert, valid until 2026-07-01
apr 12 09:41:01 vps nowhere-server[2141]: INFO  client connected peer=93.184.x.x:53222
apr 12 09:41:01 vps nowhere-server[2141]: INFO  handshake ok token=abc... subdomain=api
apr 12 09:41:02 vps nowhere-server[2141]: INFO  assigned https://api.nowhere.example.com
apr 12 09:41:02 vps nowhere-server[2141]: DEBUG req=0x4e21 GET / status=200 dur=12ms
apr 12 09:41:04 vps nowhere-server[2141]: DEBUG req=0x4e22 POST /api/auth status=200 dur=38ms
apr 12 09:41:09 vps nowhere-server[2141]: DEBUG req=0x4e25 GET /missing status=404 dur=2ms
apr 12 09:42:38 vps nowhere-server[2141]: INFO  client disconnected graceful=true uptime=97s
client · nowhere 3000 --tui
nowhere v0.9.2  |  api.nowhere.example.com  |  localhost:3000
────────────────────────────────────────────────────────────
status   connected  ·  uptime 00:12:34
in-flight 0  ·  total 47  ·  rps 0.21
p50 0.9ms   p99 6.4ms   peak 214ms

────────────────────────────────────────────────────────────
recent requests
  09:41:02  GET   /                200  12ms
  09:41:02  GET   /style.css       200   4ms
  09:41:04  POST  /api/auth        200  38ms
  09:41:06  GET   /api/users/42    200  11ms
  09:41:09  GET   /missing         404   2ms

[q]uit   [p]ause   [/]filter   [r]eplay last
under the hood

Small pieces, composed carefully.

What’s in the binary.

Nowhere is a single crate with two bin targets, nowhere-server and nowhere. The protocol is a small module both share. The rest is glue.

  • tokio for the async runtime, full features.
  • rustls for TLS termination on the public side.
  • bytes for zero-copy buffers through the frame pipeline.
  • clap for the CLI, serde + toml for config.
  • ratatui + crossterm for the --tui dashboard.
  • Hand-rolled TLV framing. No protobuf, no msgpack, no codegen.
  • systemd-native on the VPS side. No wrapper scripts.

Total release binary: roughly 6 MB, stripped, no shared libs. Two dependencies are doing most of the work (tokio, rustls); the rest is glue you could read in an afternoon.

round trip: world +0.0 ms → server +0.2 ms → client +0.4 ms → app +11.1 ms → client +11.4 ms → server +12.0 ms
roadmap

Where it’s headed.

Not a promise, a plan. 1.0 is about closing the last few ngrok-parity gaps; 1.x and beyond is interesting territory.

now · 0.9.2

HTTP + HTTPS tunneling · shipped
Custom subdomains · shipped
Token auth · shipped
Ratatui TUI · shipped
Systemd hardened unit · shipped
Wire protocol v1 documented · shipped

soon · 1.x

Request replay in the TUI · v1.1
Browser-based inspector · v1.2
Per-route header rewrites · v1.2
Compression hint on text bodies · v1.3
Backpressure per req_id · v1.3
Webhook signature verification · v1.4

someday

Raw TCP tunneling · maybe
Cluster mode (multi-VPS) · maybe
QUIC on the control channel · research
ebpf-based per-conn stats · research
Windows builds as first-class · if demand
Plugin interface for routers · carefully
backstory

Why this exists.

Early 2024. I was testing a Stripe webhook for the hundredth time. ngrok rotated my subdomain again, the webhook broke, and I sat there staring at a dashboard I didn’t want an account with. The free tier was fine, honestly, until it wasn’t.

I wrote the first version of nowhere that weekend. It was maybe 400 lines of Rust, it was ugly, it worked. I kept improving it. Wildcard routing went in that summer. The ratatui dashboard was a cold-December thing. Rustls landed in v0.4 after I gave up trying to do TLS on my own.

Two years later, 0.9.2 is what I actually use every day. It sits on a $6 droplet, routing my own laptop to a handful of subdomains I own. 1.0 is close. It needs Let’s Encrypt automation and WebSocket upgrades and a few more polish passes.

Still not done. Probably never will be.

started · Feb 2024 · weekend hack
first public release · Nov 2025 · v0.8.0, docs day
1.0 target · Q3 2026 · soft date
project shape

Two years of commits, one box.

A rough shape of the repo. Numbers are approximate; the ones that matter (bytes in the binary, lines in the crate) are exact.

47 total commits · from first to 0.9.2
4,120 lines of rust · first-party, excluding tests
1,640 lines of tests · integration + unit
15 direct crates · 180-ish transitive
6.1 MB release binary · stripped, LTO thin
24 github stars · small and loved
12 closed PRs · from 3 contributors
0 telemetry calls · zero, forever
commit cadence
Commits per month, 2024-01 to 2026-04 · 47 commits over 28 months
[bar chart: heavy months (3+), steady (2), light (1), quiet (0)]

Grab it. Break it. Open an issue.

Small project, small community, real people. Three contributors so far; if you send a good PR you become the fourth.

3 contributors 47 commits 12 closed PRs MIT license
cargo install nowhere
principles

What nowhere refuses to do.

Explicit non-goals are as important as features. Here’s the stuff this project will never grow into, by design.

No telemetry, no exceptions.

Not even anonymous usage counts. Not even crash reporting. If we needed to know, we’d ask; if we didn’t ask, we don’t need to know.

No account system on the server.

Tokens in a toml file. That’s the whole user model. If you want SSO, put nowhere behind an SSO proxy.

No bundled CDN / WAF.

Cloudflare and bunny already exist and do that better. nowhere is the transport, not the edge.

No forced upgrade path.

Old clients keep working until we hit a wire-protocol break, which is rare and announced a version ahead.

No plugin marketplace.

A plugin interface will arrive, carefully, in a future version. A marketplace will not. Fork the binary instead.

No business tier.

There’s one tier. It’s called “the MIT license”. If you need support, email and we’ll figure it out.

faq

The questions people keep asking.

If your question isn’t here, open a discussion on GitHub. I read them all.

Why not just use cloudflared?
cloudflared is great if you’re already inside Cloudflare’s world and you want their CDN, WAF, and Access in front of everything. nowhere is the opposite: no third party, your box, your certs. Use cloudflared when you want Cloudflare. Use nowhere when you want nothing between you and the client.
What about Windows?
It builds on Windows (tokio works fine there) but systemd deployment docs are linux-first. The client side is fully cross-platform. Windows-native daemon wrappers are on the someday list, pinned to user demand.
Can I use my own TLS cert?
Yes. That’s the intended path. Drop a wildcard cert at cert_path / key_path in server.toml and nowhere-server does TLS termination with rustls. Automatic Let’s Encrypt is landing in 1.0.
Can multiple clients share a subdomain?
Not today. Each subdomain maps to exactly one client connection. If the same client reconnects, it can reclaim the subdomain with the same token. v1.x is likely to add round-robin between clients holding the same name.
What’s the protocol spec?
Length-prefixed binary frames over a single TCP socket. One byte of frame type, u64 request id, u32 length, variable body. Full table in docs/protocol.md in the repo. Writing a port to another language is a weekend project.
Is there rate limiting?
Not in 0.9.2. The only knobs are max_clients, max_concurrent_requests, and idle_timeout_seconds. Token-bucket rate limiting per client / per subdomain / per source IP is in the 1.0 milestone.
What happens when the client disconnects?
The server closes the public subdomain immediately; visitors get a 502 until the client reconnects. On reconnect with the same token and subdomain, the mapping is restored within the next TCP handshake.
How do I monitor it?
Today, journalctl -u nowhere and the --tui dashboard. In 1.0 there’s a Prometheus /metrics endpoint with per-client, per-subdomain counters and latency histograms.
Does it inspect traffic?
No. It frames request and response bodies through; it doesn’t parse or buffer whole payloads. HTTPS termination happens at the server, so the server does see plaintext, but nothing is logged beyond method, path, status, and bytes.
Does it work for WebSockets?
Not in 0.9.2, which is honestly the biggest gap before 1.0. The upgrade branch is open on the repo. If you need it today, pin v0.x and follow along.
Can I run multiple nowhere-servers behind one DNS?
Not cleanly; a subdomain is owned by one server at a time. Cluster mode is tracked for someday and will need a coordination store (etcd? a small gossip layer?). For now, run one server per VPS and route DNS accordingly.
How do I contribute?
Open an issue before a big PR, keep the binary size under 10 MB, add a test, don’t add a dependency unless you’ve justified it in the PR description. The bar is pretty simple: would you be comfortable reading this in six months?
Can I run it on Fly, Railway, or similar?
Yes, with caveats. You need a VM that lets you bind :80 and :443 and attach a wildcard domain. Fly works; most platforms-as-a-service that terminate TLS for you don’t. A $6 VPS is the simplest story.
What about IPv6?
tokio listens on dual-stack by default when bind = "::". The default config binds 0.0.0.0; flip it if your box has a v6 address and you want both.
How big is the server process at rest?
Around 5-7 MB RSS with zero clients connected on linux-x86_64. Each active tunnel adds roughly 6 KB plus whatever the kernel TCP buffers need.
Why Rust and not Go?
Go would be a perfectly reasonable choice. Rust is the one I reach for first, and rustls without OpenSSL was the deciding factor for single-binary deploys on minimal VPS images.
v0.9.2 · open source · mit

Your localhost deserves a real URL.

Six minutes to set it up. Six megabytes on disk. Six dollars a month for a box that keeps it running.

start the quickstart read the source
no signup · no credit card · no email · nothing