MASCOM Deploy

The deployment system for 122 live domains.

File → R2 → Live. That's it.

file → R2 {slug}/{path} → mascom-edge serves it

1 command. 0 versions. 122 domains.

Usage

# Deploy a file
python3 mascom_edge_deploy.py index.html --hostname golflink.cc

# Deploy to a subpath
python3 mascom_edge_deploy.py app.js --hostname golflink.cc --path /app.js

# Deploy multiple files at once
python3 mascom_edge_deploy.py --batch index.html:/ app.js:/app.js --hostname golflink.cc

# Dry run (show what would happen)
python3 mascom_edge_deploy.py index.html --hostname golflink.cc --dry-run

# Skip validation (emergency)
python3 mascom_edge_deploy.py index.html --hostname golflink.cc --force

How It Works

Step          What Happens
1. Validate   JS syntax check, hazard pattern scan, HTML size check
2. Upload     Direct CF R2 API PUT to {slug}/{path}
3. Verify     curl the live URL, confirm HTTP 200 + content
4. Sync       Best-effort GravNova mirror (sovereign backup)
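
A minimal sketch of that pipeline, assuming R2's S3-compatible API via boto3. The endpoint and credential env vars, bucket name, hazard pattern list, size limit, and GravNova URL are illustrative stand-ins, not the real configuration:

import os, re, subprocess
import boto3, requests

R2 = boto3.client(
    "s3",
    endpoint_url=os.environ["R2_ENDPOINT"],           # https://<account>.r2.cloudflarestorage.com
    aws_access_key_id=os.environ["R2_ACCESS_KEY"],
    aws_secret_access_key=os.environ["R2_SECRET_KEY"],
)
BUCKET = os.environ.get("R2_BUCKET", "mascom-edge")   # assumed bucket name
HAZARDS = [r"document\.write\(", r"\beval\("]         # illustrative scan list

def validate(local_path, body):
    # 1. Validate: JS syntax check, hazard pattern scan, HTML size check
    if local_path.endswith(".js"):
        subprocess.run(["node", "--check", local_path], check=True)
    text = body.decode("utf-8", errors="replace")
    for pat in HAZARDS:
        if re.search(pat, text):
            raise ValueError(f"hazard pattern matched: {pat}")
    if local_path.endswith(".html") and len(body) > 1_000_000:  # assumed limit
        raise ValueError("HTML payload over size limit")

def deploy_file(local_path, hostname, remote_path="/index.html"):
    slug = hostname.replace(".", "_")                 # golflink.cc -> golflink_cc
    key = slug + remote_path                          # flat key, no version prefix
    body = open(local_path, "rb").read()
    validate(local_path, body)
    # 2. Upload: direct PUT to {slug}/{path}
    # (the real tool presumably also sets an explicit Content-Type)
    R2.put_object(Bucket=BUCKET, Key=key, Body=body)
    # 3. Verify: fetch the live URL, confirm HTTP 200 + matching content
    live = requests.get(f"https://{hostname}{remote_path}", timeout=10)
    if live.status_code != 200 or live.content != body:
        raise RuntimeError(f"verification failed for {key}")
    # 4. Sync: best-effort GravNova mirror; failure never blocks a deploy
    try:
        requests.put(f"{os.environ['GRAVNOVA_URL']}/{key}", data=body, timeout=10)
    except Exception:
        pass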

How Serving Works

The mascom-edge Worker handles all 122 domains:

Request: https://golflink.cc/
  ↓
Worker reads Host header: "golflink.cc"
  ↓
D1 lookup: hostname → slug ("golflink_cc")
  ↓
R2 GET: golflink_cc/index.html
  ↓
Response (with x-mascom-served: edge)

No KV. No versions. No indirection. D1 maps hostname to slug. R2 stores the file. Done.
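
The Worker itself runs as JavaScript, but the whole serving path is two reads. A toy Python model of the equivalent logic, with dicts standing in for the D1 table and the R2 bucket:

D1 = {"golflink.cc": "golflink_cc"}                   # hostname -> slug table
R2 = {"golflink_cc/index.html": b"<html>...</html>"}  # flat object keys

def serve(host, path):
    slug = D1.get(host)                               # D1 lookup: hostname -> slug
    if slug is None:
        return 404, b"unknown hostname"
    key = slug + ("/index.html" if path == "/" else path)  # "/" serves index.html
    body = R2.get(key)                                # R2 GET on the flat key
    if body is None:
        return 404, b"no such object"
    return 200, body                                  # real Worker adds x-mascom-served: edge

serve("golflink.cc", "/") returns (200, b"<html>...</html>"). There is nothing else to resolve.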

Rollback

Deploy the previous file again. That IS the rollback. No version pointers to update, no routing to fix, no KV to invalidate. Just overwrite the R2 object.
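
Concretely, in the same CLI style as Usage above (the .last-good filename is hypothetical; point the deploy at whatever known-good copy you kept):

# Rollback = redeploy the known-good file
python3 mascom_edge_deploy.py index.html.last-good --hostname golflink.cc --path /index.html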

What Was Removed

Before: 2,194 lines
Version tags, FLEET_KV routing, staging deploys, promote workflow, carry-forward logic, cybernetic gate (SENSE/ASSESS/DECIDE/DIRECT/OBSERVE/ADAPT), worker version sentinel, blast radius gate, deploy coordinator mutex, deploy_lock.db, perplexity gate, fleet API registration, D1 version updates, 6 deploy paths (smart/legacy/staging/promote/batch/cybernetic).
After: 240 lines
deploy_file() uploads to R2 and verifies. deploy_batch() calls it N times. CLI parses args. Done.
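
A sketch of that loop, reusing the deploy_file() sketched under How It Works; the file:path pair format mirrors the --batch syntax shown in Usage, and whether a failed file halts the batch is a policy choice this sketch leaves soft:

def deploy_batch(pairs, hostname):
    # pairs like ["index.html:/", "app.js:/app.js"], as passed to --batch
    results = []
    for pair in pairs:
        local, remote = pair.split(":", 1)
        if remote == "/":
            remote = "/index.html"                    # "/" means the site root
        try:
            deploy_file(local, hostname, remote)
            results.append((remote, True))
        except Exception:
            results.append((remote, False))           # keep going past a failed file
    return results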

Post-Mortem: Version Hell

The old system had three indirection layers between "file" and "served content":

Layer            What It Did                          How It Failed
FLEET_KV         Stored version pointer per hostname  Got stale, disagreed with D1, wrong version served
D1 version       Stored "current version" tag         Updated but KV wasn't, or vice versa
R2 version dirs  Files stored under version prefix    New version deployed but files not carried forward

Every deploy failure in investigations.db traces back to these layers disagreeing. The "rollback" system never successfully rolled back anything — every recovery was a re-deploy. The versioning system existed to enable rollback, and the rollback never worked, so the versioning was pure liability.

The fix: delete all three layers. File goes to a flat R2 key. Worker reads that key. If you need to "rollback," deploy the old file. Same mechanism, zero indirection, zero ways for layers to disagree because there are no layers.
