
@opencode-cloud/core



@opencode-cloud/core - npm package version comparison

Comparing version 21.0.1 to 23.0.2
Cargo.toml
[package]
name = "opencode-cloud-core"
version = "21.0.1"
version = "23.0.2"
edition = "2024"

@@ -5,0 +5,0 @@ rust-version = "1.89"

{
"name": "@opencode-cloud/core",
"version": "21.0.1",
"version": "23.0.2",
"description": "Core NAPI bindings for opencode-cloud (internal package)",

@@ -18,9 +18,3 @@ "main": "index.js",

},
"keywords": [
"opencode",
"ai",
"cloud",
"napi",
"rust"
],
"keywords": ["opencode", "ai", "cloud", "napi", "rust"],
"napi": {

@@ -33,5 +27,2 @@ "binaryName": "core",

},
"dependencies": {
"@napi-rs/cli": "^3.0.0-alpha.69"
},
"scripts": {

@@ -41,3 +32,6 @@ "build": "napi build --platform --release --features napi --no-js && cp src/bindings.js index.js && cp src/bindings.d.ts index.d.ts",

"postinstall": "echo 'Building native module (requires Rust 1.85+)...' && npm run build"
},
"dependencies": {
"@napi-rs/cli": "^3.0.0-alpha.69"
}
}
}

@@ -24,3 +24,3 @@ # opencode-cloud

Deploy opencode-cloud with one command. Installs Docker if needed (Linux), downloads the Docker Compose config, starts the service, and prints the login credentials:
Deploy opencode-cloud with one command. Installs Docker if needed (Linux), downloads or refreshes the Docker Compose config, pulls the latest `prizz/opencode-cloud-sandbox:latest` image, reconciles services, and prints the login credentials:

@@ -39,2 +39,6 @@ ```bash

> **Compose refresh behavior:** By default, the script fetches the latest upstream `docker-compose.yml`. If your local file differs, it is replaced and a backup is written as `docker-compose.yml.bak.<timestamp>`.
> **Image refresh behavior:** By default, the script runs `docker compose pull` before `docker compose up -d`, so rerunning quick deploy updates to the latest image.
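The documented refresh behavior can be sketched roughly as follows. This is a hedged illustration, not the actual deploy script (whose internals are not part of this diff); the file paths are placeholders.

```shell
# Hypothetical sketch of the documented compose-refresh behavior: replace a
# local compose file with already-fetched upstream content, writing a backup
# when the local copy differs (mirrors docker-compose.yml.bak.<timestamp>).
refresh_compose() {
  local_file="$1"
  upstream_copy="$2"   # path to the freshly downloaded upstream file
  if [ -f "$local_file" ] && ! cmp -s "$upstream_copy" "$local_file"; then
    cp "$local_file" "$local_file.bak.$(date +%s)"
  fi
  cp "$upstream_copy" "$local_file"
}

# demo on throwaway files
printf 'services: old\n' > /tmp/dc.yml
printf 'services: new\n' > /tmp/dc-upstream.yml
refresh_compose /tmp/dc.yml /tmp/dc-upstream.yml
cat /tmp/dc.yml                               # now holds the upstream content
ls /tmp/dc.yml.bak.* >/dev/null && echo "backup written"
```

After the refresh, `docker compose pull` and `docker compose up -d` bring the running services up to date, as described above.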
## Quick install (cargo)

@@ -121,3 +125,3 @@

This installs Docker, downloads the Compose file, starts the service, and prints the IOTP.
This installs Docker, by default refreshes the Compose file from upstream (with backup if your local copy differs), pulls the latest image, reconciles services, and prints the IOTP.

@@ -264,3 +268,3 @@ Access via SSH tunnel: `ssh -L 3000:localhost:3000 root@<droplet-ip>`, then open `http://localhost:3000`.

# Bun is required for packages/opencode checks/builds
# Bun is required for this repo
bun --version

@@ -524,3 +528,3 @@

For new Docker build steps, follow this checklist:
- Prefer BuildKit cache mounts (`RUN --mount=type=cache`) for package caches (`apt`, `bun`, `cargo`, `pip`, and `pnpm/npm`).
- Prefer BuildKit cache mounts (`RUN --mount=type=cache`) for package caches (`apt`, `bun`, `cargo`, `pip`, and `npm`).
- For `bun install` in container builds, use a dedicated install-cache mount plus a short retry loop that clears that cache between attempts to recover from occasional corrupted/interrupted cache artifacts.
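The retry-with-cache-clear pattern from the last bullet can be expressed generically. The command and cache path below are placeholders, not the real Dockerfile step:

```shell
# Retry a command up to 3 times, clearing only the dedicated cache directory
# between attempts so each retry starts from a clean cache state.
retry_with_cache_clear() {
  cmd="$1"
  cache_dir="$2"
  for attempt in 1 2 3; do
    if sh -c "$cmd"; then
      return 0
    fi
    echo "attempt ${attempt}/3 failed; clearing ${cache_dir}" >&2
    rm -rf "${cache_dir:?}"/*
  done
  return 1
}

# demo: the command fails while a "corrupted" cache artifact is present,
# then succeeds once the cache has been cleared.
mkdir -p /tmp/demo-cache
touch /tmp/demo-cache/corrupt-artifact
retry_with_cache_clear '! test -e /tmp/demo-cache/corrupt-artifact' /tmp/demo-cache \
  && echo "recovered after cache clear"
```

In the real build step the command would be `bun install --frozen-lockfile` and the cache directory the BuildKit cache mount at Bun's install cache path.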

@@ -558,3 +562,3 @@ - Create and remove temporary workdirs in the same `RUN` layer (for example `/tmp/opencode-repo`).

```bash
# Bun is required for packages/opencode checks/builds
# Bun is required for this repo
bun --version

@@ -561,0 +565,0 @@

@@ -12,3 +12,3 @@ # =============================================================================

# - Prefer `RUN --mount=type=cache` for package caches (`apt`, `bun`, `cargo`,
# `pip`, and `pnpm/npm`) when BuildKit is available. New RUN steps that
# `pip`, and `npm`) when BuildKit is available. New RUN steps that
# download or compile dependencies should reuse existing cache mounts or add

@@ -271,13 +271,2 @@ # new ones following the same patterns:

# Install pnpm 10.x via corepack (2026-02-03)
RUN eval "$(/home/opencoder/.local/bin/mise activate bash)" \
&& npm install -g corepack \
&& corepack enable \
&& corepack prepare pnpm@10.28.2 --activate
# Set up pnpm global bin directory
ENV PNPM_HOME="/home/opencoder/.local/share/pnpm"
ENV PATH="${PNPM_HOME}:${PATH}"
RUN mkdir -p "${PNPM_HOME}"
# bun - self-managing installer, pinned to version (2026-02-03)

@@ -301,20 +290,35 @@ RUN curl -fsSL https://bun.sh/install | bash -s "bun-v1.3.8" \

# Install global TypeScript compiler
# NOTE: Avoid cache mount here to prevent pnpm store permission issues
RUN eval "$(/home/opencoder/.local/bin/mise activate bash)" \
&& pnpm add -g typescript@5.9.2
&& bun add -g typescript@5.9.2
# -----------------------------------------------------------------------------
# Modern CLI Tools (Rust-based) - pinned versions (2026-01-22)
# Modern CLI Tools - pre-built release binaries (2026-01-22)
# -----------------------------------------------------------------------------
# Download pre-built binaries instead of compiling from source via cargo install.
# This avoids 3-5 min cargo compilation for tools with pinned versions, replacing
# it with ~5 second downloads. Also eliminates dependency on cargo cache mounts
# which are empty on ephemeral CI runners (GitHub Actions).
ARG TARGETARCH
# ripgrep 15.1.0 - fast regex search
RUN set -eux; \
case "${TARGETARCH}" in \
amd64) RG_ARCH="x86_64-unknown-linux-musl" ;; \
arm64) RG_ARCH="aarch64-unknown-linux-gnu" ;; \
*) echo "Unsupported arch: ${TARGETARCH}" >&2; exit 1 ;; \
esac; \
curl -fsSL "https://github.com/BurntSushi/ripgrep/releases/download/15.1.0/ripgrep-15.1.0-${RG_ARCH}.tar.gz" \
| tar -xz --strip-components=1 -C /home/opencoder/.local/bin/ "ripgrep-15.1.0-${RG_ARCH}/rg"; \
rg --version
# eza 0.23.4 - modern ls replacement
# Cache cargo registry/git indices across builds to skip re-downloading crate metadata.
# Scoped per architecture to prevent amd64/arm64 cache corruption during multi-platform builds.
# uid/gid match opencoder (1000); chown fixes subdirectory ownership from prior builds.
ARG TARGETARCH
RUN --mount=type=cache,id=cargo-registry-${TARGETARCH},target=/home/opencoder/.cargo/registry,uid=1000,gid=1000,mode=0755 \
--mount=type=cache,id=cargo-git-${TARGETARCH},target=/home/opencoder/.cargo/git,uid=1000,gid=1000,mode=0755 \
sudo chown -R opencoder:opencoder /home/opencoder/.cargo/registry /home/opencoder/.cargo/git \
&& . /home/opencoder/.cargo/env \
&& cargo install --locked ripgrep@15.1.0 eza@0.23.4
RUN set -eux; \
case "${TARGETARCH}" in \
amd64) EZA_ARCH="x86_64-unknown-linux-gnu" ;; \
arm64) EZA_ARCH="aarch64-unknown-linux-gnu" ;; \
*) echo "Unsupported arch: ${TARGETARCH}" >&2; exit 1 ;; \
esac; \
curl -fsSL "https://github.com/eza-community/eza/releases/download/v0.23.4/eza_${EZA_ARCH}.tar.gz" \
| tar -xz --strip-components=0 -C /home/opencoder/.local/bin/; \
eza --version

@@ -455,3 +459,3 @@ # lazygit v0.58.1 (2026-01-12) - terminal UI for git

# RUN eval "$(/home/opencoder/.local/bin/mise activate bash)" \
# && pnpm add -g \
# && bun add -g \
# prettier \

@@ -468,3 +472,3 @@ # eslint \

# RUN eval "$(/home/opencoder/.local/bin/mise activate bash)" \
# && pnpm add -g jest vitest
# && bun add -g jest vitest
#

@@ -505,30 +509,13 @@ # # Python pytest via pipx

# -----------------------------------------------------------------------------
# Stage 2: opencode build
# -----------------------------------------------------------------------------
FROM base AS opencode-build
# =============================================================================
# Stage 2a: opencode-source — Obtain source code
# =============================================================================
# Separated into its own stage so downstream stages (JS build, broker build)
# can COPY only what they need, enabling BuildKit to run them in parallel.
FROM base AS opencode-source
# -----------------------------------------------------------------------------
# opencode Setup (Fork + Broker + Proxy)
# -----------------------------------------------------------------------------
# Keep this block near the end to preserve cache for earlier layers. We expect
# opencode fork changes to happen more often than base tooling changes.
#
# This block includes:
# - opencode build (backend binary) + app build (frontend dist)
# - opencode-broker build
#
# NOTE: This stage uses opencode user + sudo for privileged installs.
USER opencoder
# -----------------------------------------------------------------------------
# opencode Installation (Fork from pRizz/opencode)
# -----------------------------------------------------------------------------
# Source selection:
# - remote (default): reproducible pinned build from GitHub
# - local: dev-only override that uses packages/opencode from the build context
ARG OPENCODE_SOURCE=remote
ARG OPENCODE_COMMIT
ARG OPENCODE_LOCAL_REF
ARG TARGETARCH

@@ -541,36 +528,12 @@ # CLI builds use a custom build-context generator that always adds

# Clone the fork and build opencode from source (as non-root user)
# Pin to specific commit for reproducibility
# Clone the fork or copy local source.
# NOTE: OPENCODE_COMMIT is not tied to releases/tags; it tracks the latest stable
# commit on the dev branch of https://github.com/pRizz/opencode.
# Update it by running: ./scripts/update-opencode-commit.sh
# Build hygiene for this block:
# - `/tmp/opencode-repo` and `/tmp/opencode-local` are transient and must be
# removed in this same RUN layer.
# - Prefer BuildKit cache mounts for bun/cargo cache paths if this block is
# expanded; avoid persisting package caches in committed layers.
# - Keep explicit cleanup as a defensive fallback even when cache mounts are used.
# Reliability note for Bun dependency install:
# - We use a dedicated Bun install cache mount so BuildKit can reuse downloaded
# packages across builds without polluting image layers.
# - In CI/container builds, Bun's cached install artifacts can occasionally
# become inconsistent (for example after interrupted network/download steps),
# which causes `bun install --frozen-lockfile` to fail nondeterministically.
# - The retry loop intentionally clears only this cache dir between attempts so
# each retry gets a clean cache state while preserving reproducibility.
# Cargo cache mounts: registry/git for crate metadata, target dir for compiled
# artifacts (the big win — enables incremental compilation across builds).
# Target dir lives at /tmp/cargo-target-broker instead of inside the source tree
# to avoid conflicts with `rm -rf /tmp/opencode-repo` cleanup later in this RUN.
# All cargo caches scoped per TARGETARCH to prevent multi-platform corruption.
RUN --mount=type=cache,target=/home/opencoder/.bun/install/cache,uid=1000,gid=1000,mode=0755 \
--mount=type=cache,id=cargo-registry-${TARGETARCH},target=/home/opencoder/.cargo/registry,uid=1000,gid=1000,mode=0755 \
--mount=type=cache,id=cargo-git-${TARGETARCH},target=/home/opencoder/.cargo/git,uid=1000,gid=1000,mode=0755 \
--mount=type=cache,id=cargo-target-broker-${TARGETARCH},target=/tmp/cargo-target-broker,uid=1000,gid=1000,mode=0755 \
OPENCODE_COMMIT_OVERRIDE="${OPENCODE_COMMIT:-}" \
&& OPENCODE_LOCAL_REF="${OPENCODE_LOCAL_REF:-local-unknown}" \
&& OPENCODE_COMMIT="9fc1f2cd6084bb1611ee33d7085fa86a5ea6511f" \
&& if [ -n "${OPENCODE_COMMIT_OVERRIDE}" ]; then OPENCODE_COMMIT="${OPENCODE_COMMIT_OVERRIDE}"; fi \
&& rm -rf /tmp/opencode-repo \
&& if [ "${OPENCODE_SOURCE}" = "local" ]; then \
RUN set -eux; \
OPENCODE_COMMIT_OVERRIDE="${OPENCODE_COMMIT:-}"; \
OPENCODE_COMMIT="ba669d0d68d36063852e29cf640f9baeb26e14be"; \
if [ -n "${OPENCODE_COMMIT_OVERRIDE}" ]; then OPENCODE_COMMIT="${OPENCODE_COMMIT_OVERRIDE}"; fi; \
rm -rf /tmp/opencode-repo; \
if [ "${OPENCODE_SOURCE}" = "local" ]; then \
if [ ! -f /tmp/opencode-local/package.json ]; then \

@@ -582,3 +545,3 @@ echo "Local opencode source requested but packages/opencode was not included in build context."; \

cp -R /tmp/opencode-local/. /tmp/opencode-repo; \
else \
else \
git clone --depth 1 https://github.com/pRizz/opencode.git /tmp/opencode-repo; \

@@ -588,8 +551,33 @@ cd /tmp/opencode-repo; \

git checkout "${OPENCODE_COMMIT}"; \
fi \
&& cd /tmp/opencode-repo \
fi; \
rm -rf /tmp/opencode-local
# =============================================================================
# Stage 2b: opencode-js-build — bun install + UI build (runs in parallel with broker)
# =============================================================================
FROM base AS opencode-js-build
USER opencoder
ARG OPENCODE_SOURCE=remote
# Bind-mount the source from the opencode-source stage, then copy it into
# this layer's writable filesystem in the same RUN as bun install. This avoids
# issues where Docker COPY --from= between stages can cause bun install to
# fail with ENOENT during platform-specific package linking.
# Reliability note for Bun dependency install:
# - We use a dedicated Bun install cache mount so BuildKit can reuse downloaded
# packages across builds without polluting image layers.
# - In CI/container builds, Bun's cached install artifacts can occasionally
# become inconsistent (for example after interrupted network/download steps),
# which causes `bun install --frozen-lockfile` to fail nondeterministically.
# Common symptoms: integrity check failures for platform-specific tarballs.
# - The retry loop aggressively clears ALL bun caches AND node_modules between
# attempts to ensure each retry starts from a truly clean state.
RUN --mount=type=bind,from=opencode-source,source=/tmp/opencode-repo,target=/tmp/opencode-source-ro \
--mount=type=cache,target=/home/opencoder/.bun/install/cache,uid=1000,gid=1000,mode=0755 \
cp -R /tmp/opencode-source-ro /tmp/opencode-repo \
&& sudo mkdir -p /home/opencoder/.bun/install/cache \
&& sudo chown -R opencoder:opencoder /home/opencoder/.bun/install/cache \
&& sudo chown -R opencoder:opencoder /home/opencoder/.cargo/registry /home/opencoder/.cargo/git /tmp/cargo-target-broker \
&& export BUN_INSTALL_CACHE_DIR=/home/opencoder/.bun/install/cache \
&& cd /tmp/opencode-repo \
&& bun_install_ok=0; \

@@ -601,4 +589,6 @@ for attempt in 1 2 3; do \

fi; \
echo "bun install failed (attempt ${attempt}); clearing cache and retrying..." >&2; \
echo "bun install failed (attempt ${attempt}/3); purging all caches and retrying..." >&2; \
sudo find /home/opencoder/.bun/install/cache -mindepth 1 -maxdepth 1 -exec rm -rf {} + || true; \
sudo rm -rf /home/opencoder/.bun/cache /home/opencoder/.cache/bun || true; \
rm -rf node_modules packages/*/node_modules packages/*/*/node_modules || true; \
done; \

@@ -611,4 +601,2 @@ if [ "${bun_install_ok}" -ne 1 ]; then \

&& if [ "${OPENCODE_SOURCE}" = "local" ]; then \
# Local mode omits .git from context for performance and safety, so set
# channel metadata explicitly instead of relying on git branch detection.
OPENCODE_CHANNEL=local bun run build-single-ui; \

@@ -618,23 +606,121 @@ else \

fi \
&& cd /tmp/opencode-repo \
&& sudo mkdir -p /opt/opencode/bin /opt/opencode/ui \
&& if [ "${OPENCODE_SOURCE}" = "local" ]; then \
echo "${OPENCODE_LOCAL_REF}" | sudo tee /opt/opencode/COMMIT >/dev/null; \
else \
echo "${OPENCODE_COMMIT}" | sudo tee /opt/opencode/COMMIT >/dev/null; \
fi \
&& sudo chown opencoder:opencoder /opt/opencode/COMMIT \
&& sudo cp /tmp/opencode-repo/packages/opencode/dist/opencode-*/bin/opencode /opt/opencode/bin/opencode \
&& sudo cp -R /tmp/opencode-repo/packages/opencode/dist/opencode-*/ui/. /opt/opencode/ui/ \
&& sudo chown -R opencoder:opencoder /opt/opencode \
&& sudo chmod +x /opt/opencode/bin/opencode \
&& cd /tmp/opencode-repo/packages/opencode-broker \
&& CARGO_TARGET_DIR=/tmp/cargo-target-broker cargo build --release \
&& sudo mkdir -p /usr/local/bin \
&& sudo cp /tmp/cargo-target-broker/release/opencode-broker /usr/local/bin/opencode-broker \
&& sudo chmod 4755 /usr/local/bin/opencode-broker \
&& rm -rf /tmp/opencode-repo /tmp/opencode-local \
&& sudo find /home/opencoder/.bun/install/cache -mindepth 1 -maxdepth 1 -exec rm -rf {} + || true \
&& sudo rm -rf /home/opencoder/.bun/cache /home/opencoder/.cache/bun
# =============================================================================
# Stage 2c: broker-planner — Prepare cargo-chef dependency recipe
# =============================================================================
# cargo-chef separates dependency compilation from application compilation.
# The recipe captures only the dependency graph (Cargo.toml/Cargo.lock), so the
# broker-deps layer below only invalidates when dependencies change, not when
# broker source code changes.
FROM base AS broker-planner
USER opencoder
ARG TARGETARCH
# Install cargo-chef from pre-built binary (avoids cargo install compilation time)
RUN set -eux; \
case "${TARGETARCH}" in \
amd64) CHEF_ARCH="x86_64-unknown-linux-musl" ;; \
arm64) CHEF_ARCH="aarch64-unknown-linux-gnu" ;; \
*) echo "Unsupported arch: ${TARGETARCH}" >&2; exit 1 ;; \
esac; \
curl -fsSL "https://github.com/LukeMathWalker/cargo-chef/releases/latest/download/cargo-chef-${CHEF_ARCH}.tar.gz" \
| tar -xz -C /home/opencoder/.local/bin/; \
cargo-chef --version
COPY --from=opencode-source /tmp/opencode-repo/packages/opencode-broker /tmp/opencode-broker
WORKDIR /tmp/opencode-broker
RUN . /home/opencoder/.cargo/env \
&& cargo chef prepare --recipe-path /tmp/recipe.json
# =============================================================================
# Stage 2d: broker-deps — Build broker dependencies only (cached when deps unchanged)
# =============================================================================
FROM base AS broker-deps
USER opencoder
ARG TARGETARCH
# Install cargo-chef (needed for cargo chef cook)
RUN set -eux; \
case "${TARGETARCH}" in \
amd64) CHEF_ARCH="x86_64-unknown-linux-musl" ;; \
arm64) CHEF_ARCH="aarch64-unknown-linux-gnu" ;; \
*) echo "Unsupported arch: ${TARGETARCH}" >&2; exit 1 ;; \
esac; \
curl -fsSL "https://github.com/LukeMathWalker/cargo-chef/releases/latest/download/cargo-chef-${CHEF_ARCH}.tar.gz" \
| tar -xz -C /home/opencoder/.local/bin/
# Copy only the recipe (dependency graph). This layer is cached as long as
# Cargo.toml/Cargo.lock are unchanged — broker source code changes don't
# invalidate it.
COPY --from=broker-planner /tmp/recipe.json /tmp/recipe.json
# Build all dependencies. Cargo cache mounts provide incremental benefit when
# this layer does invalidate (new deps added).
# All cargo caches scoped per TARGETARCH to prevent multi-platform corruption.
WORKDIR /tmp/broker-build
RUN --mount=type=cache,id=cargo-registry-${TARGETARCH},target=/home/opencoder/.cargo/registry,uid=1000,gid=1000,mode=0755 \
--mount=type=cache,id=cargo-git-${TARGETARCH},target=/home/opencoder/.cargo/git,uid=1000,gid=1000,mode=0755 \
sudo chown -R opencoder:opencoder /home/opencoder/.cargo/registry /home/opencoder/.cargo/git \
&& . /home/opencoder/.cargo/env \
&& cargo chef cook --release --recipe-path /tmp/recipe.json
# =============================================================================
# Stage 2e: broker-build — Compile broker application (deps already built)
# =============================================================================
# This stage runs in parallel with opencode-js-build. Only the broker's own
# crate is recompiled here; all dependencies come from the broker-deps layer.
FROM broker-deps AS broker-build
USER opencoder
ARG TARGETARCH
COPY --from=opencode-source /tmp/opencode-repo/packages/opencode-broker /tmp/broker-build
RUN --mount=type=cache,id=cargo-registry-${TARGETARCH},target=/home/opencoder/.cargo/registry,uid=1000,gid=1000,mode=0755 \
--mount=type=cache,id=cargo-git-${TARGETARCH},target=/home/opencoder/.cargo/git,uid=1000,gid=1000,mode=0755 \
sudo chown -R opencoder:opencoder /home/opencoder/.cargo/registry /home/opencoder/.cargo/git \
&& . /home/opencoder/.cargo/env \
&& cargo build --release
# =============================================================================
# Stage 2f: opencode-build — Assemble final artifacts
# =============================================================================
FROM base AS opencode-build
USER opencoder
ARG OPENCODE_SOURCE=remote
ARG OPENCODE_COMMIT
ARG OPENCODE_LOCAL_REF
# Copy built JS/UI dist from the JS build stage
COPY --from=opencode-js-build /tmp/opencode-repo/packages/opencode/dist /tmp/opencode-dist
# Copy built broker binary from the broker build stage
COPY --from=broker-build /tmp/broker-build/target/release/opencode-broker /tmp/opencode-broker-bin
# Assemble artifacts into final locations
RUN set -eux; \
OPENCODE_COMMIT_OVERRIDE="${OPENCODE_COMMIT:-}"; \
OPENCODE_LOCAL_REF="${OPENCODE_LOCAL_REF:-local-unknown}"; \
OPENCODE_COMMIT="e3cbc7b4611688309e3e7b0004987679e94d3392"; \
if [ -n "${OPENCODE_COMMIT_OVERRIDE}" ]; then OPENCODE_COMMIT="${OPENCODE_COMMIT_OVERRIDE}"; fi; \
sudo mkdir -p /opt/opencode/bin /opt/opencode/ui; \
if [ "${OPENCODE_SOURCE}" = "local" ]; then \
echo "${OPENCODE_LOCAL_REF}" | sudo tee /opt/opencode/COMMIT >/dev/null; \
else \
echo "${OPENCODE_COMMIT}" | sudo tee /opt/opencode/COMMIT >/dev/null; \
fi; \
sudo chown opencoder:opencoder /opt/opencode/COMMIT; \
sudo cp /tmp/opencode-dist/opencode-*/bin/opencode /opt/opencode/bin/opencode; \
sudo cp -R /tmp/opencode-dist/opencode-*/ui/. /opt/opencode/ui/; \
sudo chown -R opencoder:opencoder /opt/opencode; \
sudo chmod +x /opt/opencode/bin/opencode; \
sudo mkdir -p /usr/local/bin; \
sudo cp /tmp/opencode-broker-bin /usr/local/bin/opencode-broker; \
sudo chmod 4755 /usr/local/bin/opencode-broker; \
sudo rm -rf /tmp/opencode-dist /tmp/opencode-broker-bin
# -----------------------------------------------------------------------------

@@ -641,0 +727,0 @@ # Stage 3: Runtime

@@ -228,2 +228,32 @@ #!/bin/bash

ensure_jsonc_parser() {
if ! command -v jq >/dev/null 2>&1; then
log "ERROR: jq is required to parse JSONC configs."
return 1
fi
return 0
}
jsonc_get_auth_enabled() {
local file="$1"
ensure_jsonc_parser || return 1
local auth_enabled
if ! auth_enabled="$(grep -v '^\s*//' "${file}" | jq -r '.auth.enabled // false')"; then
return 1
fi
printf '%s' "${auth_enabled}"
}
jsonc_set_auth_enabled() {
local file="$1"
ensure_jsonc_parser || return 1
local patched
if ! patched="$(grep -v '^\s*//' "${file}" | jq '.auth.enabled = true')"; then
return 1
fi
printf '%s\n' "${patched}" > "${file}"
}
ensure_auth_config() {

@@ -236,6 +266,34 @@ local config_dir="/home/opencoder/.config/opencode"

if [ -f "${config_json}" ] || [ -f "${config_jsonc}" ]; then
# Check if an existing config already has auth enabled
local config_file=""
for candidate in "${config_json}" "${config_jsonc}" "${config_dir}/config.json"; do
if [ -f "${candidate}" ]; then
config_file="${candidate}"
break
fi
done
if [ -n "${config_file}" ]; then
# File exists — verify auth is enabled
local auth_enabled
if ! auth_enabled="$(jsonc_get_auth_enabled "${config_file}")"; then
log "ERROR: Failed to parse ${config_file} for auth settings."
exit 1
fi
if [ "${auth_enabled}" = "true" ]; then
return # Already configured correctly
fi
# Auth not enabled — patch the existing config to enable it
log "Auth is not enabled in ${config_file}; patching to enable."
if ! jsonc_set_auth_enabled "${config_file}"; then
log "ERROR: Failed to update ${config_file} to enable auth."
exit 1
fi
chown opencoder:opencoder "${config_file}" 2>/dev/null || true
chmod 644 "${config_file}" 2>/dev/null || true
return
fi
# No config file — create default
if ! cat > "${config_jsonc}" <<'EOF'

@@ -260,19 +318,20 @@ {

local config_dir="/home/opencoder/.config/opencode"
local config_file=""
for candidate in "${config_dir}/opencode.json" "${config_dir}/opencode.jsonc"; do
# Check all config files that opencode's global loader merges
local candidate auth_enabled
for candidate in "${config_dir}/opencode.json" \
"${config_dir}/opencode.jsonc" \
"${config_dir}/config.json"; do
if [ -f "${candidate}" ]; then
config_file="${candidate}"
break
if ! auth_enabled="$(jsonc_get_auth_enabled "${candidate}")"; then
log "ERROR: Failed to parse ${candidate} for auth settings."
exit 1
fi
if [ "${auth_enabled}" = "true" ]; then
return 0
fi
fi
done
if [ -z "${config_file}" ]; then
return 1
fi
local auth_enabled
# Strip // line-comments before parsing (JSONC compat)
auth_enabled="$(grep -v '^\s*//' "${config_file}" | jq -r '.auth.enabled // false' 2>/dev/null || echo "false")"
[ "${auth_enabled}" = "true" ]
return 1
}
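The grep-then-jq JSONC handling used by these helpers can be exercised standalone. A minimal sketch, assuming `jq` is installed and using a hypothetical file path; note the filter removes only whole-line `//` comments, not trailing comments on lines that also carry data:

```shell
# Strip whole-line "//" comments, then query the remaining JSON with jq.
# ".auth.enabled // false" falls back to false when the key is absent.
parse_auth_enabled() {
  grep -v '^[[:space:]]*//' "$1" | jq -r '.auth.enabled // false'
}

cat > /tmp/opencode.jsonc <<'EOF'
{
  // enable password auth for the web UI
  "auth": { "enabled": true }
}
EOF
parse_auth_enabled /tmp/opencode.jsonc   # prints: true
```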

@@ -526,3 +585,30 @@

ensure_opencode_data_dir_writable() {
local data_dir="/home/opencoder/.local/share/opencode"
install -d -m 0755 "${data_dir}"
if runuser -u opencoder -- test -w "${data_dir}"; then
return
fi
log "Detected non-writable opencode data directory; attempting ownership fix: ${data_dir}"
if ! chown -R opencoder:opencoder "${data_dir}" 2>/dev/null; then
log "WARNING: Failed to change ownership for ${data_dir}; continuing with writability re-check."
fi
if runuser -u opencoder -- test -w "${data_dir}"; then
return
fi
log "ERROR: ${data_dir} is not writable by user 'opencoder'."
log "If running on Railway, set RAILWAY_RUN_UID=0 and attach a volume mounted at ${data_dir}."
exit 1
}
load_builtin_home_users
if ! ensure_jsonc_parser; then
log "ERROR: JSONC parser is required for auth config checks."
exit 1
fi
ensure_auth_config

@@ -533,2 +619,3 @@ restore_or_bootstrap_users

warn_security_posture
ensure_opencode_data_dir_writable

@@ -535,0 +622,0 @@ log "Starting opencode on ${OPENCODE_HOST}:${OPENCODE_PORT}"

@@ -277,3 +277,10 @@ //! Docker image build and pull operations

// Create tar archive containing Dockerfile
// Create tar archive containing Dockerfile and (optionally) the local submodule checkout.
// This can take several seconds for local submodule builds due to recursive tar+gzip.
let context_msg = if include_local_opencode_submodule {
"Packaging local opencode checkout"
} else {
"Preparing build context"
};
progress.update_spinner("build", context_msg);
let context = create_build_context(BuildContextOptions {

@@ -310,7 +317,10 @@ include_local_opencode_submodule,

// Sending the context to Docker and waiting for build initialization can take
// several seconds, especially for large local-submodule contexts.
progress.update_spinner("build", "Sending build context to Docker");
// Start build with streaming output
let mut stream = client.inner().build_image(options, None, Some(body));
// Add main build spinner (context prefix like "Building image" is set by caller)
progress.add_spinner("build", "Initializing...");
progress.update_spinner("build", "Waiting for Docker build to start");

@@ -406,2 +416,28 @@ let mut maybe_image_id = None;

/// Clean up raw BuildKit vertex labels for user-friendly display.
///
/// Strips the `[internal]` prefix that BuildKit uses for internal plumbing
/// vertices and maps known labels to friendlier names.
fn clean_buildkit_label(raw: &str) -> String {
let trimmed = raw.trim();
let Some(rest) = trimmed.strip_prefix("[internal] ") else {
return trimmed.to_string();
};
if rest.starts_with("load remote build context") {
"Loading remote build context".to_string()
} else if let Some(image) = rest.strip_prefix("load metadata for ") {
format!("Resolving image {image}")
} else if rest.starts_with("load build definition") {
"Loading Dockerfile".to_string()
} else if rest.starts_with("load build context") {
"Loading build context".to_string()
} else {
let mut chars = rest.chars();
match chars.next() {
None => String::new(),
Some(c) => c.to_uppercase().to_string() + chars.as_str(),
}
}
}
fn handle_stream_message(

@@ -429,3 +465,3 @@ info: &bollard::models::BuildInfo,

if !(has_runtime_vertex && is_internal_msg) {
progress.update_spinner("build", stream_msg);
progress.update_spinner("build", &clean_buildkit_label(stream_msg));
}

@@ -490,4 +526,5 @@ }

let display_name = clean_buildkit_label(&vertex_name);
let message = if progress.is_plain_output() {
vertex_name
display_name
} else if let Some(log_entry) = latest_logs

@@ -498,5 +535,5 @@ .iter()

{
format!("{vertex_name} · {}", log_entry.message)
format!("{display_name} · {}", log_entry.message)
} else {
vertex_name
display_name
};

@@ -1750,2 +1787,66 @@ progress.update_spinner("build", &message);

}
#[test]
fn clean_buildkit_label_strips_internal_load_remote_context() {
assert_eq!(
clean_buildkit_label("[internal] load remote build context"),
"Loading remote build context"
);
}
#[test]
fn clean_buildkit_label_strips_internal_load_metadata() {
assert_eq!(
clean_buildkit_label("[internal] load metadata for docker.io/library/ubuntu:24.04"),
"Resolving image docker.io/library/ubuntu:24.04"
);
}
#[test]
fn clean_buildkit_label_strips_internal_load_build_definition() {
assert_eq!(
clean_buildkit_label("[internal] load build definition from Dockerfile"),
"Loading Dockerfile"
);
}
#[test]
fn clean_buildkit_label_strips_internal_load_build_context() {
assert_eq!(
clean_buildkit_label("[internal] load build context"),
"Loading build context"
);
}
#[test]
fn clean_buildkit_label_capitalizes_unknown_internal() {
assert_eq!(
clean_buildkit_label("[internal] some unknown thing"),
"Some unknown thing"
);
}
#[test]
fn clean_buildkit_label_preserves_runtime_steps() {
assert_eq!(
clean_buildkit_label("[runtime 1/15] RUN apt-get update"),
"[runtime 1/15] RUN apt-get update"
);
}
#[test]
fn clean_buildkit_label_preserves_plain_text() {
assert_eq!(
clean_buildkit_label("Step 3/10 : COPY . ."),
"Step 3/10 : COPY . ."
);
}
#[test]
fn clean_buildkit_label_trims_whitespace() {
assert_eq!(
clean_buildkit_label(" [internal] load build context "),
"Loading build context"
);
}
}
MIT License
Copyright (c) 2026 Peter Ryszkiewicz
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.