Three issues Dalidou Claude surfaced during the first real deploy
of commit e877e5b to the live service (report from 2026-04-08).
Bug 1 was the critical one — a schema init ordering bug that would
have bitten every future upgrade from a pre-Phase-9 schema — and
the other two were usability traps around hostname resolution.
Bug 1 (CRITICAL): schema init ordering
--------------------------------------
src/atocore/models/database.py
SCHEMA_SQL contained CREATE INDEX statements that referenced
columns added later by _apply_migrations():
CREATE INDEX IF NOT EXISTS idx_memories_project ON memories(project);
CREATE INDEX IF NOT EXISTS idx_interactions_project_name ON interactions(project);
CREATE INDEX IF NOT EXISTS idx_interactions_session ON interactions(session_id);
On a FRESH install, CREATE TABLE IF NOT EXISTS creates the tables
with the Phase 9 shape (columns present), so the CREATE INDEX runs
cleanly and _apply_migrations is effectively a no-op.
On an UPGRADE from a pre-Phase-9 schema, CREATE TABLE IF NOT EXISTS
is a no-op (the tables already exist in the old shape), the columns
are NOT added yet, and the CREATE INDEX fails with
"OperationalError: no such column: project" before
_apply_migrations gets a chance to add the columns.
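The failure class reproduces standalone in a few lines. A minimal
sketch (the two-column memories table here is a deliberate
simplification, not the real schema):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    # Simulate the pre-Phase-9 shape: memories exists without "project".
    conn.execute("CREATE TABLE memories (id TEXT PRIMARY KEY, content TEXT)")

    # The upgrade path: CREATE TABLE IF NOT EXISTS is a no-op against
    # the existing table, so the new column is never added here...
    conn.execute(
        "CREATE TABLE IF NOT EXISTS memories"
        " (id TEXT PRIMARY KEY, content TEXT, project TEXT)"
    )

    # ...and the index from SCHEMA_SQL fails before any migration runs.
    try:
        conn.execute("CREATE INDEX IF NOT EXISTS idx_memories_project ON memories(project)")
    except sqlite3.OperationalError as exc:
        print(exc)  # no such column: project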
Dalidou Claude hit this exactly when redeploying from 0.1.0 to
0.2.0 — had to manually ALTER TABLE to add the Phase 9 columns
before the container could start.
The fix is to remove the Phase 9-column indexes from SCHEMA_SQL.
They already exist in _apply_migrations() AFTER the corresponding
ALTER TABLE, so they still get created on both fresh and upgrade
paths — just after the columns exist, not before.
Indexes still in SCHEMA_SQL (all safe — reference columns that
have existed since the first release):
- idx_chunks_document on source_chunks(document_id)
- idx_memories_type on memories(memory_type)
- idx_memories_status on memories(status)
- idx_interactions_project on interactions(project_id)
Indexes now created only by _apply_migrations (they were already
there; they are just no longer duplicated in SCHEMA_SQL):
- idx_memories_project on memories(project)
- idx_interactions_project_name on interactions(project)
- idx_interactions_session on interactions(session_id)
- idx_interactions_created_at on interactions(created_at)
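The invariant the fix enforces is simply "ALTER TABLE before CREATE
INDEX" on the upgrade path. A sketch of the pattern (the helper and
function names are illustrative, not the actual _apply_migrations
implementation):

    import sqlite3

    def _column_exists(conn: sqlite3.Connection, table: str, column: str) -> bool:
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
        return any(row[1] == column for row in conn.execute(f"PRAGMA table_info({table})"))

    def apply_migrations(conn: sqlite3.Connection) -> None:
        # Column first (a no-op when a fresh install already created it)...
        if not _column_exists(conn, "memories", "project"):
            conn.execute("ALTER TABLE memories ADD COLUMN project TEXT")
        # ...then the index, now safe on both fresh and upgrade paths.
        conn.execute(
            "CREATE INDEX IF NOT EXISTS idx_memories_project ON memories(project)"
        )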
Regression test: tests/test_database.py
---------------------------------------
New test_init_db_upgrades_pre_phase9_schema_without_failing:
- Seeds the DB with the exact pre-Phase-9 shape (no project /
last_referenced_at / reference_count on memories; no project /
client / session_id / response / memories_used / chunks_used on
interactions)
- Calls init_db() — which used to raise OperationalError before
the fix
- Verifies all Phase 9 columns are present after the call
- Verifies the migration indexes exist
Before the fix this test would have failed with
"OperationalError: no such column: project" on the init_db call.
After the fix it passes. This locks in the invariant "init_db is
safe on any legacy schema shape" so the bug can't silently come
back. (The full test file is appended at the end of this report.)
Full suite: 216 passing (was 215), 1 warning. The +1 is the new
regression test.
Bug 3 (usability): deploy.sh DNS default
----------------------------------------
deploy/dalidou/deploy.sh
ATOCORE_GIT_REMOTE defaulted to http://dalidou:3000/Antoine/ATOCore.git,
which requires the "dalidou" hostname to resolve. On the Dalidou
host itself it did not resolve (there was no /etc/hosts entry
aliasing the hostname to localhost), so deploy.sh had to be run
with the raw IP as a manual workaround.
Fix: default ATOCORE_GIT_REMOTE to http://127.0.0.1:3000/Antoine/ATOCore.git.
Loopback always works on the host running the script. Callers
from a remote host (e.g. running deploy.sh from a laptop against
the Dalidou LAN) set ATOCORE_GIT_REMOTE explicitly. The script
header's Environment Variables section documents this with an
explicit reference to the 2026-04-08 Dalidou deploy report so the
rationale isn't lost.
docs/dalidou-deployment.md gets a new "Troubleshooting hostname
resolution" subsection and a new example invocation showing how
to deploy from a remote host with an explicit ATOCORE_GIT_REMOTE
override.
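For reference, the remote-host invocation with the explicit override
looks roughly like this (a sketch, assuming the script is run from
the repo root and that "dalidou" resolves from the calling machine;
the exact example added to the doc may differ):

    ATOCORE_GIT_REMOTE=http://dalidou:3000/Antoine/ATOCore.git ./deploy/dalidou/deploy.sh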
Bug 2 (usability): atocore_client.py ATOCORE_BASE_URL documentation
-------------------------------------------------------------------
scripts/atocore_client.py
Same class of issue as bug 3. BASE_URL defaults to
http://dalidou:8100, which resolves fine from a remote caller
(laptop, T420/OpenClaw over Tailscale) but NOT from the Dalidou
host itself or from inside the atocore container. Dalidou Claude
saw the CLI return
{"status": "unavailable", "fail_open": true}
while a direct curl to http://127.0.0.1:8100 worked.
The fix here is NOT to change the default (remote callers are
the common case and would break) but to DOCUMENT the override
clearly so the next operator knows what's happening (a sketch of
the fail-open pattern follows the list):
- The script module docstring grew a new "Environment variables"
section covering ATOCORE_BASE_URL, ATOCORE_TIMEOUT_SECONDS,
ATOCORE_REFRESH_TIMEOUT_SECONDS, and ATOCORE_FAIL_OPEN, with
the explicit override example for on-host/in-container use
- It calls out the exact symptom (fail-open envelope when the
base URL doesn't resolve) so the diagnosis is obvious from
the error alone
- docs/dalidou-deployment.md troubleshooting section mirrors
this guidance so there's one place to look regardless of
whether the operator starts with the client help or the
deploy doc
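To make the diagnosis concrete, here is a minimal sketch of the
fail-open pattern the envelope comes from. The env var names and the
envelope are the documented ones; the "/health" endpoint, the helper
name, and the timeout default are illustrative assumptions, not the
real client code:

    import json
    import os
    import urllib.request

    # Documented env vars. The default stays the remote-friendly
    # hostname on purpose (see "What this commit does NOT do" below).
    BASE_URL = os.environ.get("ATOCORE_BASE_URL", "http://dalidou:8100")
    TIMEOUT = float(os.environ.get("ATOCORE_TIMEOUT_SECONDS", "5"))  # illustrative default

    def get_health() -> dict:
        # "/health" is a placeholder endpoint for illustration only.
        try:
            with urllib.request.urlopen(f"{BASE_URL}/health", timeout=TIMEOUT) as resp:
                return json.loads(resp.read())
        except OSError:
            # A name-resolution failure and a down service look identical
            # from here; both produce the fail-open envelope quoted above.
            return {"status": "unavailable", "fail_open": True}

On the Dalidou host or inside the container, prefixing the command
with ATOCORE_BASE_URL=http://127.0.0.1:8100 makes the same request
succeed.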
What this commit does NOT do
----------------------------
- Does NOT change the default ATOCORE_BASE_URL. Doing that would
break the T420 OpenClaw helper and every remote caller who
currently relies on the hostname. Documentation is the right
fix for this case.
- Does NOT fix /etc/hosts on Dalidou. That's a host-level
configuration issue that the user can fix if they prefer
having the hostname resolve; the deploy.sh fix makes it
unnecessary regardless.
- Does NOT re-run the validation on Dalidou. The next step is
for the live service to pull this commit via deploy.sh (which
should now work without the IP workaround) and re-run the
Phase 9 loop test to confirm nothing regressed.
Full test file: tests/test_database.py
---------------------------------------
"""Tests for SQLite connection pragmas and runtime behavior."""
|
|
|
|
import sqlite3
|
|
|
|
import atocore.config as config
|
|
from atocore.models.database import get_connection, init_db
|
|
|
|
|
|
def test_get_connection_applies_busy_timeout_and_wal(tmp_path, monkeypatch):
|
|
monkeypatch.setenv("ATOCORE_DATA_DIR", str(tmp_path / "data"))
|
|
monkeypatch.setenv("ATOCORE_DB_BUSY_TIMEOUT_MS", "7000")
|
|
|
|
original_settings = config.settings
|
|
try:
|
|
config.settings = config.Settings()
|
|
init_db()
|
|
with get_connection() as conn:
|
|
busy_timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
|
|
journal_mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
|
|
foreign_keys = conn.execute("PRAGMA foreign_keys").fetchone()[0]
|
|
finally:
|
|
config.settings = original_settings
|
|
|
|
assert busy_timeout == 7000
|
|
assert str(journal_mode).lower() == "wal"
|
|
assert foreign_keys == 1
|
|
|
|
|
|
def test_get_connection_uses_configured_timeout_value(tmp_path, monkeypatch):
|
|
monkeypatch.setenv("ATOCORE_DATA_DIR", str(tmp_path / "data"))
|
|
monkeypatch.setenv("ATOCORE_DB_BUSY_TIMEOUT_MS", "2500")
|
|
|
|
original_settings = config.settings
|
|
original_connect = sqlite3.connect
|
|
calls = []
|
|
|
|
def fake_connect(*args, **kwargs):
|
|
calls.append(kwargs.get("timeout"))
|
|
return original_connect(*args, **kwargs)
|
|
|
|
try:
|
|
config.settings = config.Settings()
|
|
monkeypatch.setattr("atocore.models.database.sqlite3.connect", fake_connect)
|
|
init_db()
|
|
finally:
|
|
config.settings = original_settings
|
|
|
|
assert calls
|
|
assert calls[0] == 2.5
|
|
|
|
|
|
def test_init_db_upgrades_pre_phase9_schema_without_failing(tmp_path, monkeypatch):
|
|
"""Regression test for the schema init ordering bug caught during
|
|
the first real Dalidou deploy (report from 2026-04-08).
|
|
|
|
Before the fix, SCHEMA_SQL contained CREATE INDEX statements that
|
|
referenced columns (memories.project, interactions.project,
|
|
interactions.session_id) added by _apply_migrations later in
|
|
init_db. On a fresh install this worked because CREATE TABLE
|
|
created the tables with the new columns before the CREATE INDEX
|
|
ran, but on UPGRADE from a pre-Phase-9 schema the CREATE TABLE
|
|
IF NOT EXISTS was a no-op and the CREATE INDEX hit
|
|
OperationalError: no such column.
|
|
|
|
This test seeds the tables with the OLD pre-Phase-9 shape then
|
|
calls init_db() and verifies that:
|
|
|
|
- init_db does not raise
|
|
- The new columns were added via _apply_migrations
|
|
- The new indexes exist
|
|
|
|
If the bug is reintroduced by moving a CREATE INDEX for a
|
|
migration column back into SCHEMA_SQL, this test will fail
|
|
with OperationalError before reaching the assertions.
|
|
"""
|
|
monkeypatch.setenv("ATOCORE_DATA_DIR", str(tmp_path / "data"))
|
|
original_settings = config.settings
|
|
try:
|
|
config.settings = config.Settings()
|
|
|
|
# Step 1: create the data dir and open a direct connection
|
|
config.ensure_runtime_dirs()
|
|
db_path = config.settings.db_path
|
|
|
|
# Step 2: seed the DB with the old pre-Phase-9 shape. No
|
|
# project/last_referenced_at/reference_count on memories; no
|
|
# project/client/session_id/response/memories_used/chunks_used
|
|
# on interactions. We also need the prerequisite tables
|
|
# (projects, source_documents, source_chunks) because the
|
|
# memories table has an FK to source_chunks.
|
|
with sqlite3.connect(str(db_path)) as conn:
|
|
conn.executescript(
|
|
"""
|
|
CREATE TABLE source_documents (
|
|
id TEXT PRIMARY KEY,
|
|
file_path TEXT UNIQUE NOT NULL,
|
|
file_hash TEXT NOT NULL,
|
|
title TEXT,
|
|
doc_type TEXT DEFAULT 'markdown',
|
|
tags TEXT DEFAULT '[]',
|
|
ingested_at DATETIME DEFAULT CURRENT_TIMESTAMP,
|
|
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
|
);
|
|
|
|
CREATE TABLE source_chunks (
|
|
id TEXT PRIMARY KEY,
|
|
document_id TEXT NOT NULL REFERENCES source_documents(id) ON DELETE CASCADE,
|
|
chunk_index INTEGER NOT NULL,
|
|
content TEXT NOT NULL,
|
|
heading_path TEXT DEFAULT '',
|
|
char_count INTEGER NOT NULL,
|
|
metadata TEXT DEFAULT '{}',
|
|
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
|
);
|
|
|
|
CREATE TABLE memories (
|
|
id TEXT PRIMARY KEY,
|
|
memory_type TEXT NOT NULL,
|
|
content TEXT NOT NULL,
|
|
source_chunk_id TEXT REFERENCES source_chunks(id),
|
|
confidence REAL DEFAULT 1.0,
|
|
status TEXT DEFAULT 'active',
|
|
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
|
|
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
|
);
|
|
|
|
CREATE TABLE projects (
|
|
id TEXT PRIMARY KEY,
|
|
name TEXT UNIQUE NOT NULL,
|
|
description TEXT DEFAULT '',
|
|
status TEXT DEFAULT 'active',
|
|
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
|
|
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
|
);
|
|
|
|
CREATE TABLE interactions (
|
|
id TEXT PRIMARY KEY,
|
|
prompt TEXT NOT NULL,
|
|
context_pack TEXT DEFAULT '{}',
|
|
response_summary TEXT DEFAULT '',
|
|
project_id TEXT REFERENCES projects(id),
|
|
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
|
|
);
|
|
"""
|
|
)
|
|
conn.commit()
|
|
|
|
# Step 3: call init_db — this used to raise on the upgrade
|
|
# path. After the fix it should succeed.
|
|
init_db()
|
|
|
|
# Step 4: verify the migrations ran — Phase 9 columns present
|
|
with sqlite3.connect(str(db_path)) as conn:
|
|
conn.row_factory = sqlite3.Row
|
|
memories_cols = {
|
|
row["name"] for row in conn.execute("PRAGMA table_info(memories)")
|
|
}
|
|
interactions_cols = {
|
|
row["name"]
|
|
for row in conn.execute("PRAGMA table_info(interactions)")
|
|
}
|
|
|
|
assert "project" in memories_cols
|
|
assert "last_referenced_at" in memories_cols
|
|
assert "reference_count" in memories_cols
|
|
|
|
assert "project" in interactions_cols
|
|
assert "client" in interactions_cols
|
|
assert "session_id" in interactions_cols
|
|
assert "response" in interactions_cols
|
|
assert "memories_used" in interactions_cols
|
|
assert "chunks_used" in interactions_cols
|
|
|
|
# Step 5: verify the indexes on migration columns exist
|
|
index_rows = conn.execute(
|
|
"SELECT name FROM sqlite_master WHERE type='index' AND tbl_name IN ('memories','interactions')"
|
|
).fetchall()
|
|
index_names = {row["name"] for row in index_rows}
|
|
|
|
assert "idx_memories_project" in index_names
|
|
assert "idx_interactions_project_name" in index_names
|
|
assert "idx_interactions_session" in index_names
|
|
finally:
|
|
config.settings = original_settings
|