config: default LLM extractor model haiku -> sonnet
Haiku was producing noisy candidates (31% accept rate on first triage). Sonnet should give tighter extraction with fewer false positives while still catching the same durable-fact patterns. Set ATOCORE_LLM_EXTRACTOR_MODEL=haiku to revert.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -27,7 +27,7 @@ Configuration:
 - Requires the ``claude`` CLI on PATH (``claude --version`` should work).
 - ``ATOCORE_LLM_EXTRACTOR_MODEL`` overrides the model alias (default
-  ``haiku``).
+  ``sonnet``).
 - ``ATOCORE_LLM_EXTRACTOR_TIMEOUT_S`` overrides the per-call timeout
   (default 90 seconds — first invocation is slow because Node.js
   startup plus OAuth check is non-trivial).
@@ -65,7 +65,7 @@ from atocore.observability.logger import get_logger
 log = get_logger("extractor_llm")
 
 LLM_EXTRACTOR_VERSION = "llm-0.2.0"
-DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "haiku")
+DEFAULT_MODEL = os.environ.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")
 DEFAULT_TIMEOUT_S = float(os.environ.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))
 MAX_RESPONSE_CHARS = 8000
 MAX_PROMPT_CHARS = 2000
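The override path in the hunk above is a plain `os.environ.get` read at import time. A minimal sketch of that behavior (the helper names and the injectable `env` parameter are illustrative, not part of the module, which reads `os.environ` directly):

```python
import os

# Mirrors DEFAULT_MODEL: env var wins, otherwise the baked-in default
# ("sonnet" after this commit).
def resolve_model(env=None) -> str:
    env = os.environ if env is None else env
    return env.get("ATOCORE_LLM_EXTRACTOR_MODEL", "sonnet")

# Mirrors DEFAULT_TIMEOUT_S: the env value is a string, parsed to float.
def resolve_timeout_s(env=None) -> float:
    env = os.environ if env is None else env
    return float(env.get("ATOCORE_LLM_EXTRACTOR_TIMEOUT_S", "90"))

# With no override, the new default applies:
assert resolve_model({}) == "sonnet"
# Exporting ATOCORE_LLM_EXTRACTOR_MODEL=haiku reverts to the old behavior:
assert resolve_model({"ATOCORE_LLM_EXTRACTOR_MODEL": "haiku"}) == "haiku"
assert resolve_timeout_s({}) == 90.0
```

One consequence of module-level `os.environ.get` is that the override must be set before the module is first imported; changing the variable afterwards has no effect on `DEFAULT_MODEL`.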