AttractorEx is the primary artifact in this repository.
This folder contains a DOT-driven pipeline engine inspired by strongDM Attractor:
- Parser (`Parser`)
- Validator (`Validator`)
- Authoring fidelity helpers (`Authoring`)
- Execution engine (`Engine`)
- Routing and condition evaluator (`Condition`)
- Handler registry + built-in handlers (`Handlers.*`)
## Independence from Phoenix App

`lib/attractor_ex` does not depend on `AttractorPhoenix` or `AttractorPhoenixWeb` modules.
Dependency boundary:

- Internal references are only `AttractorEx.*`.
- Runtime deps used here are the standard library plus `Jason`.
- Phoenix-specific integration now lives in the separate `AttractorExPhx` adapter layer.
- Phoenix is used by the demo UI app, not by this library code.
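The dependency boundary above can be sketched as a simple source scan. This is an illustrative check only, not the project's actual tooling; the forbidden module names come from the rule stated above.

```elixir
# Grep-style boundary check: library sources must not mention Phoenix modules.
forbidden = ["AttractorPhoenix", "AttractorPhoenixWeb"]

# A sample library source file (stand-in for a real file under lib/attractor_ex/).
source = """
defmodule AttractorEx.Engine do
  alias AttractorEx.Parser
end
"""

# Any match here would be a boundary violation.
violations = Enum.filter(forbidden, &String.contains?(source, &1))
# violations == []
```

In CI this could run over every file under `lib/attractor_ex/` and fail the build on a non-empty result.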
## Public API

```elixir
AttractorEx.run(dot_source, context_map, opts)
AttractorEx.resume(dot_source, checkpoint_or_path, opts)
AttractorEx.start_http_server(port: 4041, store_root: "tmp/attractor_http_store")
AttractorEx.Authoring.analyze(dot_source)
```

Example:
```elixir
dot = """
digraph attractor {
  start [shape=Mdiamond]
  hello [shape=parallelogram, tool_command="echo hello world"]
  done [shape=Msquare]
  start -> hello
  hello -> done
}
"""

{:ok, result} = AttractorEx.run(dot, %{})

checkpoint_path = Path.join(result.logs_root, "checkpoint.json")
{:ok, resumed} = AttractorEx.resume(dot, checkpoint_path, codergen_backend: MyApp.LLMBackend)

{:ok, server_pid} =
  AttractorEx.start_http_server(port: 4041, store_root: "tmp/attractor_http_store")

{:ok, authoring_payload} = AttractorEx.Authoring.analyze(dot)
```

## Authoring Fidelity
AttractorEx.Authoring exposes canonical builder-facing authoring helpers so UI
surfaces can stay aligned with engine semantics.
Implemented authoring features:
- Canonical parse-and-normalize analysis via `AttractorEx.Authoring.analyze/1`
- Stable DOT formatting driven from normalized graphs
- Canonical graph JSON for builder rendering
- Inline validator diagnostics and suggested autofixes
- Built-in graph templates and transform actions for builder workflows
Phoenix app authoring endpoints:
- `GET /api/authoring/templates`
- `POST /api/authoring/analyze`
- `POST /api/authoring/transform`
## HTTP Server Mode

`AttractorEx.start_http_server/1` starts a lightweight Bandit-backed HTTP service around the engine.
The HTTP runtime is durable by default. Run metadata, pending questions, checkpoint
snapshots, append-only event history, and artifact indexes are persisted under
`store_root` so the manager can reload and recover runs after restarts.
Implemented endpoints:
- `POST /pipelines`
- `GET /pipelines/:id`
- `GET /pipelines/:id/events`
- `POST /pipelines/:id/cancel`
- `POST /pipelines/:id/resume`
- `GET /pipelines/:id/graph`
- `GET /pipelines/:id/questions`
- `POST /pipelines/:id/questions/:qid/answer`
- `GET /pipelines/:id/checkpoint`
- `GET /pipelines/:id/context`
Compatibility aliases for the definition-of-done checklist:
- `POST /run` delegates to `POST /pipelines`
- `GET /status?pipeline_id=...` (or `?id=...`) delegates to `GET /pipelines/:id`
- `POST /answer` accepts `pipeline_id`, `question_id` (or `qid`), and `answer` (or `value`)
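The `POST /answer` aliasing can be sketched as a small normalization step. This is a hedged illustration; the actual parameter plumbing inside the HTTP layer is an assumption, and only the alias names documented above are used.

```elixir
# Normalize compatibility aliases into canonical parameter names.
normalize_answer = fn params ->
  %{
    "pipeline_id" => params["pipeline_id"],
    "question_id" => params["question_id"] || params["qid"],
    "answer" => params["answer"] || params["value"]
  }
end

normalized =
  normalize_answer.(%{"pipeline_id" => "p1", "qid" => "q7", "value" => "approve"})
# => %{"pipeline_id" => "p1", "question_id" => "q7", "answer" => "approve"}
```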
Graph endpoint formats:
- Default: `GET /pipelines/:id/graph` returns a native SVG graph rendering.
- `GET /pipelines/:id/graph?format=dot` returns raw DOT.
- `GET /pipelines/:id/graph?format=json` returns parsed graph JSON.
- `GET /pipelines/:id/graph?format=mermaid` returns Mermaid flowchart text.
- `GET /pipelines/:id/graph?format=text` returns a plain-text graph summary.
HTTP service hardening:
- Empty pipeline submissions are rejected with `400`.
- Unsupported graph formats are rejected with `400` and a supported-format list.
- Responses include `cache-control: no-store` and `x-content-type-options: nosniff`.
- JSON request parsing is limited to a 1 MB body by default.
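The format handling and the `400`-with-supported-list behavior described above can be sketched as a small resolver. The dispatch details are assumptions; the format names and default come from the list above.

```elixir
# Resolve a requested graph format, defaulting to SVG and rejecting unknowns.
supported_formats = ["svg", "dot", "json", "mermaid", "text"]

resolve_format = fn raw ->
  format = raw || "svg"

  if format in supported_formats do
    {:ok, format}
  else
    # Mirrors the hardening rule: 400 plus the supported-format list.
    {:error, 400, %{"supported" => supported_formats}}
  end
end

{:ok, "svg"} = resolve_format.(nil)
{:error, 400, _body} = resolve_format.("png")
```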
Human-in-the-loop web flow:
- Submit a pipeline containing `wait.human`.
- Poll `GET /pipelines/:id/questions` for pending questions.
- Send a choice to `POST /pipelines/:id/questions/:qid/answer`.
- If the run is cancelled after a persisted checkpoint, inspect `resume_ready` from `GET /pipelines/:id`.
- Use `POST /pipelines/:id/resume` only when that cancelled packet has a checkpoint, no pending questions, and a recorded human answer.
- Subscribe to `GET /pipelines/:id/events` for SSE status updates.
Replay and recovery details:
- `GET /pipelines/:id/events?after=<sequence>` replays persisted events after a known sequence number.
- Incomplete runs are reloaded on boot and resumed from their latest checkpoint when one exists.
- Accepted human answers are persisted into run context and checkpoint-backed context when available, so post-answer cancelled packets remain durably inspectable.
- `POST /pipelines/:id/resume` admits one explicit checkpoint-backed resume for a cancelled run only when the checkpoint exists, no questions remain, and a human answer is recorded.
- Persisted runs index artifacts discovered under each run directory so operators can inspect generated files alongside checkpoints and events.
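The resume gate described above can be sketched as a predicate. The field names here are illustrative, not the engine's actual run-packet schema.

```elixir
# A cancelled run is resume-ready only when all three conditions hold:
# a persisted checkpoint, no pending questions, and a recorded human answer.
resume_ready? = fn run ->
  run.status == :cancelled and
    run.checkpoint_path != nil and
    run.pending_questions == [] and
    run.human_answer != nil
end

ready =
  resume_ready?.(%{
    status: :cancelled,
    checkpoint_path: "tmp/run/checkpoint.json",
    pending_questions: [],
    human_answer: "approve"
  })
# => true
```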
## Configuring LLM Nodes (codergen)

AttractorEx treats `box` nodes (or `type="codergen"`) as LLM stages.
Handler behavior:
- Prompt source: node `prompt` (fallback: node `label`).
- Variable expansion: `$goal` from graph-level `goal`.
- Preferred backend selection: `opts[:llm_client]` using the unified LLM client.
- Legacy backend selection: `opts[:codergen_backend]`.
- Legacy backend contract: module with `run(node, prompt, context)` returning a `String` (written to `response.md`) or an `%AttractorEx.Outcome{}` (full control of status/context updates).
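The `$goal` expansion described above can be sketched in a few lines. This is a minimal illustration; the engine's real expansion logic may differ.

```elixir
# Expand the $goal variable in a node prompt from the graph-level goal attr.
expand_prompt = fn prompt, graph_attrs ->
  String.replace(prompt, "$goal", Map.get(graph_attrs, "goal", ""))
end

expanded =
  expand_prompt.("Implement the plan for: $goal", %{"goal" => "add retry logic"})
# => "Implement the plan for: add retry logic"
```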
Unified client contract:
Build the client with a providers map and an optional default:

```elixir
%AttractorEx.LLM.Client{providers: %{"openai" => MyAdapter}, default_provider: "openai"}
```

or use `AttractorEx.LLM.Client.from_env/1` with:

```elixir
config :attractor_phoenix, :attractor_ex_llm,
  providers: %{"openai" => MyAdapter},
  default_provider: "openai"
```
Adapter module contract:
```elixir
complete(%AttractorEx.LLM.Request{}) :: %AttractorEx.LLM.Response{} | {:error, term()}
```
Node attrs used for unified request:
`llm_model`, `llm_provider`, `reasoning_effort`, `max_tokens`, `temperature`
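Lifting those node attrs into request fields can be sketched as below. The attr names come from the list above; the coercion rules and field names are assumptions, not the library's actual mapping code.

```elixir
# Map string-valued DOT node attrs into typed request options.
build_request_opts = fn attrs ->
  %{
    model: attrs["llm_model"],
    provider: attrs["llm_provider"],
    reasoning_effort: attrs["reasoning_effort"],
    max_tokens: attrs["max_tokens"] && String.to_integer(attrs["max_tokens"]),
    temperature: attrs["temperature"] && String.to_float(attrs["temperature"])
  }
end

opts = build_request_opts.(%{"llm_model" => "gpt-4o", "temperature" => "0.2"})
```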
Higher-level client helpers:
- `generate/2` and `generate_with_request/2`
- `accumulate_stream/2` to turn raw streaming events into a final `%AttractorEx.LLM.Response{}`
- `generate_object/2` and `stream_object/2` for JSON object decoding
- `stream_object_deltas/2` to inject typed `:object_delta` events while a JSON stream is still in flight
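Stream accumulation of the kind `accumulate_stream/2` performs can be illustrated with simplified tuple events. The real helper works over the library's typed event structs; the shapes below are stand-ins.

```elixir
# Collapse streaming text deltas into the final response text.
events = [
  {:text_delta, "Plan "},
  {:text_delta, "the "},
  {:text_delta, "change"},
  {:finish, :stop}
]

final_text =
  events
  |> Enum.flat_map(fn
    {:text_delta, chunk} -> [chunk]
    _other -> []
  end)
  |> Enum.join()
# => "Plan the change"
```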
Message content:
`AttractorEx.LLM.Message.content` accepts either plain text or a list of `AttractorEx.LLM.MessagePart` structs for richer multimodal/tool/thinking payloads.
Reliability hooks:
- Request/client retry policies normalize provider errors into `AttractorEx.LLM.Error`.
- Request-level `cache` metadata is translated into provider cache hooks where supported.
Included native adapters under `lib/attractor_phoenix/llm_adapters/`:

- `openai.ex` for OpenAI Responses API request/stream translation
- `anthropic.ex` for Anthropic Messages API request/stream translation
- `gemini.ex` for Gemini generate/stream content translation
Example backend module:
```elixir
defmodule MyApp.LLMBackend do
  alias AttractorEx.Outcome

  def run(node, prompt, _context) do
    # Call your LLM provider here.
    text = "Response for #{node.id}: #{prompt}"
    Outcome.success(%{"responses" => %{node.id => text}}, "LLM completed")
  end
end
```

Run with the backend:

```elixir
AttractorEx.run(dot_source, %{}, codergen_backend: MyApp.LLMBackend)
```

Run with the unified client:

```elixir
llm_client = %AttractorEx.LLM.Client{
  providers: %{"openai" => MyAdapter},
  default_provider: "openai"
}

AttractorEx.run(dot_source, %{}, llm_client: llm_client)
```

Default-client helpers are also available for applications that want a process-wide singleton:
```elixir
client = AttractorEx.LLM.Client.from_env()
AttractorEx.LLM.Client.put_default(client)

response =
  AttractorEx.LLM.Client.generate(%AttractorEx.LLM.Request{
    model: "gpt-5.2",
    messages: [%AttractorEx.LLM.Message{role: :user, content: "Plan the change"}]
  })
```

Artifacts written by the codergen stage:

- `prompt.md`
- `response.md`
- `status.json`
`status.json` follows the Appendix C contract:

- `outcome`
- `preferred_next_label`
- `suggested_next_ids`
- `context_updates`
- `notes`

Backward-compatible aliases such as `status` and `preferred_label` are still emitted.
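An Appendix C-style payload can be sketched as a plain map using the keys listed above; the values here are illustrative, not output from a real run.

```elixir
# Shape of a status.json payload, including the backward-compatible aliases.
status = %{
  "outcome" => "success",
  "preferred_next_label" => "done",
  "suggested_next_ids" => ["done"],
  "context_updates" => %{"responses" => %{"hello" => "hello world"}},
  "notes" => "LLM completed",
  # Legacy aliases still emitted alongside the canonical keys:
  "status" => "success",
  "preferred_label" => "done"
}

Map.fetch!(status, "outcome")
# => "success"
```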
## Coding Agent Loop Spec Compliance
Coding-agent loop behavior is implemented in the Agent session modules and tracked in:

- `lib/attractor_ex/CODING_AGENT_LOOP_COMPLIANCE.md`
- Source spec: https://github.com/strongdm/attractor/blob/main/coding-agent-loop-spec.md

Unified LLM behavior is tracked in:

- `lib/attractor_ex/UNIFIED_LLM_SPEC_COMPLIANCE.md`
- Source spec: https://github.com/strongdm/attractor/blob/main/unified-llm-spec.md

Core Attractor engine behavior is tracked in:

- `lib/attractor_ex/ATTRACTOR_SPEC_COMPLIANCE.md`
- Source spec: https://github.com/strongdm/attractor/blob/main/attractor-spec.md
These compliance docs record an implemented / partial / not implemented status per upstream section and should be updated whenever the upstream spec content changes.
Current coding-agent highlights:
- `ProviderProfile.openai/1`, `ProviderProfile.anthropic/1`, and `ProviderProfile.gemini/1` now expose provider-aligned tool bundles and capability metadata instead of a single shared tool list. OpenAI includes `apply_patch`, Anthropic and Gemini include `edit_file`, and Gemini also includes `read_many_files` plus `list_dir`, with opt-in `web_search`/`web_fetch` support via `ProviderProfile.gemini(web_tools: true)`.
- `AttractorEx.Agent.LocalExecutionEnvironment` now exposes file IO, directory listing, globbing, grep, shell execution, and environment metadata through the `ExecutionEnvironment` behaviour.
- `AttractorEx.Agent.ApplyPatch` backs the OpenAI-facing `apply_patch` tool for local sessions, handling add/delete/update/move operations in the appendix-style patch envelope.
- `AttractorEx.Agent.Session` validates object-style tool arguments, emits spec-style typed session events (including synthesized assistant text deltas and full-output `tool_call_end` host events), layers ancestor project instruction files such as `AGENTS.md`, `CODEX.md`, and `.codex/instructions.md` into the default prompt context under a shared 32 KB budget, and manages spec-style subagent tools (`spawn_agent`, `send_input`, `wait`, `close_agent`) with depth limits.
- `AttractorEx.Agent.ProviderProfile` exposes provider-specific base prompt guidance, supports deterministic custom-tool registration/override via `register_tool/2` and `register_tools/2`, and publishes a maintained OpenAI/Anthropic/Gemini compatibility matrix covering implemented tool names, reference tool names, capability flags, instruction files, reasoning-option paths, and shared event kinds.
## How to Extract into Another Project

- Copy `lib/attractor_ex/` into your project under `lib/`.
- Copy `lib/attractor_ex.ex` (the public entrypoint module).
- Add `{:jason, "~> 1.2"}` to dependencies (if not already present).
- Copy the `test/attractor_ex/` tests (recommended) and run them.
- Optional: copy the `test/support/attractor_ex_test_*` backend fixtures for spec-style test scenarios.
## Verification Commands

```sh
mix test test/attractor_ex
mix coveralls
```
Coverage is configured to enforce a 90% minimum for AttractorEx scope.
## Spec Reference
- https://github.com/strongdm/attractor
- https://github.com/strongdm/attractor/blob/main/attractor-spec.md
- https://github.com/strongdm/attractor/blob/main/coding-agent-loop-spec.md
- https://github.com/strongdm/attractor/blob/main/unified-llm-spec.md
- Local compliance docs:
  - `lib/attractor_ex/ATTRACTOR_SPEC_COMPLIANCE.md`
  - `lib/attractor_ex/CODING_AGENT_LOOP_COMPLIANCE.md`
  - `lib/attractor_ex/UNIFIED_LLM_SPEC_COMPLIANCE.md`
- Baseline commit currently implemented/tested against: `2f892efd63ee7c11f038856b90aae57c067b77c2` (verified 2026-03-06)
## Keeping Up with Upstream

- Refresh the local reference clone: `git -C ..\_attractor_reference fetch --all --prune`
- Compare the baseline: `git -C ..\_attractor_reference rev-parse HEAD`
- If it changed, review the spec diff and update the `AttractorEx` tests first, then the implementation.