This workstream closes the remaining gaps in the AI-facing layers so the project can lead across runtime, agent loop, and provider abstraction together.
Primary inspiration:
- brynary-attractor, for coherent boundaries across attractor, coding-agent, and unified-LLM surfaces
- smartcomputer-ai-forge, for typed contracts and explicit gap management
Goal
Reduce the major partial and not-implemented areas in the unified-LLM and coding-agent surfaces.
Planned Capabilities
- Native provider adapters for OpenAI, Anthropic, and Gemini
- Streaming parity and consistent stream translation
- Typed retry and error taxonomy
- Prompt caching hooks where providers support them
- Richer multimodal and content-part round-trip fidelity
- Incremental typed object streaming
- Stronger provider-compatibility tests for tool behavior
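Streaming parity and consistent stream translation mean that each provider's streaming format is mapped into one unified event shape before it reaches the agent loop. The repo's surfaces are Elixir (`AttractorEx.*`); the sketch below uses Python for brevity, and the event fields and translator names are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Iterator

# Hypothetical unified stream event; the field names are illustrative,
# not the actual AttractorEx event shape.
@dataclass
class StreamEvent:
    kind: str   # "text_delta" | "finish"
    data: str

def translate_openai(chunks: Iterator[dict]) -> Iterator[StreamEvent]:
    """Translate OpenAI-style streaming chunks into unified events."""
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {})
        if delta.get("content"):
            yield StreamEvent("text_delta", delta["content"])
        if choice.get("finish_reason"):
            yield StreamEvent("finish", choice["finish_reason"])

def translate_anthropic(events: Iterator[dict]) -> Iterator[StreamEvent]:
    """Translate Anthropic-style stream events into the same unified events."""
    for ev in events:
        if ev["type"] == "content_block_delta":
            yield StreamEvent("text_delta", ev["delta"]["text"])
        elif ev["type"] == "message_stop":
            yield StreamEvent("finish", "stop")
```

Because both translators emit the same `StreamEvent` shape, downstream consumers (the agent loop, host event handlers) never branch on which provider produced the stream.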
Work Items
- Implement native provider adapters at the unified-LLM layer.
- Add streaming parity and normalized stream translation.
- Add typed retry and error handling with backoff and retry-after semantics.
- Add prompt caching support where available.
- Add stronger multimodal/message-part round-trip handling.
- Add incremental typed object streaming rather than full accumulation only.
- Expand provider-compatibility tests for agent-loop tool semantics and host event behavior.
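The typed retry item above combines an error taxonomy (which failures are retryable) with backoff and retry-after semantics (how long to wait). A minimal sketch of that combination in Python, where the exception class names and function signature are assumptions for illustration rather than the project's Elixir types:

```python
import random
import time

# Illustrative retryable-error taxonomy; these class names are
# assumptions, not the actual AttractorEx.LLM.Error types.
class RateLimited(Exception):
    def __init__(self, retry_after=None):
        super().__init__("rate limited")
        self.retry_after = retry_after  # seconds, e.g. from a Retry-After header

class ServerError(Exception):
    """A retryable 5xx-style provider failure."""

RETRYABLE = (RateLimited, ServerError)

def with_retries(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry retryable failures with exponential backoff plus jitter.

    A provider-supplied retry-after hint, when present, overrides the
    computed backoff. Non-retryable errors propagate immediately.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RETRYABLE as err:
            if attempt == max_attempts:
                raise
            delay = base_delay * (2 ** (attempt - 1))
            delay += random.uniform(0, base_delay)  # jitter avoids thundering herd
            hint = getattr(err, "retry_after", None)
            sleep(hint if hint is not None else delay)
```

Making the taxonomy typed (rather than matching on strings or status codes at every call site) is what lets a single policy like this apply uniformly across all provider adapters.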
Deliverables
- Native adapters and streaming support
- Retry and error model
- Typed object streaming
- Expanded provider conformance coverage
- Reduced partial and not-implemented status in compliance docs
Implemented In This Repo
- Native OpenAI, Anthropic, and Gemini adapters now translate normalized requests for both `complete` and `stream`.
- `AttractorEx.LLM.Error` and `AttractorEx.LLM.RetryPolicy` now provide typed retryable failures, backoff, and retry-after support.
- Request-level cache hooks are translated where provider APIs expose compatible prompt/cache controls.
- `AttractorEx.LLM.Client.stream_object_deltas/2` now emits incremental typed `:object_delta` events for newline-delimited and full-document JSON streams.
- Provider adapter tests and unified-LLM client tests now cover native adapter translation, retry behavior, and object streaming.
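The core idea behind incremental object streaming is emitting each typed object as soon as its bytes are complete instead of accumulating the whole response first. A minimal sketch of the newline-delimited case in Python (the actual `AttractorEx.LLM.Client.stream_object_deltas/2` is Elixir and also handles full-document JSON streams, which this sketch does not):

```python
import json
from typing import Iterator

def object_deltas(chunks: Iterator[str]) -> Iterator[dict]:
    """Yield one parsed object per complete newline-delimited JSON line.

    Chunks may split a line anywhere; partial lines stay buffered until
    their terminating newline (or end of stream) arrives, so objects are
    emitted incrementally rather than after full accumulation.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    if buffer.strip():          # flush a final line with no trailing newline
        yield json.loads(buffer)
```

A consumer can therefore start acting on the first object while later ones are still being generated by the provider.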
Success Criteria
This workstream is done when:
- the compliance docs materially improve in the unified-LLM and coding-agent areas
- provider behavior is more uniform across the OpenAI, Anthropic, and Gemini adapters
- streaming and retry behavior are first-class rather than partial scaffolding
- the system tells a coherent full-stack story across runtime, agent loop, and provider abstractions