Port definition for streaming LLM chat completions.
This module defines the infrastructure contract that the agent loop uses to
drive an LLM. The port is intentionally narrow: it speaks domain types
(`AgentMessage`, `ToolDefinition`, `LlmStreamEvent`) and hides all
vendor wire-format details (OpenAI JSON schemas, SSE framing, HTTP headers,
etc.) behind the trait boundary.
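As a rough illustration, such a port might look like the sketch below. The concrete names, the event variants, and the async/streaming machinery in gglib will differ; everything here is an assumption made for the example, with domain types stubbed minimally:

```rust
// Illustrative sketch only: the real gglib types and trait shape may differ.
// A production version would likely be async and return a futures Stream.

#[derive(Debug, Clone)]
pub enum LlmStreamEvent {
    TextDelta(String),
    ToolCallDelta { index: usize, name: String, arguments_fragment: String },
    Done,
}

#[derive(Debug, Clone)]
pub struct AgentMessage {
    pub role: String,
    pub content: String,
}

#[derive(Debug, Clone)]
pub struct ToolDefinition {
    pub name: String,
    pub description: String,
}

/// Port that the agent loop uses to drive a streaming LLM.
/// The agent loop depends only on this trait, never on an HTTP client.
pub trait LlmCompletion {
    type Stream: Iterator<Item = LlmStreamEvent>;
    fn complete(&self, messages: &[AgentMessage], tools: &[ToolDefinition]) -> Self::Stream;
}
```

The key property is that every type in the signature is a domain type: an adapter can be swapped out (or mocked in tests) without the agent loop changing.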
§Adapter responsibility
A concrete implementation (e.g. in `gglib-axum` or `gglib-proxy`) is
responsible for:
- Translating `&[AgentMessage]` into the vendor's `messages` array, serialising `ToolCall::arguments` (a `serde_json::Value`) into the JSON string form that OpenAI-compatible APIs require.
- Translating `&[ToolDefinition]` into the vendor's `tools` array.
- Parsing the streaming SSE response into a sequence of `LlmStreamEvent` values, accumulating incremental tool-call deltas where necessary.
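The last point is the fiddly part: OpenAI-compatible APIs stream a tool call's name and its JSON arguments as string fragments spread across many chunks, keyed by a tool-call index. A minimal sketch of how an adapter might accumulate them (the type and method names here are hypothetical, not gglib's):

```rust
use std::collections::BTreeMap;

// Hypothetical helper an adapter might use; not part of the actual port.
// Assumes OpenAI-style chunks: each fragment carries an index, optionally a
// piece of the function name, and a piece of the arguments JSON text.
#[derive(Default, Debug)]
struct ToolCallAccumulator {
    // index -> (accumulated name, accumulated arguments JSON text)
    calls: BTreeMap<usize, (String, String)>,
}

impl ToolCallAccumulator {
    fn push(&mut self, index: usize, name: Option<&str>, args_fragment: &str) {
        let entry = self.calls.entry(index).or_default();
        if let Some(n) = name {
            entry.0.push_str(n);
        }
        entry.1.push_str(args_fragment);
    }

    // Once the stream signals completion, yield the fully assembled calls
    // in index order, ready to be parsed into domain tool-call values.
    fn finish(self) -> Vec<(String, String)> {
        self.calls.into_values().collect()
    }
}
```

Only after `finish` does the adapter have complete JSON argument strings it can parse and hand to the agent loop as domain values.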
The agent loop never sees HTTP, never sees `reqwest`, and never contains a
single OpenAI-specific field name.
Traits§
- `LlmCompletion`: Port that the agent loop uses to drive a streaming LLM.