Module llm_completion

Port definition for streaming LLM chat completions.

This module defines the infrastructure contract that the agent loop uses to drive an LLM. The port is intentionally narrow: it speaks domain types (AgentMessage, ToolDefinition, LlmStreamEvent) and hides all vendor wire-format details (OpenAI JSON schemas, SSE framing, HTTP headers, etc.) behind the trait boundary.
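As a rough illustration, the port contract might take the following shape. This is a minimal, synchronous sketch: the real trait is presumably async/streaming, and every field and method name beyond `AgentMessage`, `ToolDefinition`, `LlmStreamEvent`, and `LlmCompletionPort` is an assumption, not this crate's actual API.

```rust
// Hypothetical sketch of the port and its domain types. Field and method
// names are assumptions; the real trait is presumably async.

/// Vendor-agnostic chat message.
pub struct AgentMessage {
    pub role: String,
    pub content: String,
}

/// Vendor-agnostic tool description.
pub struct ToolDefinition {
    pub name: String,
    pub description: String,
}

/// Domain-level events produced while streaming a completion.
#[derive(Debug, PartialEq)]
pub enum LlmStreamEvent {
    TextDelta(String),
    Done,
}

/// Port that the agent loop uses to drive a streaming LLM.
pub trait LlmCompletionPort {
    fn stream_completion(
        &self,
        messages: &[AgentMessage],
        tools: &[ToolDefinition],
    ) -> Box<dyn Iterator<Item = LlmStreamEvent>>;
}

/// Test double: yields canned events. No HTTP, no vendor wire format.
struct FakeLlm;

impl LlmCompletionPort for FakeLlm {
    fn stream_completion(
        &self,
        _messages: &[AgentMessage],
        _tools: &[ToolDefinition],
    ) -> Box<dyn Iterator<Item = LlmStreamEvent>> {
        Box::new(
            vec![
                LlmStreamEvent::TextDelta("Hello".into()),
                LlmStreamEvent::Done,
            ]
            .into_iter(),
        )
    }
}

fn main() {
    let port = FakeLlm;
    let msgs = [AgentMessage { role: "user".into(), content: "hi".into() }];
    // The agent loop consumes only domain events.
    let text: String = port
        .stream_completion(&msgs, &[])
        .filter_map(|e| match e {
            LlmStreamEvent::TextDelta(t) => Some(t),
            _ => None,
        })
        .collect();
    assert_eq!(text, "Hello");
}
```

Because the port only speaks domain types, a test double like `FakeLlm` above can stand in for a real vendor adapter in agent-loop tests.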

Adapter responsibility

A concrete implementation (e.g. in gglib-axum or gglib-proxy) is responsible for:

  1. Translating &[AgentMessage] into the vendor’s messages array, serialising ToolCall::arguments (serde_json::Value) into the JSON string form that OpenAI-compatible APIs require.
  2. Translating &[ToolDefinition] into the vendor’s tools array.
  3. Parsing the streaming SSE response into a sequence of LlmStreamEvent values, accumulating incremental tool-call deltas where necessary.
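Step 3 is the fiddly part: OpenAI-compatible vendors stream a tool call's name and JSON arguments as string fragments spread across multiple SSE chunks, and the adapter must stitch them back together before emitting a complete event. A minimal sketch of that accumulation (the accumulator type and its field names are hypothetical, not part of this module):

```rust
// Hypothetical accumulator for incremental tool-call deltas arriving over
// SSE. Type and field names are assumptions for illustration only.

#[derive(Default)]
struct ToolCallAccumulator {
    name: String,
    // Vendors stream the JSON arguments as raw string fragments; the
    // adapter concatenates them and parses the result once complete.
    arguments_json: String,
}

impl ToolCallAccumulator {
    /// Apply one SSE delta; either field may be absent in a given chunk.
    fn apply_delta(&mut self, name: Option<&str>, args_fragment: Option<&str>) {
        if let Some(n) = name {
            self.name.push_str(n);
        }
        if let Some(a) = args_fragment {
            self.arguments_json.push_str(a);
        }
    }
}

fn main() {
    let mut acc = ToolCallAccumulator::default();
    // Simulated deltas: the name arrives first, then argument fragments.
    acc.apply_delta(Some("get_weather"), None);
    acc.apply_delta(None, Some("{\"city\":"));
    acc.apply_delta(None, Some("\"Paris\"}"));
    assert_eq!(acc.name, "get_weather");
    assert_eq!(acc.arguments_json, "{\"city\":\"Paris\"}");
}
```

Once the fragments form valid JSON, the adapter would parse `arguments_json` into a `serde_json::Value` to populate `ToolCall::arguments` before handing the event to the agent loop.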

The agent loop never sees HTTP, never sees reqwest, and never contains a single OpenAI-specific field name.

Traits

LlmCompletionPort
Port that the agent loop uses to drive a streaming LLM.