pub enum LlmStreamEvent {
    TextDelta {
        content: String,
    },
    ReasoningDelta {
        content: String,
    },
    ToolCallDelta {
        index: usize,
        id: Option<String>,
        name: Option<String>,
        arguments: Option<String>,
    },
    Done {
        finish_reason: String,
    },
}
A single event produced by a streaming LLM response.
These low-level events are the currency of crate::ports::LlmCompletionPort;
they are parsed by adapter crates from raw SSE frames and handed to
gglib-agent's stream collector, which:
- Forwards TextDelta items directly to the caller's AgentEvent channel so text appears in real time.
- Accumulates ToolCallDelta fragments until the stream ends, then assembles them into ToolCall values.
- Waits for Done before triggering tool execution.
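The dispatch described above can be sketched as follows. This is a minimal illustration, not the crate's implementation: the enum is reproduced in miniature, and `Collected` and `collect` are hypothetical names (the real collector forwards deltas on an AgentEvent channel rather than buffering text locally).

```rust
// Minimal reproduction of the event enum for this sketch.
#[derive(Debug)]
enum LlmStreamEvent {
    TextDelta { content: String },
    ReasoningDelta { content: String },
    ToolCallDelta { index: usize, id: Option<String>, name: Option<String>, arguments: Option<String> },
    Done { finish_reason: String },
}

// Hypothetical result type; illustrative only.
#[derive(Debug, Default)]
struct Collected {
    text: String,      // in the real collector, forwarded to the caller as it arrives
    reasoning: String, // kept in a separate buffer; never sent back as context
    finish_reason: Option<String>,
}

fn collect(events: Vec<LlmStreamEvent>) -> Collected {
    let mut out = Collected::default();
    for ev in events {
        match ev {
            LlmStreamEvent::TextDelta { content } => out.text.push_str(&content),
            LlmStreamEvent::ReasoningDelta { content } => out.reasoning.push_str(&content),
            // Tool-call fragments are accumulated by index until Done arrives.
            LlmStreamEvent::ToolCallDelta { .. } => {}
            LlmStreamEvent::Done { finish_reason } => out.finish_reason = Some(finish_reason),
        }
    }
    out
}

fn main() {
    let events = vec![
        LlmStreamEvent::ReasoningDelta { content: "thinking...".into() },
        LlmStreamEvent::TextDelta { content: "Hello, ".into() },
        LlmStreamEvent::TextDelta { content: "world".into() },
        LlmStreamEvent::Done { finish_reason: "stop".into() },
    ];
    let c = collect(events);
    assert_eq!(c.text, "Hello, world");
    assert_eq!(c.reasoning, "thinking...");
    assert_eq!(c.finish_reason.as_deref(), Some("stop"));
    println!("text: {}", c.text);
}
```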
Variants
TextDelta
An incremental text fragment from the model’s response.
ReasoningDelta
An incremental reasoning/thinking fragment (CoT tokens).
Produced by reasoning-capable models (e.g. DeepSeek R1, QwQ) when
llama-server is started with --reasoning-format deepseek. The
runtime adapter maps delta["reasoning_content"] frames to this
variant; the stream collector forwards them as
AgentEvent::ReasoningDelta and accumulates them in a separate
buffer that is never sent back to the LLM as context.
ToolCallDelta
An incremental fragment of a tool-call request.
The adapter crate streams these before the model has finished
generating the full arguments JSON. The stream collector accumulates
all deltas for a given index into a single ToolCall.
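Accumulation by index can be sketched like this. It is a sketch under assumptions: the exact shape of `ToolCall` and the `assemble_tool_calls` helper are illustrative, not the crate's API; fragments are modeled as plain tuples mirroring ToolCallDelta's fields.

```rust
// Hypothetical assembled form of a tool call; illustrative only.
#[derive(Debug, Clone, Default, PartialEq)]
struct ToolCall {
    id: String,
    name: String,
    arguments: String, // full arguments JSON, concatenated from fragments
}

// Each tuple mirrors ToolCallDelta: (index, id, name, arguments).
fn assemble_tool_calls(
    deltas: &[(usize, Option<&str>, Option<&str>, Option<&str>)],
) -> Vec<ToolCall> {
    let mut calls: Vec<ToolCall> = Vec::new();
    for &(index, id, name, args) in deltas {
        // Grow the list so every seen index has a slot.
        if calls.len() <= index {
            calls.resize(index + 1, ToolCall::default());
        }
        let call = &mut calls[index];
        // `id` and `name` typically arrive once, on the first fragment.
        if let Some(id) = id {
            call.id = id.to_string();
        }
        if let Some(name) = name {
            call.name = name.to_string();
        }
        // The arguments JSON arrives in pieces and is concatenated verbatim.
        if let Some(args) = args {
            call.arguments.push_str(args);
        }
    }
    calls
}

fn main() {
    let deltas = [
        (0, Some("call_1"), Some("get_weather"), Some("{\"city\":")),
        (0, None, None, Some("\"Oslo\"}")),
    ];
    let calls = assemble_tool_calls(&deltas);
    assert_eq!(calls[0].name, "get_weather");
    assert_eq!(calls[0].arguments, "{\"city\":\"Oslo\"}");
    println!("{:?}", calls);
}
```

Keying on `index` rather than `id` matters because later fragments usually carry only an arguments chunk, with `id` and `name` set to None.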
Done
Signals the end of the stream.
Every conforming stream must end with exactly one Done item.
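A minimal check of this invariant might look like the following; `conforms` is a hypothetical helper, and the enum is trimmed to two variants for brevity.

```rust
// Trimmed reproduction of the enum, enough to express the invariant.
enum LlmStreamEvent {
    TextDelta { content: String },
    Done { finish_reason: String },
}

// A stream conforms if it contains exactly one Done, and Done is last.
fn conforms(events: &[LlmStreamEvent]) -> bool {
    let done_count = events
        .iter()
        .filter(|e| matches!(e, LlmStreamEvent::Done { .. }))
        .count();
    done_count == 1 && matches!(events.last(), Some(LlmStreamEvent::Done { .. }))
}

fn main() {
    let ok = vec![
        LlmStreamEvent::TextDelta { content: "hi".into() },
        LlmStreamEvent::Done { finish_reason: "stop".into() },
    ];
    assert!(conforms(&ok));
    assert!(!conforms(&[])); // an empty stream has no Done, so it does not conform
}
```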
Trait Implementations

impl Clone for LlmStreamEvent
    fn clone(&self) -> LlmStreamEvent
    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.