pub struct Settings {
pub default_download_path: Option<String>,
pub default_context_size: Option<u64>,
pub proxy_port: Option<u16>,
pub llama_base_port: Option<u16>,
pub max_download_queue_size: Option<u32>,
pub show_memory_fit_indicators: Option<bool>,
pub max_tool_iterations: Option<u32>,
pub max_stagnation_steps: Option<u32>,
pub default_model_id: Option<i64>,
pub inference_defaults: Option<InferenceConfig>,
pub voice_enabled: Option<bool>,
pub voice_interaction_mode: Option<String>,
pub voice_stt_model: Option<String>,
pub voice_tts_voice: Option<String>,
pub voice_tts_speed: Option<f32>,
pub voice_vad_threshold: Option<f32>,
pub voice_vad_silence_ms: Option<u32>,
pub voice_auto_speak: Option<bool>,
pub voice_input_device: Option<String>,
pub setup_completed: Option<bool>,
}
Application settings structure.
All fields are optional to support partial updates and graceful defaults.
Fields
default_download_path: Option<String>
Default directory for downloading models.

default_context_size: Option<u64>
Default context size for models (e.g., 4096, 8192).

proxy_port: Option<u16>
Port for the OpenAI-compatible proxy server.

llama_base_port: Option<u16>
Base port for llama-server instance allocation (first port in range).
Note: The OpenAI-compatible proxy listens on proxy_port.

max_download_queue_size: Option<u32>
Maximum number of downloads that can be queued (1-50).

show_memory_fit_indicators: Option<bool>
Whether to show memory fit indicators in the HuggingFace browser.

max_tool_iterations: Option<u32>
Maximum iterations for the tool-calling agentic loop.

max_stagnation_steps: Option<u32>
Maximum stagnation steps before stopping the agent loop.

default_model_id: Option<i64>
Default model ID for commands that support a default model.

inference_defaults: Option<InferenceConfig>
Global inference parameter defaults. Applied when neither request nor per-model defaults are specified. If not set, hardcoded defaults are used as the final fallback.

voice_enabled: Option<bool>
Whether voice mode is enabled.

voice_interaction_mode: Option<String>
Voice interaction mode: “ptt” (push-to-talk) or “vad” (voice activity detection).

voice_stt_model: Option<String>
Selected whisper STT model ID (e.g., “base.en”, “small.en-q5_1”).

voice_tts_voice: Option<String>
Selected TTS voice ID (e.g., af_sarah, am_michael).

voice_tts_speed: Option<f32>
TTS playback speed multiplier (0.5–2.0, default 1.0).

voice_vad_threshold: Option<f32>
VAD speech detection threshold (0.0–1.0, default 0.5).

voice_vad_silence_ms: Option<u32>
VAD minimum silence duration in ms before an utterance ends (default 700).

voice_auto_speak: Option<bool>
Whether to automatically speak LLM responses via TTS.

voice_input_device: Option<String>
Preferred audio input device name (None = system default).

setup_completed: Option<bool>
Whether the first-run setup wizard has been completed.
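The fallback chain for inference parameters (request, then per-model, then global `inference_defaults`, then hardcoded) can be sketched with `Option` chaining. The names and the constant here are illustrative, not the crate's actual API:

```rust
// Sketch of layered fallback for a single inference parameter.
// `request`, `per_model`, and `global` mirror the precedence described
// above: a request value wins, then the model's defaults, then the
// global `inference_defaults`, and finally a hardcoded constant.
const HARDCODED_TEMPERATURE: f32 = 0.8; // assumed value, for illustration

fn effective_temperature(
    request: Option<f32>,
    per_model: Option<f32>,
    global: Option<f32>,
) -> f32 {
    request
        .or(per_model)
        .or(global)
        .unwrap_or(HARDCODED_TEMPERATURE)
}

fn main() {
    // Global default applies when request and per-model are unset.
    assert_eq!(effective_temperature(None, None, Some(0.6)), 0.6);
    // Hardcoded fallback applies when nothing is configured.
    assert_eq!(effective_temperature(None, None, None), 0.8);
    // A request value always wins.
    assert_eq!(effective_temperature(Some(0.2), Some(0.9), Some(0.6)), 0.2);
}
```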
Implementations
impl Settings

pub const fn with_defaults() -> Self
Create settings with sensible defaults.
pub const fn effective_proxy_port(&self) -> u16
Get the effective proxy port (with default fallback).
pub const fn effective_llama_base_port(&self) -> u16
Get the effective llama-server base port (with default fallback).
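The effective_* accessors presumably reduce to an unwrap-or-default pattern. A minimal sketch on a cut-down mirror of the struct; the default port constants are assumptions, not the crate's actual values:

```rust
// Assumed default ports, for illustration only.
const DEFAULT_PROXY_PORT: u16 = 8080;
const DEFAULT_LLAMA_BASE_PORT: u16 = 9000;

// Cut-down mirror of the Settings struct.
struct Settings {
    proxy_port: Option<u16>,
    llama_base_port: Option<u16>,
}

impl Settings {
    const fn effective_proxy_port(&self) -> u16 {
        // Match explicitly rather than calling `unwrap_or`, which is
        // not const-callable on older toolchains.
        match self.proxy_port {
            Some(p) => p,
            None => DEFAULT_PROXY_PORT,
        }
    }

    const fn effective_llama_base_port(&self) -> u16 {
        match self.llama_base_port {
            Some(p) => p,
            None => DEFAULT_LLAMA_BASE_PORT,
        }
    }
}

fn main() {
    let s = Settings { proxy_port: Some(3000), llama_base_port: None };
    // Configured value wins; unset field falls back to the default.
    assert_eq!(s.effective_proxy_port(), 3000);
    assert_eq!(s.effective_llama_base_port(), 9000);
}
```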
pub fn merge(&mut self, other: &SettingsUpdate)
Merge a settings update into this one, updating only the fields that are Some.