Documentation ¶
Index ¶
- Variables
- type LLM
- func (l *LLM) AddExternalTools(schemas []tools.FunctionSchema, ...)
- func (l *LLM) AddTool(t tools.Tool)
- func (l *LLM) Chat(message string) <-chan Update
- func (l *LLM) ChatUsingContent(ctx context.Context, message content.Content) <-chan Update
- func (l *LLM) ChatUsingMessages(ctx context.Context, messages []Message) <-chan Update
- func (l *LLM) ChatWithContext(ctx context.Context, message string) <-chan Update
- func (l *LLM) Err() error
- func (l *LLM) String() string
- func (l *LLM) WithDebug() *LLM
- func (l *LLM) WithMaxTurns(maxTurns int) *LLM
- type Message
- type Provider
- type ProviderStream
- type StreamStatus
- type TextUpdate
- type ToolCall
- type ToolDoneUpdate
- type ToolStartUpdate
- type ToolStatusUpdate
- type Update
- type UpdateType
Examples ¶
- LLM.AddExternalTools
- LLM.Chat
- LLM.Chat (WithTools)
- LLM.ChatWithContext
- LLM.WithDebug
- LLM.WithMaxTurns
Constants ¶
This section is empty.
Variables ¶
var (
	ErrMaxTurnsReached = errors.New("max turns reached")
)
var ToolCallContextKey = &contextKey{"tool-call"}
ToolCallContextKey is a context key. It can be used in tool functions with Runner.Context().Value() to access the specific ToolCall instance that triggered the current tool execution. The associated value will be of type llms.ToolCall.
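For illustration, a minimal sketch of a tool handler that reads the triggering ToolCall through this key (the handler name is hypothetical, and imports of encoding/json plus the llms and tools packages are assumed). The GetToolCall helper used in the AddExternalTools example below wraps this same lookup.

// Hypothetical handler; params is the raw JSON arguments for the call.
func handleTool(r tools.Runner, params json.RawMessage) tools.Result {
	// The context value is the ToolCall that triggered this execution.
	toolCall, ok := r.Context().Value(llms.ToolCallContextKey).(llms.ToolCall)
	if !ok {
		return tools.Errorf("no tool call in context")
	}
	// toolCall.Name and toolCall.ID identify this specific invocation.
	return tools.SuccessWithLabel(toolCall.Name, map[string]any{"call_id": toolCall.ID})
}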
Functions ¶
This section is empty.
Types ¶
type LLM ¶
type LLM struct {
	// SystemPrompt should return the system prompt for the LLM. It's a function
	// to allow the system prompt to dynamically change throughout a single
	// conversation.
	SystemPrompt func() content.Content
	// contains filtered or unexported fields
}
LLM represents the interface to an LLM provider, maintaining state between individual calls, for example while tool calling is being performed. For this reason, an LLM instance is NOT safe for concurrent use.
func New ¶
New creates a new LLM instance with the specified provider and optional tools. The provider handles communication with the actual LLM service. If tools are provided, they will be available for the LLM to use during conversations.
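A minimal construction sketch, with the provider and variadic tool arguments inferred from the examples below (apiKey and myTool are hypothetical placeholders):

// apiKey and myTool are hypothetical placeholders.
llm := llms.New(
	anthropic.New(apiKey, "claude-3-7-sonnet-latest"), // provider
	myTool, // optional: zero or more tools made available to the LLM
)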
func (*LLM) AddExternalTools ¶
func (l *LLM) AddExternalTools(schemas []tools.FunctionSchema, handler func(r tools.Runner, params json.RawMessage) tools.Result)
AddExternalTools adds one or more external tools to the LLM's toolbox. Unlike regular tools, external tools are usually forwarded to some other code (sometimes over the network) and handled there, before a result is produced. For this reason, a list of tool definitions can be provided, and then the tool's raw JSON parameters are passed into the handler. The handler can use `GetToolCall(r.Context())` to retrieve the `ToolCall` object, which includes the function name (`Name`) and the unique `ID` for the specific call.
Example ¶
This example demonstrates adding and using external tools with OpenAI.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"

	"github.com/blixt/go-llms/llms"
	"github.com/blixt/go-llms/openai"
	"github.com/blixt/go-llms/tools"
)

func main() {
	// Note: Requires OPENAI_API_KEY environment variable to be set.
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		fmt.Println("OPENAI_API_KEY environment variable not set.")
		// Skip example if key is not set.
		return
	}

	// Define external tool schemas within the example
	externalToolSchemas := []tools.FunctionSchema{
		{
			Name:        "get_weather",
			Description: "Get the current weather for a location",
			Parameters: tools.ValueSchema{
				Type: "object",
				Properties: &map[string]tools.ValueSchema{
					"location": {
						Type:        "string",
						Description: "The city and state, e.g. San Francisco, CA",
					},
				},
				Required: []string{"location"},
			},
		},
	}

	// Define single handler for external tools within the example
	handleExternalTool := func(r tools.Runner, params json.RawMessage) tools.Result {
		// Get the specific tool call details (Name, ID) from the context
		toolCall, ok := llms.GetToolCall(r.Context())
		if !ok {
			return tools.Errorf("Could not get tool call details from context")
		}
		// Decode parameters based on the tool name
		switch toolCall.Name {
		case "get_weather":
			var weatherParams struct {
				Location string `json:"location"`
			}
			if err := json.Unmarshal(params, &weatherParams); err != nil {
				return tools.Errorf("Invalid parameters for get_weather: %v", err)
			}
			// Simulate calling an external weather API and return data with a dynamic label.
			return tools.SuccessWithLabel(fmt.Sprintf("Weather for %s", weatherParams.Location), map[string]any{
				"location":    weatherParams.Location,
				"temperature": "70F",
				"condition":   "Sunny",
			})
		default:
			return tools.Errorf("Unknown external tool: %s", toolCall.Name)
		}
	}

	// Create an LLM instance with OpenAI (without tools initially)
	llm := llms.New(openai.New(apiKey, "gpt-4o-mini"))

	// Add external tools and their single handler
	llm.AddExternalTools(externalToolSchemas, handleExternalTool)

	fmt.Println("User: What's the weather in London?")
	fmt.Print("Assistant:\n")

	// Start a chat using the externally defined tool
	for update := range llm.Chat("What's the weather in London?") {
		switch update := update.(type) {
		case llms.TextUpdate:
			fmt.Print(update.Text)
		case llms.ToolStartUpdate:
			// Note: The Tool.Label() for external tools defaults to the Name.
			fmt.Printf("(System: Using tool: %s)\n", update.Tool.Label()) // Shows "get_weather"
		case llms.ToolDoneUpdate:
			// Shows the potentially dynamic label returned by the tool result.
			fmt.Printf("(System: Tool result: %s)\n", update.Result.Label())
		}
	}
	fmt.Println()

	if err := llm.Err(); err != nil {
		log.Fatalf("Chat failed: %v", err)
	}

	/*
		Example Interaction (output depends heavily on model and tool execution):

		User: What's the weather in London?
		Assistant:
		(System: Using tool: get_weather)
		(System: Tool result: Weather for London)
		The weather in London is currently Sunny with a temperature of 70F.
	*/
}
Output:
func (*LLM) AddTool ¶

func (l *LLM) AddTool(t tools.Tool)
AddTool adds a new tool to the LLM's toolbox. If the toolbox doesn't exist yet, it will be created. Tools allow the LLM to perform actions beyond just generating text, such as fetching data, running calculations, or interacting with external systems.
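A minimal sketch of defining and registering a tool after construction, using the tools.Func helper shown in the Chat (WithTools) example below (the echo tool itself is hypothetical):

// EchoParams mirrors the struct-tag style of CommandParams in the
// Chat (WithTools) example.
type EchoParams struct {
	Text string `json:"text" description:"The text to echo back"`
}

echo := tools.Func(
	"Echo",                // label for the tool type
	"Echo the given text", // description
	"echo",                // name used by the LLM
	func(r tools.Runner, p EchoParams) tools.Result {
		return tools.SuccessWithLabel("Echoed", map[string]any{"text": p.Text})
	},
)

llm.AddTool(echo)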
func (*LLM) Chat ¶

func (l *LLM) Chat(message string) <-chan Update
Chat sends a text message to the LLM and immediately returns a channel over which updates will come in. The LLM will use the tools available and keep generating more messages until it's done using tools.
Example ¶
This example demonstrates basic chat functionality using Anthropic.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/blixt/go-llms/anthropic"
	"github.com/blixt/go-llms/content"
	"github.com/blixt/go-llms/llms"
)

func main() {
	// Note: Requires ANTHROPIC_API_KEY environment variable to be set.
	apiKey := os.Getenv("ANTHROPIC_API_KEY")
	if apiKey == "" {
		fmt.Println("ANTHROPIC_API_KEY environment variable not set.")
		// Skip example if key is not set.
		return
	}

	// Create a new LLM instance with Anthropic's Claude Sonnet model
	llm := llms.New(
		anthropic.New(apiKey, "claude-3-7-sonnet-latest"),
	)

	// Optional: Set a system prompt
	llm.SystemPrompt = func() content.Content {
		return content.FromText("You are a helpful assistant.")
	}

	fmt.Println("User: What's the capital of France?")
	fmt.Print("Assistant: ")

	// Start a chat conversation
	for update := range llm.Chat("What's the capital of France?") {
		switch update := update.(type) {
		case llms.TextUpdate:
			fmt.Print(update.Text) // Simulating streaming output
		}
	}
	fmt.Println() // Add a newline after the stream

	// Check for errors after the chat completes
	if err := llm.Err(); err != nil {
		log.Fatalf("Chat failed: %v", err)
	}

	/*
		Example Interaction:

		User: What's the capital of France?
		Assistant: The capital of France is Paris.
	*/
}
Output:
Example (WithTools) ¶
This example demonstrates using tools (function calling) with OpenAI.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/blixt/go-llms/llms"
	"github.com/blixt/go-llms/openai"
	"github.com/blixt/go-llms/tools"
)

func main() {
	// Note: Requires OPENAI_API_KEY environment variable to be set.
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		fmt.Println("OPENAI_API_KEY environment variable not set.")
		// Skip example if key is not set.
		return
	}

	// Define tool parameters struct within the example
	type CommandParams struct {
		Command string `json:"command" description:"The shell command to run"`
	}

	// Create a shell command tool (simulated execution) within the example
	RunCommand := tools.Func(
		"Run Command", // Label for the tool type
		"Run a shell command and return the output", // Description
		"run_command", // Name used by the LLM
		func(r tools.Runner, p CommandParams) tools.Result {
			// We use SuccessWithLabel to provide a dynamic label for this specific execution.
			return tools.SuccessWithLabel(fmt.Sprintf("Executed '%s'", p.Command), map[string]any{
				"output": fmt.Sprintf("Simulated output for: %s", p.Command),
			})
		},
	)

	// Create an LLM instance with the RunCommand tool using OpenAI
	llm := llms.New(
		openai.New(apiKey, "gpt-4o-mini"), // Use a model known for tool use
		RunCommand,                        // Register the tool
	)

	fmt.Println("User: List files in the current directory.")
	fmt.Print("Assistant:\n")

	// Start a chat conversation that might involve tools
	for update := range llm.Chat("List files in the current directory using the run_command tool.") {
		switch update := update.(type) {
		case llms.TextUpdate:
			fmt.Print(update.Text)
		case llms.ToolStartUpdate:
			// Shows the generic label for the tool type being started
			fmt.Printf("(System: Using tool: %s)\n", update.Tool.Label()) // e.g., "Run Command"
		case llms.ToolStatusUpdate:
			// You can optionally report status updates from the tool runner
			fmt.Printf("(System: Tool status: %s - %s)\n", update.Tool.Label(), update.Status)
		case llms.ToolDoneUpdate:
			// Shows the potentially dynamic label returned by the tool result.
			fmt.Printf("(System: Tool result: %s)\n", update.Result.Label())
		}
	}
	fmt.Println() // Add a newline after the stream

	if err := llm.Err(); err != nil {
		log.Fatalf("Chat failed: %v", err)
	}

	/*
		Example Interaction (output depends heavily on model and tool execution):

		User: List files in the current directory.
		Assistant:
		(System: Using tool: Run Command)
		(System: Tool result: Executed 'ls -l')
		Okay, I have simulated running the command. The output is:
		Simulated output for: ls -l
	*/
}
Output:
func (*LLM) ChatUsingContent ¶

func (l *LLM) ChatUsingContent(ctx context.Context, message content.Content) <-chan Update
ChatUsingContent sends a message (which can contain images) to the LLM and immediately returns a channel over which updates will come in. The LLM will use the tools available and keep generating more messages until it's done using tools. The provided context can be used to pass values to tools, set deadlines, cancel, etc.
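A minimal sketch, assuming llm was built with llms.New as in the other examples; content.FromText is shown in the Chat example, and how image parts are attached is left to the content package (not shown here). Standard imports (context, fmt, log, time) are assumed.

func describeScene(llm *llms.LLM) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	msg := content.FromText("Describe this scene.") // image parts omitted here
	for update := range llm.ChatUsingContent(ctx, msg) {
		if text, ok := update.(llms.TextUpdate); ok {
			fmt.Print(text.Text)
		}
	}
	if err := llm.Err(); err != nil {
		log.Printf("chat failed: %v", err)
	}
}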
func (*LLM) ChatUsingMessages ¶

func (l *LLM) ChatUsingMessages(ctx context.Context, messages []Message) <-chan Update
ChatUsingMessages sends a message history to the LLM and immediately returns a channel over which updates will come in. The LLM will use the tools available and keep generating more messages until it's done using tools. The provided context can be used to pass values to tools, set deadlines, cancel, etc.
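A minimal sketch that replays a prior conversation, building the history from the Message fields documented below (standard imports assumed):

func continueConversation(llm *llms.LLM) {
	history := []llms.Message{
		{Role: "user", Content: content.FromText("What's the capital of France?")},
		{Role: "assistant", Content: content.FromText("The capital of France is Paris.")},
		{Role: "user", Content: content.FromText("And how large is its population?")},
	}
	for update := range llm.ChatUsingMessages(context.Background(), history) {
		if text, ok := update.(llms.TextUpdate); ok {
			fmt.Print(text.Text)
		}
	}
}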
func (*LLM) ChatWithContext ¶

func (l *LLM) ChatWithContext(ctx context.Context, message string) <-chan Update
ChatWithContext sends a text message to the LLM and immediately returns a channel over which updates will come in. The LLM will use the tools available and keep generating more messages until it's done using tools. The provided context can be used to pass values to tools, set deadlines, cancel, etc.
Example ¶
This example demonstrates chatting with context using Google Gemini.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/blixt/go-llms/google"
	"github.com/blixt/go-llms/llms"
)

func main() {
	// Note: Requires GOOGLE_API_KEY environment variable to be set.
	apiKey := os.Getenv("GOOGLE_API_KEY")
	if apiKey == "" {
		fmt.Println("GOOGLE_API_KEY environment variable not set.")
		// Skip example if key is not set.
		return
	}

	// Create a new LLM instance with Google's Gemini Flash model
	llm := llms.New(
		google.New("gemini-2.5-flash-preview-04-17").WithGeminiAPI(apiKey),
	)

	// Create a context with a timeout
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	fmt.Println("User: Tell me a short story.")
	fmt.Print("Assistant: ")

	// Start a chat conversation with context
	for update := range llm.ChatWithContext(ctx, "Tell me a short story.") {
		switch update := update.(type) {
		case llms.TextUpdate:
			fmt.Print(update.Text)
		}
	}
	fmt.Println()

	// Check for errors (including context cancellation)
	if err := llm.Err(); err != nil {
		// Note: Check err against context.DeadlineExceeded, context.Canceled, etc.
		log.Printf("Chat finished with error: %v", err)
	}

	/*
		Example Interaction:

		User: Tell me a short story.
		Assistant: Once upon a time, in a land filled with rolling green hills, lived a curious rabbit named Pip. Pip loved exploring... (output may vary)
	*/
}
Output:
func (*LLM) Err ¶

func (l *LLM) Err() error
Err returns the last error encountered during LLM operation. This is useful for checking errors after a Chat loop completes. Returns nil if no error occurred.
func (*LLM) WithDebug ¶

func (l *LLM) WithDebug() *LLM
WithDebug enables debug mode. When debug mode is enabled, the LLM will write detailed information about each interaction to a debug.yaml file, including the message history, tool calls, and other relevant data. This is useful for troubleshooting and understanding the LLM's behavior.
Example ¶
This example demonstrates enabling debug mode with Google Gemini.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/blixt/go-llms/google"
	"github.com/blixt/go-llms/llms"
)

func main() {
	// Note: Requires GOOGLE_API_KEY environment variable to be set.
	apiKey := os.Getenv("GOOGLE_API_KEY")
	if apiKey == "" {
		fmt.Println("GOOGLE_API_KEY environment variable not set.")
		// Skip example if key is not set.
		return
	}

	// Create LLM with Google Gemini and enable debug mode
	llm := llms.New(
		google.New("gemini-2.5-flash-preview-04-17").WithGeminiAPI(apiKey),
	).WithDebug() // Enable debug logging to debug.yaml

	// Subsequent calls to llm.Chat() will write detailed logs.
	fmt.Println("Debug mode enabled. Interactions will be logged to debug.yaml.")

	// Perform a simple chat to generate some debug output
	for update := range llm.Chat("Hello!") {
		switch update := update.(type) {
		case llms.TextUpdate:
			fmt.Print(update.Text)
		}
	}
	fmt.Println()

	if err := llm.Err(); err != nil {
		log.Printf("Chat failed: %v", err)
	}

	/*
		Example Interaction:

		Debug mode enabled. Interactions will be logged to debug.yaml.
		Hello there! How can I help you today?
	*/
}
Output:
func (*LLM) WithMaxTurns ¶

func (l *LLM) WithMaxTurns(maxTurns int) *LLM
WithMaxTurns sets the maximum number of turns the LLM will make. This is useful to prevent infinite loops or excessive usage. A value of 0 means no limit. A value of 1 means the LLM will only ever do one API call, and so on.
Example ¶
This example demonstrates setting a maximum number of LLM turns with Anthropic.
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/blixt/go-llms/anthropic"
	"github.com/blixt/go-llms/llms"
)

func main() {
	// Note: Requires ANTHROPIC_API_KEY environment variable to be set.
	apiKey := os.Getenv("ANTHROPIC_API_KEY")
	if apiKey == "" {
		fmt.Println("ANTHROPIC_API_KEY environment variable not set.")
		// Skip example if key is not set.
		return
	}

	// Create LLM with Anthropic Claude Sonnet and limit to 1 turn
	llm := llms.New(
		anthropic.New(apiKey, "claude-3-7-sonnet-latest"),
	).WithMaxTurns(1)

	// Perform a simple chat
	for update := range llm.Chat("Why is the sky blue?") {
		switch update := update.(type) {
		case llms.TextUpdate:
			fmt.Print(update.Text)
		}
	}
	fmt.Println()

	// If the conversation required more turns (e.g., complex tool use),
	// llm.Err() might return llms.ErrMaxTurnsReached.
	if err := llm.Err(); err != nil {
		if err == llms.ErrMaxTurnsReached {
			fmt.Println("Max turns reached as expected.")
		} else {
			log.Printf("Chat failed with unexpected error: %v", err)
		}
	} else {
		fmt.Println("Chat completed within max turns.")
	}

	/*
		Example Interaction (output may vary):

		The sky appears blue due to a phenomenon called Rayleigh scattering...
		Chat completed within max turns.
	*/
}
Output:
type Message ¶
type Message struct {
	// Role can be "system", "user", "assistant", or "tool".
	Role string `json:"role"`
	// Name can be used to identify different identities within the same role.
	Name string `json:"name,omitempty"`
	// Content is the message content.
	Content content.Content `json:"content"`
	// ToolCalls represents the list of tools that an assistant message is invoking.
	// This field is used when the message is from an assistant (Role="assistant") that is calling tools.
	// Each ToolCall contains an ID, name of the tool being called, and arguments to pass to the tool.
	ToolCalls []ToolCall `json:"tool_calls,omitempty"`
	// ToolCallID identifies which tool call a message is responding to.
	// This field is used when the message is a tool response (Role="tool") that is responding to a previous tool call.
	// It should match the ID of the original ToolCall that this message is responding to.
	ToolCallID string `json:"tool_call_id,omitempty"`
}
func (*Message) UnmarshalJSON ¶
UnmarshalJSON implements the json.Unmarshaler interface for Message. It handles the case where the 'content' field might be a simple string instead of the expected array of content items.
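A minimal sketch of the string form being accepted (the array form follows the content package's serialization and is not spelled out here; encoding/json and log imports assumed):

var msg llms.Message
data := []byte(`{"role": "user", "content": "Hello!"}`)
if err := json.Unmarshal(data, &msg); err != nil {
	log.Fatal(err)
}
// msg.Content now holds the text as regular content items.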
type Provider ¶
type Provider interface {
	Company() string
	Model() string

	// Generate takes a system prompt, message history, and optional toolbox,
	// returning a stream for the LLM's response. The provided context should
	// be respected for cancellation.
	Generate(ctx context.Context, systemPrompt content.Content, messages []Message, toolbox *tools.Toolbox) ProviderStream
}
type ProviderStream ¶
type StreamStatus ¶
type StreamStatus int
const (
	// StreamStatusNone means either the stream hasn't started, or it has finished.
	StreamStatusNone StreamStatus = iota
	// StreamStatusText means the stream produced more text content.
	StreamStatusText
	// StreamStatusToolCallBegin means the stream started a tool call. The name of the function is available, but not the arguments.
	StreamStatusToolCallBegin
	// StreamStatusToolCallData means the stream is streaming the arguments for a tool call.
	StreamStatusToolCallData
	// StreamStatusToolCallReady means the stream finished streaming the arguments for a tool call.
	StreamStatusToolCallReady
)
type TextUpdate ¶
type TextUpdate struct {
Text string
}
func (TextUpdate) Type ¶
func (u TextUpdate) Type() UpdateType
type ToolCall ¶
type ToolCall struct {
	ID        string          `json:"id"`
	Name      string          `json:"name"`
	Arguments json.RawMessage `json:"arguments"`
}
type ToolDoneUpdate ¶
func (ToolDoneUpdate) Type ¶
func (u ToolDoneUpdate) Type() UpdateType
type ToolStartUpdate ¶
func (ToolStartUpdate) Type ¶
func (u ToolStartUpdate) Type() UpdateType
type ToolStatusUpdate ¶
func (ToolStatusUpdate) Type ¶
func (u ToolStatusUpdate) Type() UpdateType
type Update ¶
type Update interface {
Type() UpdateType
}
type UpdateType ¶
type UpdateType string
const (
	UpdateTypeToolStart  UpdateType = "tool_start"
	UpdateTypeToolStatus UpdateType = "tool_status"
	UpdateTypeToolDone   UpdateType = "tool_done"
	UpdateTypeText       UpdateType = "text"
)
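The examples above route updates with a Go type switch; the same can be done by switching on Type() with these constants. A minimal sketch, assuming llm was built as in the examples:

for update := range llm.Chat("Hello!") {
	switch update.Type() {
	case llms.UpdateTypeText:
		fmt.Print(update.(llms.TextUpdate).Text)
	case llms.UpdateTypeToolStart:
		fmt.Println("(tool started)")
	case llms.UpdateTypeToolDone:
		fmt.Println("(tool finished)")
	}
}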