Documentation ¶
Overview ¶
Package google provides a client implementation for interacting with Google's GenAI models. It implements the LLM Thread interface for managing conversations, tool execution, and message processing, supporting both Vertex AI and Gemini API backends.
Index ¶
- Variables
- func DeserializeMessages(rawMessages []byte) ([]*genai.Content, error)
- func ExtractMessages(rawMessages []byte, toolResults map[string]tooltypes.StructuredToolResult) ([]llmtypes.Message, error)
- type ModelPricing
- type Response
- type Thread
- func (t *Thread) AddUserMessage(ctx context.Context, message string, imagePaths ...string)
- func (t *Thread) CompactContext(ctx context.Context) error
- func (t *Thread) GetMessages() ([]llmtypes.Message, error)
- func (t *Thread) LoadConversationByID(ctx context.Context, conversationID string) error
- func (t *Thread) Provider() string
- func (t *Thread) SaveConversation(ctx context.Context, summarise bool) error
- func (t *Thread) SendMessage(ctx context.Context, message string, handler llmtypes.MessageHandler, ...) (finalOutput string, err error)
- func (t *Thread) ShortSummary(ctx context.Context) string
- func (t *Thread) SwapContext(_ context.Context, summary string) error
- type ToolCall
Constants ¶
This section is empty.
Variables ¶
var ModelPricingMap = map[string]ModelPricing{
	"gemini-2.5-pro": {
		Input:             0.00125,
		InputHigh:         0.0025,
		Output:            0.01,
		OutputHigh:        0.015,
		ContextWindow:     2_097_152,
		HasThinking:       true,
		TieredPricing:     true,
		HighTierThreshold: 200_000,
	},
	"gemini-2.5-flash": {
		Input:         0.0003,
		AudioInput:    0.001,
		Output:        0.0025,
		ContextWindow: 1_048_576,
		HasThinking:   false,
		TieredPricing: false,
	},
	"gemini-2.5-flash-lite": {
		Input:         0.0001,
		AudioInput:    0.0003,
		Output:        0.0004,
		ContextWindow: 1_048_576,
		HasThinking:   false,
		TieredPricing: false,
	},
	"gemini-pro": {
		Input:             0.00125,
		InputHigh:         0.0025,
		Output:            0.01,
		OutputHigh:        0.015,
		ContextWindow:     2_097_152,
		HasThinking:       true,
		TieredPricing:     true,
		HighTierThreshold: 200_000,
	},
	"gemini-flash": {
		Input:         0.0003,
		AudioInput:    0.001,
		Output:        0.0025,
		ContextWindow: 1_048_576,
		HasThinking:   false,
		TieredPricing: false,
	},
}
ModelPricingMap contains pricing information for Google GenAI models, based on current Vertex AI pricing for Gemini 2.5 models.
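As a minimal sketch of how these fields might combine into a cost estimate: the code below copies the ModelPricing struct from this page and assumes (neither assumption is stated here) that rates are per 1,000 tokens and that, for tiered models, the high-tier rates apply to the entire request once the input exceeds HighTierThreshold tokens.

```go
package main

import "fmt"

// ModelPricing mirrors the package's ModelPricing struct (copied from this page)
// so the sketch is self-contained.
type ModelPricing struct {
	Input             float64
	InputHigh         float64
	Output            float64
	OutputHigh        float64
	AudioInput        float64
	ContextWindow     int
	HasThinking       bool
	TieredPricing     bool
	HighTierThreshold int
}

// cost estimates the dollar cost of one request. Illustrative assumptions:
// rates are per 1,000 tokens, and for tiered models the high-tier rates
// apply to the whole request once input exceeds HighTierThreshold.
func cost(p ModelPricing, inputTokens, outputTokens int) float64 {
	in, out := p.Input, p.Output
	if p.TieredPricing && inputTokens > p.HighTierThreshold {
		in, out = p.InputHigh, p.OutputHigh
	}
	return float64(inputTokens)/1000*in + float64(outputTokens)/1000*out
}

func main() {
	// Rates taken from the "gemini-2.5-pro" entry in ModelPricingMap.
	pro := ModelPricing{
		Input: 0.00125, InputHigh: 0.0025,
		Output: 0.01, OutputHigh: 0.015,
		ContextWindow: 2_097_152, HasThinking: true,
		TieredPricing: true, HighTierThreshold: 200_000,
	}
	fmt.Printf("%.4f\n", cost(pro, 10_000, 2_000))  // below threshold: 0.0325
	fmt.Printf("%.4f\n", cost(pro, 300_000, 2_000)) // above threshold: 0.7800
}
```

The cutover-at-threshold behavior is a guess; a per-tier split (low rate up to the threshold, high rate beyond) is an equally plausible reading of InputHigh/OutputHigh.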
Functions ¶
func DeserializeMessages ¶
func DeserializeMessages(rawMessages []byte) ([]*genai.Content, error)
DeserializeMessages deserializes raw message bytes into Google GenAI Content objects
func ExtractMessages ¶
func ExtractMessages(rawMessages []byte, toolResults map[string]tooltypes.StructuredToolResult) ([]llmtypes.Message, error)
ExtractMessages converts raw Google GenAI message bytes to standard message format
Types ¶
type ModelPricing ¶
type ModelPricing struct {
Input float64
InputHigh float64
Output float64
OutputHigh float64
AudioInput float64
ContextWindow int
HasThinking bool
TieredPricing bool
HighTierThreshold int
}
ModelPricing holds the per-token pricing for different operations
type Response ¶
type Response struct {
Text string
ThinkingText string
ToolCalls []*ToolCall
Usage *genai.UsageMetadata
}
Response represents a response from Google's GenAI API
type Thread ¶
type Thread struct {
*base.Thread // Embedded base thread with shared fields and methods
// contains filtered or unexported fields
}
Thread implements the Thread interface using Google's GenAI API. It embeds base.Thread for shared functionality across all LLM providers.
func NewGoogleThread ¶
NewGoogleThread creates a new thread with Google's GenAI API
func (*Thread) AddUserMessage ¶
func (t *Thread) AddUserMessage(ctx context.Context, message string, imagePaths ...string)
AddUserMessage adds a user message with optional images to the thread
func (*Thread) CompactContext ¶
func (t *Thread) CompactContext(ctx context.Context) error
CompactContext performs comprehensive context compacting by creating a detailed summary
func (*Thread) GetMessages ¶
func (t *Thread) GetMessages() ([]llmtypes.Message, error)
GetMessages returns the current messages in the thread
func (*Thread) LoadConversationByID ¶
func (t *Thread) LoadConversationByID(ctx context.Context, conversationID string) error
LoadConversationByID loads a conversation from the conversation store by ID. This is different from the loadConversation callback, which loads the current conversation.
func (*Thread) SaveConversation ¶
func (t *Thread) SaveConversation(ctx context.Context, summarise bool) error
SaveConversation persists the current conversation state to the conversation store
func (*Thread) SendMessage ¶
func (t *Thread) SendMessage(
	ctx context.Context,
	message string,
	handler llmtypes.MessageHandler,
	opt llmtypes.MessageOpt,
) (finalOutput string, err error)
SendMessage sends a message to the LLM and processes the response
func (*Thread) ShortSummary ¶
func (t *Thread) ShortSummary(ctx context.Context) string
ShortSummary generates a brief summary of the conversation