Documentation
Overview ¶
Package openai provides an interface to OpenAI's language models.
Token Limits ¶
To limit output tokens with OpenAI models, prefer openai.WithMaxCompletionTokens(). The OpenAI API now uses max_completion_tokens as the field for limiting output tokens.
// Recommended for clarity:
llm.GenerateContent(ctx, messages,
	openai.WithMaxCompletionTokens(100),
)

// Also works (backward compatible):
llm.GenerateContent(ctx, messages,
	llms.WithMaxTokens(100),
)
Both options set the same underlying field. By default, the implementation sends max_completion_tokens (modern field). For older OpenAI-compatible servers that only support max_tokens, use WithLegacyMaxTokensField():
llm.GenerateContent(ctx, messages,
	llms.WithMaxTokens(100),
	openai.WithLegacyMaxTokensField(), // Forces use of max_tokens field
)
Index ¶
- Constants
- Variables
- func ExtractToolParts(msg *ChatMessage) ([]llms.ContentPart, []llms.ToolCall)
- func MapError(err error) error
- func WithLegacyMaxTokensField() llms.CallOption
- func WithMaxCompletionTokens(maxTokens int) llms.CallOption
- type APIType
- type ChatMessage
- type LLM
- func (o *LLM) Call(ctx context.Context, prompt string, options ...llms.CallOption) (string, error)
- func (o *LLM) CreateEmbedding(ctx context.Context, inputTexts []string) ([][]float32, error)
- func (o *LLM) GenerateContent(ctx context.Context, messages []llms.MessageContent, ...) (*llms.ContentResponse, error)
- func (o *LLM) SupportsReasoning() bool
- type ModelCapability
- type Option
- func WithAPIType(apiType APIType) Option
- func WithAPIVersion(apiVersion string) Option
- func WithBaseURL(baseURL string) Option
- func WithCallback(callbackHandler callbacks.Handler) Option
- func WithEmbeddingDimensions(dimensions int) Option
- func WithEmbeddingModel(embeddingModel string) Option
- func WithHTTPClient(client openaiclient.Doer) Option
- func WithModel(model string) Option
- func WithOrganization(organization string) Option
- func WithResponseFormat(responseFormat *ResponseFormat) Option
- func WithToken(token string) Option
- type ResponseFormat
- type ResponseFormatJSONSchema
- type ResponseFormatJSONSchemaProperty
Constants ¶
const (
	RoleSystem    = "system"
	RoleAssistant = "assistant"
	RoleUser      = "user"
	RoleFunction  = "function"
	RoleTool      = "tool"
)
const (
	APITypeOpenAI  APIType = APIType(openaiclient.APITypeOpenAI)
	APITypeAzure           = APIType(openaiclient.APITypeAzure)
	APITypeAzureAD         = APIType(openaiclient.APITypeAzureAD)
)
const (
DefaultAPIVersion = "2023-05-15"
)
Variables ¶
var (
	ErrEmptyResponse              = errors.New("no response")
	ErrMissingToken               = errors.New("missing the OpenAI API key, set it in the OPENAI_API_KEY environment variable") //nolint:lll
	ErrMissingAzureModel          = errors.New("model needs to be provided when using Azure API")
	ErrMissingAzureEmbeddingModel = errors.New("embeddings model needs to be provided when using Azure API")
	ErrUnexpectedResponseLength   = errors.New("unexpected length of response")
)
var ResponseFormatJSON = &ResponseFormat{Type: "json_object"} //nolint:gochecknoglobals
ResponseFormatJSON is the JSON response format.
Functions ¶
func ExtractToolParts ¶ added in v0.1.8
func ExtractToolParts(msg *ChatMessage) ([]llms.ContentPart, []llms.ToolCall)
ExtractToolParts extracts the tool parts from a message.
func WithLegacyMaxTokensField ¶ added in v0.1.14
func WithLegacyMaxTokensField() llms.CallOption
WithLegacyMaxTokensField forces the use of the max_tokens field instead of max_completion_tokens. This is useful when connecting to older OpenAI-compatible inference servers that only support the max_tokens field and don't recognize max_completion_tokens.
Usage:
llm.GenerateContent(ctx, messages,
	llms.WithMaxTokens(100),
	openai.WithLegacyMaxTokensField(), // Forces use of max_tokens field
)
func WithMaxCompletionTokens ¶ added in v0.1.14
func WithMaxCompletionTokens(maxTokens int) llms.CallOption
WithMaxCompletionTokens sets the max_completion_tokens field for token generation. This is the recommended way to limit tokens with OpenAI models.
Usage:
llm.GenerateContent(ctx, messages,
	openai.WithMaxCompletionTokens(100),
)
Note: While llms.WithMaxTokens() still works for backward compatibility, WithMaxCompletionTokens is preferred for clarity when using OpenAI.
Types ¶
type APIType ¶
type APIType openaiclient.APIType
type ChatMessage ¶
type ChatMessage = openaiclient.ChatMessage
type LLM ¶
func (*LLM) CreateEmbedding ¶
CreateEmbedding creates embeddings for the given input texts.
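As an illustrative sketch (the import paths and the embedding model name are assumptions, not part of this package's documented surface):

```go
package main

import (
	"context"
	"fmt"
	"log"

	// Import path is an assumption; adjust to your module layout.
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	// Reads the API key from OPENAI_API_KEY; the embedding model
	// name here is an example, not a package default.
	llm, err := openai.New(openai.WithEmbeddingModel("text-embedding-3-small"))
	if err != nil {
		log.Fatal(err)
	}

	embs, err := llm.CreateEmbedding(context.Background(), []string{"hello", "world"})
	if err != nil {
		log.Fatal(err)
	}
	// One vector per input text.
	fmt.Println(len(embs), len(embs[0]))
}
```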
func (*LLM) GenerateContent ¶ added in v0.1.4
func (o *LLM) GenerateContent(ctx context.Context, messages []llms.MessageContent, options ...llms.CallOption) (*llms.ContentResponse, error)
GenerateContent implements the Model interface.
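As a full, hedged sketch of a call (import paths and the model name are assumptions; adjust them to your module and account):

```go
package main

import (
	"context"
	"fmt"
	"log"

	// Import paths are assumptions; adjust to your module layout.
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	// New reads the API key from OPENAI_API_KEY by default.
	llm, err := openai.New(openai.WithModel("gpt-4o"))
	if err != nil {
		log.Fatal(err)
	}

	messages := []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman, "Reply with one word."),
	}

	// Limit output tokens via the modern max_completion_tokens field.
	resp, err := llm.GenerateContent(context.Background(), messages,
		openai.WithMaxCompletionTokens(50),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Choices[0].Content)
}
```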
func (*LLM) SupportsReasoning ¶ added in v0.1.14
SupportsReasoning implements the ReasoningModel interface. Returns true if the current model supports reasoning/thinking tokens.
type ModelCapability ¶ added in v0.1.14
type ModelCapability struct {
	Pattern          string // Regex pattern to match model names
	SupportsSystem   bool   // If true, supports system messages
	SupportsThinking bool   // If true, supports reasoning/thinking
	SupportsCaching  bool   // If true, supports prompt caching
}
ModelCapability defines what a model supports.
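To illustrate how a Pattern regex could be matched against model names, here is a minimal, self-contained sketch; the capability table and lookup helper below are hypothetical, not the package's internal implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// ModelCapability mirrors the struct above; the entries and
// lookupCapability helper are illustrative assumptions.
type ModelCapability struct {
	Pattern          string // Regex pattern to match model names
	SupportsSystem   bool   // If true, supports system messages
	SupportsThinking bool   // If true, supports reasoning/thinking
	SupportsCaching  bool   // If true, supports prompt caching
}

// lookupCapability returns the first capability whose Pattern
// matches the model name.
func lookupCapability(caps []ModelCapability, model string) (ModelCapability, bool) {
	for _, c := range caps {
		if regexp.MustCompile(c.Pattern).MatchString(model) {
			return c, true
		}
	}
	return ModelCapability{}, false
}

func main() {
	caps := []ModelCapability{
		{Pattern: `^o[13]`, SupportsThinking: true},
		{Pattern: `^gpt-4`, SupportsSystem: true},
	}
	if c, ok := lookupCapability(caps, "o1-mini"); ok {
		fmt.Println(c.SupportsThinking) // prints true
	}
}
```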
type Option ¶
type Option func(*options)
Option is a functional option for the OpenAI client.
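The functional-option pattern behind this type can be sketched as follows; the options struct, its fields, and the default value here are illustrative assumptions, since the package's real options struct is unexported:

```go
package main

import "fmt"

// options stands in for the package's unexported config struct;
// the fields and default below are assumptions for illustration.
type options struct {
	model   string
	baseURL string
}

// Option mutates the config, exactly as in the package.
type Option func(*options)

func WithModel(model string) Option {
	return func(o *options) { o.model = model }
}

func WithBaseURL(u string) Option {
	return func(o *options) { o.baseURL = u }
}

// newOptions applies each option over the defaults in order.
func newOptions(opts ...Option) *options {
	o := &options{baseURL: "https://api.openai.com/v1"} // default
	for _, opt := range opts {
		opt(o)
	}
	return o
}

func main() {
	o := newOptions(WithModel("gpt-4o"))
	fmt.Println(o.model, o.baseURL) // prints gpt-4o https://api.openai.com/v1
}
```

Because each Option is just a function, callers can compose any subset of settings without the package needing a config struct in its public API.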
func WithAPIType ¶
WithAPIType passes the API type to the client. If not set, the default value is APITypeOpenAI.
func WithAPIVersion ¶
WithAPIVersion passes the API version to the client. If not set, the default value is DefaultAPIVersion.
func WithBaseURL ¶
WithBaseURL passes the OpenAI base URL to the client. If not set, the base URL is read from the OPENAI_BASE_URL environment variable. If that is also unset, the default value https://api.openai.com/v1 is used.
func WithCallback ¶ added in v0.1.1
WithCallback allows setting a custom Callback Handler.
func WithEmbeddingDimensions ¶ added in v0.1.14
WithEmbeddingDimensions passes the OpenAI embedding dimensions to the client. Requires a compatible model, text-embedding-3 or later. For more info, see the OpenAI docs: https://platform.openai.com/docs/api-reference/embeddings/create#embeddings-create-dimensions
func WithEmbeddingModel ¶
WithEmbeddingModel passes the OpenAI model to the client. Required when ApiType is Azure.
func WithHTTPClient ¶
func WithHTTPClient(client openaiclient.Doer) Option
WithHTTPClient allows setting a custom HTTP client. If not set, the default value is http.DefaultClient.
func WithModel ¶
WithModel passes the OpenAI model to the client. If not set, the model is read from the OPENAI_MODEL environment variable. Required when ApiType is Azure.
func WithOrganization ¶
WithOrganization passes the OpenAI organization to the client. If not set, the organization is read from the OPENAI_ORGANIZATION environment variable.
func WithResponseFormat ¶ added in v0.1.6
func WithResponseFormat(responseFormat *ResponseFormat) Option
WithResponseFormat allows setting a custom response format.
type ResponseFormat ¶ added in v0.1.6
type ResponseFormat = openaiclient.ResponseFormat
ResponseFormat is the response format for the OpenAI client.
type ResponseFormatJSONSchema ¶ added in v0.1.13
type ResponseFormatJSONSchema = openaiclient.ResponseFormatJSONSchema
ResponseFormatJSONSchema is the JSON Schema response format in structured output.
type ResponseFormatJSONSchemaProperty ¶ added in v0.1.13
type ResponseFormatJSONSchemaProperty = openaiclient.ResponseFormatJSONSchemaProperty
ResponseFormatJSONSchemaProperty is the JSON Schema property in structured output.
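Together, these aliases can be composed to request structured output. The following is a hedged sketch; the import path and the field names on the aliased openaiclient types are assumptions to verify against your package version:

```go
package main

import (
	"log"

	// Import path is an assumption; adjust to your module layout.
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	// Field names below are assumptions based on the aliased
	// openaiclient types; verify them against your version.
	format := &openai.ResponseFormat{
		Type: "json_schema",
		JSONSchema: &openai.ResponseFormatJSONSchema{
			Name:   "person",
			Strict: true,
			Schema: &openai.ResponseFormatJSONSchemaProperty{
				Type: "object",
				Properties: map[string]*openai.ResponseFormatJSONSchemaProperty{
					"name": {Type: "string"},
					"age":  {Type: "integer"},
				},
				Required: []string{"name", "age"},
			},
		},
	}

	llm, err := openai.New(openai.WithResponseFormat(format))
	if err != nil {
		log.Fatal(err)
	}
	_ = llm
}
```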