dbkit

package module
v0.8.0
Published: May 23, 2025 License: MIT Imports: 15 Imported by: 0

README

Toolkit for working with SQL databases in Go

go-dbkit is a Go library designed to simplify and streamline working with SQL databases. It provides a solid foundation for connection management, instrumentation, error retry mechanisms, and transaction handling. Additionally, go-dbkit offers a suite of specialized sub-packages that address common database challenges, such as distributed locking, schema migrations, and query-builder utilities, making it a one-stop solution for applications that need to interact with multiple SQL databases.

Features

  • Transaction Management: Execute functions within transactions that automatically commit on success or roll back on error. The transaction runner abstracts the boilerplate, ensuring cleaner and more reliable code.
  • Retryable Queries: Built‑in support for detecting and automatically retrying transient errors (e.g., deadlocks, lock timeouts) across various databases.
  • Distributed Locking: Implement SQL‑based distributed locks to coordinate exclusive access to shared resources across multiple processes.
  • Database Migrations: Seamlessly manage schema changes with support for both embedded SQL migrations (using Go’s embed package) and programmatic migration definitions.
  • Query Builder Utilities: Enhance your query‐building experience with utilities for popular libraries:
    • dbrutil: Simplifies working with the dbr query builder, adding instrumentation (Prometheus metrics, slow query logging) and transaction support.
    • goquutil: Provides helper routines for the goqu query builder (currently, no dedicated README exists—refer to the source for usage).

Packages Overview

  • Root go‑dbkit package provides configuration management, DSN generation, and the foundational retryable query functionality used across the library.
  • dbrutil offers utilities for the dbr query builder, including:
    • Instrumented connection opening with Prometheus metrics.
    • Automatic slow query logging based on configurable thresholds.
    • A transaction runner that simplifies commit/rollback logic. Read more in dbrutil/README.md.
  • distrlock implements a lightweight, SQL‑based distributed locking mechanism that ensures exclusive execution of critical sections across concurrent services. Read more in distrlock/README.md.
  • migrate: Manage your database schema changes effortlessly with support for both embedded SQL files and programmatic migrations (see the embedding sketch after this list). Read more in migrate/README.md.
  • goquutil provides helper functions for working with the goqu query builder, streamlining common operations. (This package does not have its own README yet, so please refer to the source code for more details.)
  • RDBMS-specific: dedicated sub-packages are provided for various relational databases:
    • mysql includes DSN generation, retryable error handling, and other MySQL-specific utilities.
    • sqlite contains helpers to integrate SQLite seamlessly into your projects.
    • postgres & pgx offer tools and error-handling improvements for PostgreSQL using the lib/pq and pgx drivers, respectively.
    • mssql provides MSSQL-specific error handling, including registration of retryable functions for deadlocks and related transient errors.
    Each of these packages registers its own retryable function in its init() block, ensuring that transient errors (such as deadlocks or cached plan invalidations) are automatically retried.
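
As referenced in the migrate bullet above, here is a minimal sketch of the embedded-migrations approach: SQL files are bundled into the binary with Go's standard embed package and then handed to the migration runner. The migrations directory layout is an illustrative assumption, and the listing below only enumerates the embedded files; see migrate/README.md for the actual migrate API.

package main

import (
	"embed"
	"fmt"
)

// The migrations directory is assumed to contain ordered SQL files,
// e.g. 0001_create_users.sql, 0002_add_index.sql.
//
//go:embed migrations/*.sql
var migrationsFS embed.FS

func main() {
	// Enumerate the embedded SQL files; a real application would hand the
	// embed.FS (or its contents) to the migrate package's runner.
	entries, err := migrationsFS.ReadDir("migrations")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("embedded migration:", e.Name())
	}
}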

Installation

go get -u github.com/acronis/go-dbkit

Usage

Basic Example

Below is a simple example that demonstrates how to register a retryable function for a MySQL database connection and execute a query within a transaction with a custom retry policy on transient errors.

package main

import (
	"context"
	"database/sql"
	"log"
	"os"
	"time"

	"github.com/acronis/go-appkit/retry"

	"github.com/acronis/go-dbkit"

	// Import the `mysql` package for registering the retryable function for MySQL transient errors (like deadlocks).
	_ "github.com/acronis/go-dbkit/mysql"
)

func main() {
	// Configure the database using the dbkit.Config struct.
	// In this example, we're using MySQL. Adjust Dialect and config fields for your target DB.
	cfg := &dbkit.Config{
		Dialect: dbkit.DialectMySQL,
		MySQL: dbkit.MySQLConfig{
			Host:     os.Getenv("MYSQL_HOST"),
			Port:     3306,
			User:     os.Getenv("MYSQL_USER"),
			Password: os.Getenv("MYSQL_PASSWORD"),
			Database: os.Getenv("MYSQL_DATABASE"),
		},
		MaxOpenConns: 16,
		MaxIdleConns: 8,
	}

	// Open the database connection.
	// The 2nd parameter is a boolean that indicates whether to ping the database.
	db, err := dbkit.Open(cfg, true)
	if err != nil {
		log.Fatalf("failed to open database: %v", err)
	}
	defer db.Close()

	// Execute a transaction with a custom retry policy (constant backoff: 3 retries with a 10ms interval between attempts).
	retryPolicy := retry.NewConstantBackoffPolicy(10*time.Millisecond, 3)
	if err = dbkit.DoInTx(context.Background(), db, func(tx *sql.Tx) error {
		// Execute your transactional operations here.
		// Example: _, err := tx.Exec("UPDATE users SET last_login = ? WHERE id = ?", time.Now(), 1)
		return nil
	}, dbkit.WithRetryPolicy(retryPolicy)); err != nil {
		log.Fatal(err)
	}
}

dbrutil Usage Example

The following basic example demonstrates how to use dbrutil to open a database connection with instrumentation and execute queries within a transaction, with automatic slow query logging and Prometheus metrics collection.

package main

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	stdlog "log"
	"net/http"
	"os"
	"time"

	"github.com/acronis/go-appkit/log"
	"github.com/gocraft/dbr/v2"
	
	"github.com/acronis/go-dbkit"
	"github.com/acronis/go-dbkit/dbrutil"
)

func main() {
	logger, loggerClose := log.NewLogger(&log.Config{Output: log.OutputStderr, Level: log.LevelInfo})
	defer loggerClose()

	// Create a Prometheus metrics collector.
	promMetrics := dbkit.NewPrometheusMetrics()
	promMetrics.MustRegister()
	defer promMetrics.Unregister()

	// Open the database connection with instrumentation.
	// Instrumentation includes collecting metrics about SQL queries and logging slow queries.
	eventReceiver := dbrutil.NewCompositeReceiver([]dbr.EventReceiver{
		dbrutil.NewQueryMetricsEventReceiver(promMetrics, queryAnnotationPrefix),
		dbrutil.NewSlowQueryLogEventReceiver(logger, 100*time.Millisecond, queryAnnotationPrefix),
	})
	conn, err := openDB(eventReceiver)
	if err != nil {
		stdlog.Fatal(err)
	}
	defer conn.Close()

	txRunner := dbrutil.NewTxRunner(conn, &sql.TxOptions{Isolation: sql.LevelReadCommitted}, nil)

	// Execute function in a transaction.
	// The transaction will be automatically committed if the function returns nil, otherwise it will be rolled back.
	if dbErr := txRunner.DoInTx(context.Background(), func(tx dbr.SessionRunner) error {
		var result int
		return tx.Select("SLEEP(1)").
			Comment(annotateQuery("long_operation")). // Annotate the query for Prometheus metrics and slow query log.
			LoadOne(&result)
	}); dbErr != nil {
		stdlog.Fatal(dbErr)
	}

	// The following log message will be printed:
	// {"level":"warn","time":"2025-02-14T16:29:55.429257+02:00","msg":"slow SQL query","pid":14030,"annotation":"query:long_operation","duration_ms":1007}

	// Prometheus metrics will be collected:
	// db_query_duration_seconds_bucket{query="query:long_operation",le="2.5"} 1
	// db_query_duration_seconds_sum{query="query:long_operation"} 1.004573875
	// db_query_duration_seconds_count{query="query:long_operation"} 1
}

const queryAnnotationPrefix = "query:"

func annotateQuery(queryName string) string {
	return queryAnnotationPrefix + queryName
}

func openDB(eventReceiver dbr.EventReceiver) (*dbr.Connection, error) {
	cfg := &dbkit.Config{
		Dialect: dbkit.DialectMySQL,
		MySQL: dbkit.MySQLConfig{
			Host:     os.Getenv("MYSQL_HOST"),
			Port:     3306,
			User:     os.Getenv("MYSQL_USER"),
			Password: os.Getenv("MYSQL_PASSWORD"),
			Database: os.Getenv("MYSQL_DATABASE"),
		},
	}

	// Open database with instrumentation based on the provided event receiver (see github.com/gocraft/dbr doc for details).
	// Opening includes configuring the max open/idle connections and their lifetime and pinging the database.
	conn, err := dbrutil.Open(cfg, true, eventReceiver)
	if err != nil {
		return nil, fmt.Errorf("open database: %w", err)
	}
	return conn, nil
}

More examples and detailed usage instructions can be found in the dbrutil package README.

distrlock Usage Example

The following basic example demonstrates how to use distrlock to ensure exclusive execution of a critical section of code.

package main

import (
	"context"
	"database/sql"
	"log"
	"os"
	"time"

	"github.com/acronis/go-dbkit"
	"github.com/acronis/go-dbkit/distrlock"
)

func main() {
	// Setup database connection
	db, err := sql.Open("mysql", os.Getenv("MYSQL_DSN"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()

	// Create "distributed_locks" table for locks.
	createTableSQL, err := distrlock.CreateTableSQL(dbkit.DialectMySQL)
	if err != nil {
		log.Fatal(err)
	}
	_, err = db.ExecContext(ctx, createTableSQL)
	if err != nil {
		log.Fatal(err)
	}

	// Do some work exclusively.
	const lockKey = "test-lock-key-1" // Unique key that will be used to ensure exclusive execution among multiple instances
	err = distrlock.DoExclusively(ctx, db, dbkit.DialectMySQL, lockKey, func(ctx context.Context) error {
		time.Sleep(10 * time.Second) // Simulate work.
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}

More examples and detailed usage instructions can be found in the distrlock package README.

License

Copyright © 2024 Acronis International GmbH.

Licensed under MIT License.

Documentation

Overview

Package dbkit provides helpers for working with different SQL databases (MySQL, PostgreSQL, SQLite, and MSSQL).

Example
// Configure the database using the dbkit.Config struct.
// In this example, we're using MySQL. Adjust Dialect and config fields for your target DB.
cfg := &dbkit.Config{
	Dialect: dbkit.DialectMySQL,
	MySQL: dbkit.MySQLConfig{
		Host:     os.Getenv("MYSQL_HOST"),
		Port:     3306,
		User:     os.Getenv("MYSQL_USER"),
		Password: os.Getenv("MYSQL_PASSWORD"),
		Database: os.Getenv("MYSQL_DATABASE"),
	},
	MaxOpenConns: 16,
	MaxIdleConns: 8,
}

// Open the database connection.
// The 2nd parameter is a boolean that indicates whether to ping the database.
db, err := dbkit.Open(cfg, true)
if err != nil {
	log.Fatalf("failed to open database: %v", err)
}
defer db.Close()

// Execute a transaction with a custom retry policy (constant backoff: 3 retries with a 10ms interval between attempts).
retryPolicy := retry.NewConstantBackoffPolicy(10*time.Millisecond, 3)
if err = dbkit.DoInTx(context.Background(), db, func(tx *sql.Tx) error {
	// Execute your transactional operations here.
	// Example: _, err := tx.Exec("UPDATE users SET last_login = ? WHERE id = ?", time.Now(), 1)
	return nil
}, dbkit.WithRetryPolicy(retryPolicy)); err != nil {
	log.Fatal(err)
}

Index

Examples

Constants

const (
	DefaultMaxIdleConns    = 2
	DefaultMaxOpenConns    = 10
	DefaultConnMaxLifetime = 10 * time.Minute // Official recommendation from the DBA team
)

Default values of connection parameters

const MSSQLDefaultTxLevel = sql.LevelReadCommitted

MSSQLDefaultTxLevel contains the transaction isolation level that will be used by default for MSSQL.

const MySQLDefaultTxLevel = sql.LevelReadCommitted

MySQLDefaultTxLevel contains the transaction isolation level that will be used by default for MySQL.

const PgReadWriteParam = "read-write"

PgReadWriteParam is the read-write session attribute value name.

const PgTargetSessionAttrs = "target_session_attrs"

PgTargetSessionAttrs is the session attrs parameter name.

const PostgresDefaultSSLMode = PostgresSSLModeVerifyCA

PostgresDefaultSSLMode contains the Postgres SSL mode that will be used by default.

const PostgresDefaultTxLevel = sql.LevelReadCommitted

PostgresDefaultTxLevel contains the transaction isolation level that will be used by default for Postgres.

const PrometheusMetricsLabelQuery = "query"

PrometheusMetricsLabelQuery is a label name for SQL query in Prometheus metrics.

Variables

var DefaultQueryDurationBuckets = []float64{0.001, 0.01, 0.1, 0.25, 0.5, 1, 2.5, 5, 10}

DefaultQueryDurationBuckets is the default set of buckets into which observations of SQL query execution durations are counted.

Functions

func DoInTx

func DoInTx(ctx context.Context, dbConn *sql.DB, fn func(tx *sql.Tx) error, options ...DoInTxOption) (err error)

DoInTx begins a new transaction, calls the passed function, and commits or rolls back the transaction depending on whether the function returns an error.

func GetIsRetryable

func GetIsRetryable(d driver.Driver) retry.IsRetryable

GetIsRetryable returns a function that can tell, for the given driver, whether an error is retryable.

func InitOpenedDB

func InitOpenedDB(db *sql.DB, cfg *Config, ping bool) error

InitOpenedDB initializes an *sql.DB instance that was opened earlier.

func MakeMSSQLDSN

func MakeMSSQLDSN(cfg *MSSQLConfig) string

MakeMSSQLDSN makes DSN for opening MSSQL database.

func MakeMySQLDSN

func MakeMySQLDSN(cfg *MySQLConfig) string

MakeMySQLDSN makes DSN for opening MySQL database.
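
For illustration, a DSN can be built directly from a MySQLConfig value (host and credentials below are placeholders):

dsn := dbkit.MakeMySQLDSN(&dbkit.MySQLConfig{
	Host:     "127.0.0.1",
	Port:     3306,
	User:     "app",
	Password: "secret",
	Database: "app_db",
})
// The resulting DSN can be passed to sql.Open together with the MySQL driver name.
_ = dsn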

func MakePostgresDSN

func MakePostgresDSN(cfg *PostgresConfig) string

MakePostgresDSN makes DSN for opening Postgres database.

func MakeSQLiteDSN

func MakeSQLiteDSN(cfg *SQLiteConfig) string

MakeSQLiteDSN makes DSN for opening SQLite database.

func Open added in v0.4.0

func Open(cfg *Config, ping bool) (*sql.DB, error)

Open opens a new database connection using the provided configuration. If ping is true, it will check the connection by sending a ping to the database.

func RegisterIsRetryableFunc

func RegisterIsRetryableFunc(d driver.Driver, retryable retry.IsRetryable)

RegisterIsRetryableFunc registers a callback that determines whether a specific DB error is retryable. Several registered functions will be called one after another in FIFO order until one of them returns true. Note: this function is not concurrency-safe. Typical scenario: register all custom IsRetryable functions in the module's init().
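
A minimal sketch of the typical init() registration. The driver type and the transient-error check are placeholders, and the predicate shape func(err error) bool is an assumption about retry.IsRetryable from github.com/acronis/go-appkit/retry; the bundled mysql, postgres, pgx, and mssql sub-packages perform this registration for their real drivers.

package mydriverutil

import (
	"database/sql/driver"
	"errors"

	"github.com/acronis/go-dbkit"
)

// errTransient is a hypothetical sentinel error used only for this sketch.
var errTransient = errors.New("transient failure")

// fakeDriver is a placeholder implementation of driver.Driver.
type fakeDriver struct{}

func (fakeDriver) Open(name string) (driver.Conn, error) { return nil, errors.New("not implemented") }

func init() {
	// Register a callback that reports whether an error coming from this
	// driver should be retried (e.g. by DoInTx with a retry policy).
	// The callback signature func(err error) bool is assumed here.
	dbkit.RegisterIsRetryableFunc(fakeDriver{}, func(err error) bool {
		return errors.Is(err, errTransient)
	})
}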

func UnregisterAllIsRetryableFuncs added in v0.4.0

func UnregisterAllIsRetryableFuncs(d driver.Driver)

UnregisterAllIsRetryableFuncs removes all previously registered IsRetryable functions for the given driver.

Types

type Config

type Config struct {
	Dialect         Dialect             `mapstructure:"dialect" yaml:"dialect" json:"dialect"`
	MaxOpenConns    int                 `mapstructure:"maxOpenConns" yaml:"maxOpenConns" json:"maxOpenConns"`
	MaxIdleConns    int                 `mapstructure:"maxIdleConns" yaml:"maxIdleConns" json:"maxIdleConns"`
	ConnMaxLifetime config.TimeDuration `mapstructure:"connMaxLifeTime" yaml:"connMaxLifeTime" json:"connMaxLifeTime"`
	MySQL           MySQLConfig         `mapstructure:"mysql" yaml:"mysql" json:"mysql"`
	MSSQL           MSSQLConfig         `mapstructure:"mssql" yaml:"mssql" json:"mssql"`
	SQLite          SQLiteConfig        `mapstructure:"sqlite3" yaml:"sqlite3" json:"sqlite3"`
	Postgres        PostgresConfig      `mapstructure:"postgres" yaml:"postgres" json:"postgres"`
	// contains filtered or unexported fields
}

Config represents a set of configuration parameters for working with SQL databases.
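
Based on the tags above, a YAML configuration consumed through go-appkit's config loading would be shaped roughly as follows (a sketch: values are placeholders, and connMaxLifeTime is assumed to accept Go duration strings):

dialect: mysql
maxOpenConns: 16
maxIdleConns: 8
connMaxLifeTime: 10m
mysql:
  host: mysql.example.com
  port: 3306
  user: app
  password: secret
  database: app_db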

func NewConfig

func NewConfig(supportedDialects []Dialect, options ...ConfigOption) *Config

NewConfig creates a new instance of the Config.

func NewConfigWithKeyPrefix

func NewConfigWithKeyPrefix(keyPrefix string, supportedDialects []Dialect) *Config

NewConfigWithKeyPrefix creates a new instance of the Config with a key prefix. This prefix will be used by config.Loader. Deprecated: use NewConfig with WithKeyPrefix instead.

func NewDefaultConfig added in v0.4.0

func NewDefaultConfig(supportedDialects []Dialect, options ...ConfigOption) *Config

NewDefaultConfig creates a new instance of the Config with default values.

func (*Config) DriverNameAndDSN

func (c *Config) DriverNameAndDSN() (driverName, dsn string)

DriverNameAndDSN returns driver name and DSN for connecting.
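
A sketch of using this pair together with InitOpenedDB (documented above) instead of dbkit.Open, assuming cfg is the *dbkit.Config from the Overview example and the same imports:

driverName, dsn := cfg.DriverNameAndDSN()
db, err := sql.Open(driverName, dsn)
if err != nil {
	log.Fatal(err)
}
// Initialize the already-opened *sql.DB according to cfg; the final argument requests a ping.
if err := dbkit.InitOpenedDB(db, cfg, true); err != nil {
	log.Fatal(err)
}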

func (*Config) KeyPrefix

func (c *Config) KeyPrefix() string

KeyPrefix returns a key prefix with which all configuration parameters should be presented. Implements config.KeyPrefixProvider interface.

func (*Config) Set

func (c *Config) Set(dp config.DataProvider) error

Set sets configuration values from config.DataProvider.

func (*Config) SetProviderDefaults

func (c *Config) SetProviderDefaults(dp config.DataProvider)

SetProviderDefaults sets default configuration values in config.DataProvider.

func (*Config) SupportedDialects

func (c *Config) SupportedDialects() []Dialect

SupportedDialects returns the list of supported dialects.

func (*Config) TxIsolationLevel

func (c *Config) TxIsolationLevel() sql.IsolationLevel

TxIsolationLevel returns transaction isolation level from parsed config for specified dialect.

type ConfigOption added in v0.4.0

type ConfigOption func(*configOptions)

ConfigOption is a type for functional options for the Config.

func WithKeyPrefix added in v0.4.0

func WithKeyPrefix(keyPrefix string) ConfigOption

WithKeyPrefix returns a ConfigOption that sets a key prefix for parsing configuration parameters. This prefix will be used by config.Loader.
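
A short usage sketch combining NewDefaultConfig and WithKeyPrefix (the "db" prefix and dialect list are illustrative):

// Create a config that accepts the MySQL and Postgres dialects and expects
// its keys under the "db" prefix when loaded via config.Loader.
cfg := dbkit.NewDefaultConfig(
	[]dbkit.Dialect{dbkit.DialectMySQL, dbkit.DialectPostgres},
	dbkit.WithKeyPrefix("db"),
)
_ = cfg.KeyPrefix() // expected to be "db"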

type Dialect

type Dialect string

Dialect defines the possible values for supported SQL dialects.

const (
	DialectSQLite   Dialect = "sqlite3"
	DialectMySQL    Dialect = "mysql"
	DialectPostgres Dialect = "postgres"
	DialectPgx      Dialect = "pgx"
	DialectMSSQL    Dialect = "mssql"
)

SQL dialects.

type DoInTxOption added in v0.4.0

type DoInTxOption func(*doInTxOptions)

DoInTxOption is a functional option for DoInTx.

func WithRetryPolicy added in v0.4.0

func WithRetryPolicy(policy retry.Policy) DoInTxOption

WithRetryPolicy sets retry policy for DoInTx.

func WithTxOptions added in v0.4.0

func WithTxOptions(txOpts *sql.TxOptions) DoInTxOption

WithTxOptions sets transaction options for DoInTx.
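
Combining both DoInTx options might look like the following sketch (ctx, db, and the UPDATE statement are placeholders, with the same imports as in the Overview example):

txOpts := &sql.TxOptions{Isolation: sql.LevelSerializable}
retryPolicy := retry.NewConstantBackoffPolicy(10*time.Millisecond, 3)
err := dbkit.DoInTx(ctx, db, func(tx *sql.Tx) error {
	_, execErr := tx.Exec("UPDATE counters SET value = value + 1 WHERE id = ?", 1)
	return execErr
}, dbkit.WithTxOptions(txOpts), dbkit.WithRetryPolicy(retryPolicy))
if err != nil {
	log.Fatal(err)
}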

type IsolationLevel added in v0.4.0

type IsolationLevel sql.IsolationLevel

func (IsolationLevel) MarshalJSON added in v0.4.0

func (il IsolationLevel) MarshalJSON() ([]byte, error)

MarshalJSON encodes as a human-readable string in JSON. Implements json.Marshaler interface.

func (*IsolationLevel) MarshalText added in v0.4.0

func (il *IsolationLevel) MarshalText() ([]byte, error)

MarshalText encodes as a human-readable string in text. Implements encoding.TextMarshaler interface.

func (IsolationLevel) MarshalYAML added in v0.4.0

func (il IsolationLevel) MarshalYAML() (interface{}, error)

MarshalYAML encodes as a human-readable string in YAML. Implements yaml.Marshaler interface.

func (IsolationLevel) String added in v0.4.0

func (il IsolationLevel) String() string

String returns the human-readable string representation. Implements fmt.Stringer interface.

func (*IsolationLevel) UnmarshalJSON added in v0.4.0

func (il *IsolationLevel) UnmarshalJSON(data []byte) error

UnmarshalJSON allows decoding string representation of isolation level from JSON. Implements json.Unmarshaler interface.

func (*IsolationLevel) UnmarshalText added in v0.4.0

func (il *IsolationLevel) UnmarshalText(text []byte) error

UnmarshalText allows decoding from text. Implements encoding.TextUnmarshaler interface, which is used by mapstructure.TextUnmarshallerHookFunc.

func (*IsolationLevel) UnmarshalYAML added in v0.4.0

func (il *IsolationLevel) UnmarshalYAML(value *yaml.Node) error

UnmarshalYAML allows decoding from YAML. Implements yaml.Unmarshaler interface.

type MSSQLConfig

type MSSQLConfig struct {
	Host                 string            `mapstructure:"host" yaml:"host" json:"host"`
	Port                 int               `mapstructure:"port" yaml:"port" json:"port"`
	User                 string            `mapstructure:"user" yaml:"user" json:"user"`
	Password             string            `mapstructure:"password" yaml:"password" json:"password"`
	Database             string            `mapstructure:"database" yaml:"database" json:"database"`
	TxIsolationLevel     IsolationLevel    `mapstructure:"txLevel" yaml:"txLevel" json:"txLevel"`
	AdditionalParameters map[string]string `mapstructure:"additionalParameters" yaml:"additionalParameters" json:"additionalParameters"`
}

MSSQLConfig represents a set of configuration parameters for working with MSSQL.

type MySQLConfig

type MySQLConfig struct {
	Host             string         `mapstructure:"host" yaml:"host" json:"host"`
	Port             int            `mapstructure:"port" yaml:"port" json:"port"`
	User             string         `mapstructure:"user" yaml:"user" json:"user"`
	Password         string         `mapstructure:"password" yaml:"password" json:"password"`
	Database         string         `mapstructure:"database" yaml:"database" json:"database"`
	TxIsolationLevel IsolationLevel `mapstructure:"txLevel" yaml:"txLevel" json:"txLevel"`
}

MySQLConfig represents a set of configuration parameters for working with MySQL.

type PostgresConfig

type PostgresConfig struct {
	Host                 string            `mapstructure:"host" yaml:"host" json:"host"`
	Port                 int               `mapstructure:"port" yaml:"port" json:"port"`
	User                 string            `mapstructure:"user" yaml:"user" json:"user"`
	Password             string            `mapstructure:"password" yaml:"password" json:"password"`
	Database             string            `mapstructure:"database" yaml:"database" json:"database"`
	TxIsolationLevel     IsolationLevel    `mapstructure:"txLevel" yaml:"txLevel" json:"txLevel"`
	SSLMode              PostgresSSLMode   `mapstructure:"sslMode" yaml:"sslMode" json:"sslMode"`
	SearchPath           string            `mapstructure:"searchPath" yaml:"searchPath" json:"searchPath"`
	AdditionalParameters map[string]string `mapstructure:"additionalParameters" yaml:"additionalParameters" json:"additionalParameters"`
}

PostgresConfig represents a set of configuration parameters for working with Postgres.

type PostgresSSLMode

type PostgresSSLMode string

PostgresSSLMode defines possible values for Postgres sslmode connection parameter.

const (
	PostgresSSLModeDisable    PostgresSSLMode = "disable"
	PostgresSSLModeRequire    PostgresSSLMode = "require"
	PostgresSSLModeVerifyCA   PostgresSSLMode = "verify-ca"
	PostgresSSLModeVerifyFull PostgresSSLMode = "verify-full"
)

Postgres SSL modes.

type PrometheusMetrics added in v0.4.0

type PrometheusMetrics struct {
	QueryDurations *prometheus.HistogramVec
}

PrometheusMetrics represents collector of metrics.

func NewPrometheusMetrics added in v0.4.0

func NewPrometheusMetrics() *PrometheusMetrics

NewPrometheusMetrics creates a new metrics collector.

func NewPrometheusMetricsWithOpts added in v0.4.0

func NewPrometheusMetricsWithOpts(opts PrometheusMetricsOpts) *PrometheusMetrics

NewPrometheusMetricsWithOpts is a more configurable version of creating PrometheusMetrics.

func (*PrometheusMetrics) AllMetrics added in v0.4.0

func (pm *PrometheusMetrics) AllMetrics() []prometheus.Collector

AllMetrics returns a list of metrics of this collector. This can be used to register these metrics in push gateway.

func (*PrometheusMetrics) MustCurryWith added in v0.4.0

func (pm *PrometheusMetrics) MustCurryWith(labels prometheus.Labels) *PrometheusMetrics

MustCurryWith curries the metrics collector with the provided labels.

func (*PrometheusMetrics) MustRegister added in v0.4.0

func (pm *PrometheusMetrics) MustRegister()

MustRegister does registration of metrics collector in Prometheus and panics if any error occurs.

func (*PrometheusMetrics) ObserveQueryDuration added in v0.4.0

func (pm *PrometheusMetrics) ObserveQueryDuration(query string, duration time.Duration)

ObserveQueryDuration observes the duration of executing SQL query.

func (*PrometheusMetrics) Unregister added in v0.4.0

func (pm *PrometheusMetrics) Unregister()

Unregister cancels registration of metrics collector in Prometheus.

type PrometheusMetricsOpts added in v0.5.0

type PrometheusMetricsOpts struct {
	// Namespace is a namespace for metrics. It will be prepended to all metric names.
	Namespace string

	// QueryDurationBuckets is a list of buckets into which observations of executing SQL queries are counted.
	QueryDurationBuckets []float64

	// ConstLabels is a set of labels that will be applied to all metrics.
	ConstLabels prometheus.Labels

	// CurriedLabelNames is a list of label names that will be curried with the provided labels.
	// See PrometheusMetrics.MustCurryWith method for more details.
	// Keep in mind that if this list is not empty,
	// PrometheusMetrics.MustCurryWith method must be called further with the same labels.
	// Otherwise, the collector will panic.
	CurriedLabelNames []string
}

PrometheusMetricsOpts represents options for PrometheusMetrics.
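
A sketch of creating a curried collector (namespace, label names, and values are illustrative; prometheus refers to github.com/prometheus/client_golang/prometheus). With a non-empty CurriedLabelNames, MustCurryWith has to be called with exactly those labels before queries are observed:

pm := dbkit.NewPrometheusMetricsWithOpts(dbkit.PrometheusMetricsOpts{
	Namespace:            "myapp",
	QueryDurationBuckets: dbkit.DefaultQueryDurationBuckets,
	ConstLabels:          prometheus.Labels{"service": "billing"},
	CurriedLabelNames:    []string{"tenant"},
})
pm.MustRegister()
defer pm.Unregister()

// Curry the collector with a concrete value for the "tenant" label, then observe.
curried := pm.MustCurryWith(prometheus.Labels{"tenant": "tenant-1"})
curried.ObserveQueryDuration("query:long_operation", 150*time.Millisecond)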

type SQLiteConfig

type SQLiteConfig struct {
	Path string `mapstructure:"path" yaml:"path" json:"path"`
}

SQLiteConfig represents a set of configuration parameters for working with SQLite.

Directories

Path	Synopsis
dbrutil	Package dbrutil provides utilities and helpers for dbr query builder.
dbrutil/dbrtest	Package dbrtest provides objects and helpers for writing tests for code that uses the dbr and dbrutil packages.
distrlock	Package distrlock contains a DLM (distributed lock manager) implementation (currently DLMs based on MySQL and PostgreSQL are supported).
goquutil	Package goquutil provides auxiliary routines for working with goqu query builder.
internal/testing	Package testing contains internal testing utilities we apply in go-dbkit.
migrate	Package migrate provides functionality for applying database migrations.
mssql	Package mssql provides helpers for working with the MSSQL database.
mysql	Package mysql provides helpers for working with the MySQL database using the github.com/go-sql-driver/mysql driver.
pgx	Package pgx provides helpers for working with the Postgres database using the github.com/jackc/pgx driver.
postgres	Package postgres provides helpers for working with the Postgres database using the github.com/lib/pq driver.
sqlite	Package sqlite provides helpers for working with the SQLite database.
