Documentation
Overview
Package internal provides the core SQL formatting functionality for go-zetasqlite. This file (formatter_expressions.go) implements expression conversion from ZetaSQL AST nodes to SQLite-compatible SQL fragments.
The main functionality includes:

- Expression dispatch and type-specific conversion
- FunctionCall node handling, with special cases for control flow
- Type casting and column reference resolution
- Subquery expression conversion
- Parameter and argument reference handling
The code uses the visitor pattern to traverse ZetaSQL AST nodes and generate equivalent SQLite SQL syntax, handling semantic differences between the two systems.
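For illustration only, the following self-contained sketch shows the dispatch idea: a type switch over expression nodes that emits SQLite fragments, with IF rewritten to CASE WHEN as a control-flow special case. The node types and the zetasqlite_equal function name are hypothetical stand-ins, not the package's actual AST types or registered functions.

    package main

    import (
        "fmt"
        "strings"
    )

    type exprNode interface{ isExpr() }

    type litNode struct{ sql string }              // already-quoted literal
    type colRefNode struct{ table, column string } // column reference
    type funcCallNode struct {                     // function call with arguments
        name string
        args []exprNode
    }

    func (litNode) isExpr()      {}
    func (colRefNode) isExpr()   {}
    func (funcCallNode) isExpr() {}

    // formatExpr dispatches on the concrete node type and emits a SQLite
    // fragment, handling IF as a control-flow special case.
    func formatExpr(n exprNode) (string, error) {
        switch e := n.(type) {
        case litNode:
            return e.sql, nil
        case colRefNode:
            return fmt.Sprintf("`%s`.`%s`", e.table, e.column), nil
        case funcCallNode:
            args := make([]string, 0, len(e.args))
            for _, a := range e.args {
                s, err := formatExpr(a)
                if err != nil {
                    return "", err
                }
                args = append(args, s)
            }
            if e.name == "if" && len(args) == 3 {
                return fmt.Sprintf("CASE WHEN %s THEN %s ELSE %s END", args[0], args[1], args[2]), nil
            }
            return fmt.Sprintf("%s(%s)", e.name, strings.Join(args, ", ")), nil
        default:
            return "", fmt.Errorf("unsupported expression node %T", n)
        }
    }

    func main() {
        // IF(t.status = 'active', 1, 0) expressed as a small node tree.
        expr := funcCallNode{name: "if", args: []exprNode{
            funcCallNode{name: "zetasqlite_equal", args: []exprNode{
                colRefNode{table: "t", column: "status"},
                litNode{sql: "'active'"},
            }},
            litNode{sql: "1"},
            litNode{sql: "0"},
        }}
        sql, err := formatExpr(expr)
        if err != nil {
            panic(err)
        }
        fmt.Println(sql)
    }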
Package internal provides scan operation handling for the go-zetasqlite SQL transpiler. This file (formatter_scans.go) implements the bottom-up traversal of ZetaSQL scan nodes, converting them into SQLite-compatible SQL fragments.
SCAN TRAVERSAL ARCHITECTURE:
ZetaSQL uses a tree of scan nodes where each scan processes its input scan(s) and produces output columns. The traversal follows a bottom-up approach:
1. BOTTOM-UP PROCESSING: Child scans are visited first, then parent scans process their results
2. SCOPE MANAGEMENT: Each scan creates a new scope that defines which columns are available
3. COLUMN EXPOSURE: Scans expose their output columns to parent scans through the fragment context
4. BOUNDARY HANDLING: Column availability is managed at scan scope boundaries
This approach mirrors go-zetasql's design, ensuring that:

- Column references are resolved correctly across scan boundaries
- Proper scoping prevents column name conflicts
- Complex nested queries maintain correct column visibility
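The following self-contained sketch illustrates the bottom-up traversal under simplified assumptions: each scan formats its child first, then wraps the child's SQL and decides which columns remain visible at the scope boundary. The scan structs and scope aliases here are illustrative placeholders, not the go-zetasql node types or fragment context used by the real formatter.

    package main

    import (
        "fmt"
        "strings"
    )

    // scope records which columns the current scan exposes to its parent.
    type scope struct {
        alias   string
        columns []string
    }

    type scanNode interface{ isScan() }

    type tableScan struct {
        table   string
        columns []string
    }
    type filterScan struct {
        input scanNode
        where string
    }
    type projectScan struct {
        input scanNode
        exprs []string
    }

    func (tableScan) isScan()   {}
    func (filterScan) isScan()  {}
    func (projectScan) isScan() {}

    // formatScan visits children first, then wraps their SQL, so column
    // availability is decided at each scope boundary.
    func formatScan(n scanNode, depth int) (string, scope, error) {
        alias := fmt.Sprintf("s%d", depth)
        switch s := n.(type) {
        case tableScan:
            return fmt.Sprintf("SELECT %s FROM `%s`", strings.Join(s.columns, ", "), s.table),
                scope{alias: alias, columns: s.columns}, nil
        case filterScan:
            inner, in, err := formatScan(s.input, depth+1)
            if err != nil {
                return "", scope{}, err
            }
            // FilterScan passes the input columns through unchanged.
            return fmt.Sprintf("SELECT * FROM (%s) AS %s WHERE %s", inner, in.alias, s.where),
                scope{alias: alias, columns: in.columns}, nil
        case projectScan:
            inner, in, err := formatScan(s.input, depth+1)
            if err != nil {
                return "", scope{}, err
            }
            // ProjectScan exposes only the computed expressions to its parent.
            return fmt.Sprintf("SELECT %s FROM (%s) AS %s", strings.Join(s.exprs, ", "), inner, in.alias),
                scope{alias: alias, columns: s.exprs}, nil
        default:
            return "", scope{}, fmt.Errorf("unsupported scan node %T", n)
        }
    }

    func main() {
        // ProjectScan(FilterScan(TableScan(users))) built bottom-up.
        tree := projectScan{
            input: filterScan{
                input: tableScan{table: "users", columns: []string{"id", "age"}},
                where: "age >= 18",
            },
            exprs: []string{"id"},
        }
        sql, _, err := formatScan(tree, 0)
        if err != nil {
            panic(err)
        }
        fmt.Println(sql)
    }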
SCAN TYPES AND THEIR ROLES:
- TableScan: Base data source, exposes table columns
- ProjectScan: Computes expressions and exposes computed columns
- JoinScan: Combines left/right scans, exposes merged column set
- FilterScan: Adds WHERE conditions, passes through input columns
- ArrayScan: UNNEST operations, exposes array element columns
- AggregateScan: GROUP BY operations, exposes aggregate result columns
- SetOperationScan: UNION/INTERSECT/EXCEPT, exposes unified column set
- OrderByScan: Sorting operations, passes through input columns
- LimitOffsetScan: Pagination, passes through input columns
- AnalyticScan: Window functions, exposes input plus analytic columns
- WithScan: Common table expressions, manages WITH clause scoping
The fragment context maintains column mappings and scope information to ensure proper column resolution throughout the scan tree traversal.
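As a rough illustration of that bookkeeping, the sketch below keeps a map from column IDs to scope-qualified expressions. The struct and method names are hypothetical and do not mirror the package's actual FragmentContext API.

    package main

    import "fmt"

    type columnInfo struct {
        name       string
        scopeAlias string // which scan scope currently exposes the column
    }

    // fragmentContext maps ZetaSQL column IDs to their current SQL expression.
    type fragmentContext struct {
        columns map[int]columnInfo
    }

    func newFragmentContext() *fragmentContext {
        return &fragmentContext{columns: map[int]columnInfo{}}
    }

    // addAvailableColumn is called when a scan exposes a column to its parent.
    func (fc *fragmentContext) addAvailableColumn(id int, name, scopeAlias string) {
        fc.columns[id] = columnInfo{name: name, scopeAlias: scopeAlias}
    }

    // qualifiedColumnRef resolves a column ID to a scope-qualified reference,
    // which is how a parent scan refers to a column produced by a child scan.
    func (fc *fragmentContext) qualifiedColumnRef(id int) (string, error) {
        info, ok := fc.columns[id]
        if !ok {
            return "", fmt.Errorf("column %d is not visible in the current scope", id)
        }
        return fmt.Sprintf("`%s`.`%s`", info.scopeAlias, info.name), nil
    }

    func main() {
        fc := newFragmentContext()
        fc.addAvailableColumn(1, "id", "s1")   // exposed by a table scan
        fc.addAvailableColumn(2, "name", "s1") // exposed by a table scan
        ref, err := fc.qualifiedColumnRef(2)
        if err != nil {
            panic(err)
        }
        fmt.Println(ref) // `s1`.`name`
    }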
Index
- Constants
- Variables
- func CurrentTime(ctx context.Context) *time.Time
- func DateFromInt64Value(v int64) (time.Time, error)
- func EncodeGoValue(t types.Type, v interface{}) (interface{}, error)
- func EncodeGoValues(v []interface{}, params []*ast.ParameterNode) ([]interface{}, error)
- func EncodeNamedValues(v []driver.NamedValue, params []*ast.ParameterNode) ([]sql.NamedArg, error)
- func EncodeValue(v Value) (interface{}, error)
- func GetNodesByBehavior(behavior ScopeBehavior) []ast.Kind
- func GetUniqueColumnName(column *ast.Column) string
- func IsScopeFilter(nodeKind ast.Kind) bool
- func IsScopeMerger(nodeKind ast.Kind) bool
- func IsScopeOpener(nodeKind ast.Kind) bool
- func IsScopePassthrough(nodeKind ast.Kind) bool
- func IsScopeTransformer(nodeKind ast.Kind) bool
- func LiteralFromValue(v Value) (string, error)
- func LiteralFromZetaSQLValue(v types.Value) (string, error)
- func RegisterFunctions() error
- func TimestampFromFloatValue(f float64) (time.Time, error)
- func TimestampFromInt64Value(v int64) (time.Time, error)
- func ValidateColumnFlow(nodeKind ast.Kind, inputColumns, outputColumns []string) error
- func WithCurrentTime(ctx context.Context, now time.Time) context.Context
- type ANY_VALUE
- type APPROX_COUNT_DISTINCT
- type APPROX_QUANTILES
- type APPROX_TOP_COUNT
- type APPROX_TOP_SUM
- type ARRAY
- type ARRAY_AGG
- type ARRAY_CONCAT_AGG
- type AVG
- type AggregateBindFunction
- type AggregateFuncInfo
- type AggregateNameAndFunc
- type AggregateOrderBy
- type AggregateScanData
- type AggregateScanTransformer
- type Aggregator
- func (a *Aggregator) Final(ctx *sqlite.FunctionContext)
- func (a *Aggregator) Step(ctx *sqlite.FunctionContext, stepArgs []driver.Value) error
- func (a *Aggregator) WindowInverse(ctx *sqlite.FunctionContext, rowArgs []driver.Value) error
- func (a *Aggregator) WindowValue(ctx *sqlite.FunctionContext) (driver.Value, error)
- type AggregatorFuncOption
- type AggregatorFuncOptionType
- type AggregatorOption
- type AliasGenerator
- type AnalyticScanData
- type AnalyticScanTransformer
- type Analyzer
- func (a *Analyzer) AddNamePath(path string) error
- func (a *Analyzer) Analyze(ctx context.Context, conn *Conn, query string, args []driver.NamedValue) ([]StmtActionFunc, error)
- func (a *Analyzer) MaxNamePath() int
- func (a *Analyzer) NamePath() []string
- func (a *Analyzer) SetAutoIndexMode(enabled bool)
- func (a *Analyzer) SetExplainMode(enabled bool)
- func (a *Analyzer) SetMaxNamePath(num int)
- func (a *Analyzer) SetNamePath(path []string) error
- type ArgumentInfo
- type ArrayScanData
- type ArrayScanTransformer
- type ArrayValue
- func (av *ArrayValue) Add(v Value) (Value, error)
- func (av *ArrayValue) Div(v Value) (Value, error)
- func (av *ArrayValue) EQ(v Value) (bool, error)
- func (av *ArrayValue) Format(verb rune) string
- func (av *ArrayValue) GT(v Value) (bool, error)
- func (av *ArrayValue) GTE(v Value) (bool, error)
- func (av *ArrayValue) Has(v Value) (bool, error)
- func (av *ArrayValue) Interface() interface{}
- func (av *ArrayValue) LT(v Value) (bool, error)
- func (av *ArrayValue) LTE(v Value) (bool, error)
- func (av *ArrayValue) Mul(v Value) (Value, error)
- func (av *ArrayValue) Sub(v Value) (Value, error)
- func (av *ArrayValue) ToArray() (*ArrayValue, error)
- func (av *ArrayValue) ToBool() (bool, error)
- func (av *ArrayValue) ToBytes() ([]byte, error)
- func (av *ArrayValue) ToFloat64() (float64, error)
- func (av *ArrayValue) ToInt64() (int64, error)
- func (av *ArrayValue) ToJSON() (string, error)
- func (av *ArrayValue) ToRat() (*big.Rat, error)
- func (av *ArrayValue) ToString() (string, error)
- func (av *ArrayValue) ToStruct() (*StructValue, error)
- func (av *ArrayValue) ToTime() (time.Time, error)
- type BIT_AND_AGG
- type BIT_OR_AGG
- type BIT_XOR_AGG
- type BeginStmtAction
- func (a *BeginStmtAction) Args() []interface{}
- func (a *BeginStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *BeginStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *BeginStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *BeginStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type BinaryExpression
- type BinaryExpressionData
- type BindFunction
- type BoolValue
- func (bv BoolValue) Add(v Value) (Value, error)
- func (bv BoolValue) Div(v Value) (Value, error)
- func (bv BoolValue) EQ(v Value) (bool, error)
- func (bv BoolValue) Format(verb rune) string
- func (bv BoolValue) GT(v Value) (bool, error)
- func (bv BoolValue) GTE(v Value) (bool, error)
- func (bv BoolValue) Interface() interface{}
- func (bv BoolValue) LT(v Value) (bool, error)
- func (bv BoolValue) LTE(v Value) (bool, error)
- func (bv BoolValue) Mul(v Value) (Value, error)
- func (bv BoolValue) Sub(v Value) (Value, error)
- func (bv BoolValue) ToArray() (*ArrayValue, error)
- func (bv BoolValue) ToBool() (bool, error)
- func (bv BoolValue) ToBytes() ([]byte, error)
- func (bv BoolValue) ToFloat64() (float64, error)
- func (bv BoolValue) ToInt64() (int64, error)
- func (bv BoolValue) ToJSON() (string, error)
- func (bv BoolValue) ToRat() (*big.Rat, error)
- func (bv BoolValue) ToString() (string, error)
- func (bv BoolValue) ToStruct() (*StructValue, error)
- func (bv BoolValue) ToTime() (time.Time, error)
- type BytesValue
- func (bv BytesValue) Add(v Value) (Value, error)
- func (bv BytesValue) Div(v Value) (Value, error)
- func (bv BytesValue) EQ(v Value) (bool, error)
- func (bv BytesValue) Format(verb rune) string
- func (bv BytesValue) GT(v Value) (bool, error)
- func (bv BytesValue) GTE(v Value) (bool, error)
- func (bv BytesValue) Interface() interface{}
- func (bv BytesValue) LT(v Value) (bool, error)
- func (bv BytesValue) LTE(v Value) (bool, error)
- func (bv BytesValue) Mul(v Value) (Value, error)
- func (bv BytesValue) Sub(v Value) (Value, error)
- func (bv BytesValue) ToArray() (*ArrayValue, error)
- func (bv BytesValue) ToBool() (bool, error)
- func (bv BytesValue) ToBytes() ([]byte, error)
- func (bv BytesValue) ToFloat64() (float64, error)
- func (bv BytesValue) ToInt64() (int64, error)
- func (bv BytesValue) ToJSON() (string, error)
- func (bv BytesValue) ToRat() (*big.Rat, error)
- func (bv BytesValue) ToString() (string, error)
- func (bv BytesValue) ToStruct() (*StructValue, error)
- func (bv BytesValue) ToTime() (time.Time, error)
- type CORR
- type COUNT
- type COUNTIF
- type COUNT_STAR
- type COVAR_POP
- type COVAR_SAMP
- type CaseExpression
- type CaseExpressionData
- type CastData
- type CastTransformer
- type Catalog
- func (c *Catalog) AddNewFunctionSpec(ctx context.Context, conn *Conn, spec *FunctionSpec) error
- func (c *Catalog) AddNewTableSpec(ctx context.Context, conn *Conn, spec *TableSpec) error
- func (c *Catalog) DeleteFunctionSpec(ctx context.Context, conn *Conn, name string) error
- func (c *Catalog) DeleteTableSpec(ctx context.Context, conn *Conn, name string) error
- func (c *Catalog) ExtendedTypeSuperTypes(typ types.Type) (*types.TypeListView, error)
- func (c *Catalog) FindConnection(path []string) (types.Connection, error)
- func (c *Catalog) FindConstant(path []string) (types.Constant, int, error)
- func (c *Catalog) FindConversion(from, to types.Type) (types.Conversion, error)
- func (c *Catalog) FindFunction(path []string) (*types.Function, error)
- func (c *Catalog) FindModel(path []string) (types.Model, error)
- func (c *Catalog) FindProcedure(path []string) (*types.Procedure, error)
- func (c *Catalog) FindTable(path []string) (types.Table, error)
- func (c *Catalog) FindTableValuedFunction(path []string) (types.TableValuedFunction, error)
- func (c *Catalog) FindType(path []string) (types.Type, error)
- func (c *Catalog) FullName() string
- func (c *Catalog) SuggestConstant(mistypedPath []string) string
- func (c *Catalog) SuggestFunction(mistypedPath []string) string
- func (c *Catalog) SuggestModel(mistypedPath []string) string
- func (c *Catalog) SuggestTable(mistypedPath []string) string
- func (c *Catalog) SuggestTableValuedFunction(mistypedPath []string) string
- func (c *Catalog) Sync(ctx context.Context, conn *Conn) error
- type CatalogSpecKind
- type ChangedCatalog
- type ChangedFunction
- type ChangedTable
- type ColumnData
- type ColumnDefinition
- type ColumnDefinitionData
- type ColumnInfo
- type ColumnListProvider
- type ColumnMapping
- type ColumnRefData
- type ColumnRefTransformer
- type ColumnSpec
- type CombinationFormatTimeInfo
- type CommitStmtAction
- func (a *CommitStmtAction) Args() []interface{}
- func (a *CommitStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *CommitStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *CommitStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *CommitStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type CompoundSQLFragment
- type ComputedColumnData
- type Conn
- type Coordinator
- type CreateData
- type CreateFunctionData
- type CreateFunctionStatement
- type CreateFunctionStmt
- type CreateFunctionStmtAction
- func (a *CreateFunctionStmtAction) Args() []interface{}
- func (a *CreateFunctionStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *CreateFunctionStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *CreateFunctionStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *CreateFunctionStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type CreateTableAsSelectStmtTransformer
- type CreateTableData
- type CreateTableStatement
- type CreateTableStmt
- type CreateTableStmtAction
- func (a *CreateTableStmtAction) Args() []interface{}
- func (a *CreateTableStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *CreateTableStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *CreateTableStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *CreateTableStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type CreateType
- type CreateViewData
- type CreateViewStatement
- type CreateViewStmt
- type CreateViewStmtAction
- func (a *CreateViewStmtAction) Args() []interface{}
- func (a *CreateViewStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *CreateViewStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *CreateViewStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *CreateViewStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type CreateViewStmtTransformer
- type CustomInverseWindowAggregate
- type CustomStepWindowAggregate
- type DMLStmt
- func (s *DMLStmt) CheckNamedValue(value *driver.NamedValue) error
- func (s *DMLStmt) Close() error
- func (s *DMLStmt) Exec(args []driver.Value) (driver.Result, error)
- func (s *DMLStmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error)
- func (s *DMLStmt) NumInput() int
- func (s *DMLStmt) Query(args []driver.Value) (driver.Rows, error)
- func (s *DMLStmt) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error)
- type DMLStmtAction
- func (a *DMLStmtAction) Args() []interface{}
- func (a *DMLStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *DMLStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *DMLStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *DMLStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type DMLStmtTransformer
- type DateValue
- func (d DateValue) Add(v Value) (Value, error)
- func (d DateValue) AddDateWithInterval(v int, interval string) (Value, error)
- func (d DateValue) Div(v Value) (Value, error)
- func (d DateValue) EQ(v Value) (bool, error)
- func (d DateValue) Format(verb rune) string
- func (d DateValue) GT(v Value) (bool, error)
- func (d DateValue) GTE(v Value) (bool, error)
- func (d DateValue) Interface() interface{}
- func (d DateValue) LT(v Value) (bool, error)
- func (d DateValue) LTE(v Value) (bool, error)
- func (d DateValue) Mul(v Value) (Value, error)
- func (d DateValue) Sub(v Value) (Value, error)
- func (d DateValue) ToArray() (*ArrayValue, error)
- func (d DateValue) ToBool() (bool, error)
- func (d DateValue) ToBytes() ([]byte, error)
- func (d DateValue) ToFloat64() (float64, error)
- func (d DateValue) ToInt64() (int64, error)
- func (d DateValue) ToJSON() (string, error)
- func (d DateValue) ToRat() (*big.Rat, error)
- func (d DateValue) ToString() (string, error)
- func (d DateValue) ToStruct() (*StructValue, error)
- func (d DateValue) ToTime() (time.Time, error)
- type DatetimeValue
- func (d DatetimeValue) Add(v Value) (Value, error)
- func (d DatetimeValue) Div(v Value) (Value, error)
- func (d DatetimeValue) EQ(v Value) (bool, error)
- func (d DatetimeValue) Format(verb rune) string
- func (d DatetimeValue) GT(v Value) (bool, error)
- func (d DatetimeValue) GTE(v Value) (bool, error)
- func (d DatetimeValue) Interface() interface{}
- func (d DatetimeValue) LT(v Value) (bool, error)
- func (d DatetimeValue) LTE(v Value) (bool, error)
- func (d DatetimeValue) Mul(v Value) (Value, error)
- func (d DatetimeValue) Sub(v Value) (Value, error)
- func (d DatetimeValue) ToArray() (*ArrayValue, error)
- func (d DatetimeValue) ToBool() (bool, error)
- func (d DatetimeValue) ToBytes() ([]byte, error)
- func (d DatetimeValue) ToFloat64() (float64, error)
- func (d DatetimeValue) ToInt64() (int64, error)
- func (d DatetimeValue) ToJSON() (string, error)
- func (d DatetimeValue) ToRat() (*big.Rat, error)
- func (d DatetimeValue) ToString() (string, error)
- func (d DatetimeValue) ToStruct() (*StructValue, error)
- func (d DatetimeValue) ToTime() (time.Time, error)
- type DayOfWeek
- type DefaultFragmentContext
- func (fc *DefaultFragmentContext) AddAvailableColumn(columnID int, info *ColumnInfo)
- func (fc *DefaultFragmentContext) AddAvailableColumnsForDML(scanData *ScanData)
- func (fc *DefaultFragmentContext) EnterScope() ScopeToken
- func (fc *DefaultFragmentContext) ExitScope(token ScopeToken)
- func (fc *DefaultFragmentContext) GetColumnExpression(columnID int) *SQLExpression
- func (fc *DefaultFragmentContext) GetID() string
- func (fc *DefaultFragmentContext) GetQualifiedColumnExpression(columnID int) *SQLExpression
- func (fc *DefaultFragmentContext) GetQualifiedColumnRef(columnID int) (string, string)
- func (fc *DefaultFragmentContext) RegisterColumnScope(columnID int, scopeAlias string)
- func (fc *DefaultFragmentContext) RegisterColumnScopeMapping(scopeAlias string, columns []*ColumnData)
- type DefaultScopeToken
- type DefaultTransformContext
- func (c *DefaultTransformContext) AddWithEntryColumnMapping(name string, columns []*ColumnData)
- func (c *DefaultTransformContext) Config() *TransformConfig
- func (c *DefaultTransformContext) Context() context.Context
- func (c *DefaultTransformContext) FragmentContext() FragmentContextProvider
- func (c *DefaultTransformContext) GetWithEntryMapping(name string) map[string]string
- func (c *DefaultTransformContext) WithFragmentContext(fc FragmentContextProvider) TransformContext
- type DeleteData
- type DeleteStatement
- type DisableQueryFormattingKey
- type DropData
- type DropStatement
- type DropStmtAction
- func (a *DropStmtAction) Args() []interface{}
- func (a *DropStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *DropStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *DropStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *DropStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type DropStmtTransformer
- type ErrorGroup
- type ExistsExpression
- type ExpressionData
- type ExpressionTransformer
- func NewAggregateFunctionTransformer(coord Coordinator) ExpressionTransformer
- func NewAnalyticFunctionTransformer(coord Coordinator) ExpressionTransformer
- func NewComputedColumnTransformer(coord Coordinator) ExpressionTransformer
- func NewDMLDefaultTransformer() ExpressionTransformer
- func NewDMLValueTransformer(coord Coordinator) ExpressionTransformer
- func NewGetJsonFieldTransformer(coord Coordinator) ExpressionTransformer
- func NewGetStructFieldTransformer(coord Coordinator) ExpressionTransformer
- func NewMakeStructTransformer(coord Coordinator) ExpressionTransformer
- func NewOutputColumnTransformer(coord Coordinator) ExpressionTransformer
- func NewSubqueryExprTransformer(coord Coordinator) ExpressionTransformer
- type ExpressionType
- type FilterScanData
- type FilterScanTransformer
- type FloatValue
- func (fv FloatValue) Add(v Value) (Value, error)
- func (fv FloatValue) Div(v Value) (Value, error)
- func (fv FloatValue) EQ(v Value) (bool, error)
- func (fv FloatValue) Format(verb rune) string
- func (fv FloatValue) GT(v Value) (bool, error)
- func (fv FloatValue) GTE(v Value) (bool, error)
- func (fv FloatValue) Interface() interface{}
- func (fv FloatValue) LT(v Value) (bool, error)
- func (fv FloatValue) LTE(v Value) (bool, error)
- func (fv FloatValue) Mul(v Value) (Value, error)
- func (fv FloatValue) Sub(v Value) (Value, error)
- func (fv FloatValue) ToArray() (*ArrayValue, error)
- func (fv FloatValue) ToBool() (bool, error)
- func (fv FloatValue) ToBytes() ([]byte, error)
- func (fv FloatValue) ToFloat64() (float64, error)
- func (fv FloatValue) ToInt64() (int64, error)
- func (fv FloatValue) ToJSON() (string, error)
- func (fv FloatValue) ToRat() (*big.Rat, error)
- func (fv FloatValue) ToString() (string, error)
- func (fv FloatValue) ToStruct() (*StructValue, error)
- func (fv FloatValue) ToTime() (time.Time, error)
- type FormatContext
- type FormatFlag
- type FormatInfo
- type FormatParam
- type FormatPrecision
- type FormatTimeInfo
- type FormatWidth
- type Formatter
- type FragmentContext
- func (fc *FragmentContext) AddAvailableColumn(column *ast.Column, info *ColumnInfo)
- func (fc *FragmentContext) AddWithEntryColumnMapping(name string, columns []*ast.Column)
- func (fc *FragmentContext) FilterScope(scopeType string, list []*ast.Column)
- func (fc *FragmentContext) GetColumnExpression(column *ast.Column) *SQLExpression
- func (fc *FragmentContext) OpenScope(scopeType string, columns []*ast.Column) ScopeInfo
- func (fc *FragmentContext) PopScope(alias string) *ScopeInfo
- func (fc *FragmentContext) PushScope(scopeType string)
- func (fc *FragmentContext) UseScope(scopeType string) func()
- type FragmentContextProvider
- type FragmentStorage
- type FrameBound
- type FrameBoundData
- type FrameClause
- type FrameClauseData
- type FromItem
- type FromItemType
- type FuncInfo
- type FunctionCall
- type FunctionCallData
- type FunctionCallTransformer
- type FunctionSignature
- type FunctionSpec
- func (s *FunctionSpec) CallSQL(ctx context.Context, callNode *ast.BaseFunctionCallNode, ...) (*SQLExpression, error)
- func (s *FunctionSpec) CallSQLData(ctx context.Context, functionData *FunctionCallData, ...) (*SQLExpression, error)
- func (s *FunctionSpec) FuncName() string
- func (s *FunctionSpec) SQL() string
- type GroupingSetData
- type HLL_COUNT_INIT
- type HLL_COUNT_MERGE
- type HLL_COUNT_MERGE_PARTIAL
- type InsertData
- type InsertStatement
- type IntValue
- func (iv IntValue) Add(v Value) (Value, error)
- func (iv IntValue) Div(v Value) (Value, error)
- func (iv IntValue) EQ(v Value) (bool, error)
- func (iv IntValue) Format(verb rune) string
- func (iv IntValue) GT(v Value) (bool, error)
- func (iv IntValue) GTE(v Value) (bool, error)
- func (iv IntValue) Interface() interface{}
- func (iv IntValue) LT(v Value) (bool, error)
- func (iv IntValue) LTE(v Value) (bool, error)
- func (iv IntValue) Mul(v Value) (Value, error)
- func (iv IntValue) Sub(v Value) (Value, error)
- func (iv IntValue) ToArray() (*ArrayValue, error)
- func (iv IntValue) ToBool() (bool, error)
- func (iv IntValue) ToBytes() ([]byte, error)
- func (iv IntValue) ToFloat64() (float64, error)
- func (iv IntValue) ToInt64() (int64, error)
- func (iv IntValue) ToJSON() (string, error)
- func (iv IntValue) ToRat() (*big.Rat, error)
- func (iv IntValue) ToString() (string, error)
- func (iv IntValue) ToStruct() (*StructValue, error)
- func (iv IntValue) ToTime() (time.Time, error)
- type IntervalValue
- func (iv *IntervalValue) Add(v Value) (Value, error)
- func (iv *IntervalValue) Div(v Value) (Value, error)
- func (iv *IntervalValue) EQ(v Value) (bool, error)
- func (iv *IntervalValue) Format(verb rune) string
- func (iv *IntervalValue) GT(v Value) (bool, error)
- func (iv *IntervalValue) GTE(v Value) (bool, error)
- func (iv *IntervalValue) Interface() interface{}
- func (iv *IntervalValue) LT(v Value) (bool, error)
- func (iv *IntervalValue) LTE(v Value) (bool, error)
- func (iv *IntervalValue) Mul(v Value) (Value, error)
- func (iv *IntervalValue) Sub(v Value) (Value, error)
- func (iv *IntervalValue) ToArray() (*ArrayValue, error)
- func (iv *IntervalValue) ToBool() (bool, error)
- func (iv *IntervalValue) ToBytes() ([]byte, error)
- func (iv *IntervalValue) ToFloat64() (float64, error)
- func (iv *IntervalValue) ToInt64() (int64, error)
- func (iv *IntervalValue) ToJSON() (string, error)
- func (iv *IntervalValue) ToRat() (*big.Rat, error)
- func (iv *IntervalValue) ToString() (string, error)
- func (iv *IntervalValue) ToStruct() (*StructValue, error)
- func (iv *IntervalValue) ToTime() (time.Time, error)
- type JoinClause
- type JoinScanData
- type JoinScanTransformer
- type JoinType
- type JsonValue
- func (jv JsonValue) Add(v Value) (Value, error)
- func (jv JsonValue) Div(v Value) (Value, error)
- func (jv JsonValue) EQ(v Value) (bool, error)
- func (jv JsonValue) Format(verb rune) string
- func (jv JsonValue) GT(v Value) (bool, error)
- func (jv JsonValue) GTE(v Value) (bool, error)
- func (jv JsonValue) Interface() interface{}
- func (jv JsonValue) LT(v Value) (bool, error)
- func (jv JsonValue) LTE(v Value) (bool, error)
- func (jv JsonValue) Mul(v Value) (Value, error)
- func (jv JsonValue) Sub(v Value) (Value, error)
- func (jv JsonValue) ToArray() (*ArrayValue, error)
- func (jv JsonValue) ToBool() (bool, error)
- func (jv JsonValue) ToBytes() ([]byte, error)
- func (jv JsonValue) ToFloat64() (float64, error)
- func (jv JsonValue) ToInt64() (int64, error)
- func (jv JsonValue) ToJSON() (string, error)
- func (jv JsonValue) ToRat() (*big.Rat, error)
- func (jv JsonValue) ToString() (string, error)
- func (jv JsonValue) ToStruct() (*StructValue, error)
- func (jv JsonValue) ToTime() (time.Time, error)
- func (jv JsonValue) Type() string
- type LOGICAL_AND
- type LOGICAL_OR
- type LimitClause
- type LimitData
- type LimitScanData
- type LimitScanTransformer
- type LiteralData
- type LiteralTransformer
- type MAX
- type MIN
- type MergeData
- type MergeStatement
- type MergeStmtAction
- func (a *MergeStmtAction) Args() []interface{}
- func (a *MergeStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *MergeStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *MergeStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *MergeStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type MergeStmtTransformer
- type MergeWhenClause
- type MergeWhenClauseData
- type Month
- type NameAndFunc
- type NamePath
- type NameWithType
- type NodeExtractor
- func (e *NodeExtractor) ExtractExpressionData(node ast.Node, ctx TransformContext) (ExpressionData, error)
- func (e *NodeExtractor) ExtractScanData(node ast.Node, ctx TransformContext) (ScanData, error)
- func (e *NodeExtractor) ExtractStatementData(node ast.Node, ctx TransformContext) (StatementData, error)
- func (e *NodeExtractor) SetCoordinator(coordinator Coordinator)
- type NodeID
- type NullStmtAction
- func (a *NullStmtAction) Args() []interface{}
- func (a *NullStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *NullStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *NullStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *NullStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type NumericValue
- func (nv *NumericValue) Add(v Value) (Value, error)
- func (nv *NumericValue) Div(v Value) (ret Value, e error)
- func (nv *NumericValue) EQ(v Value) (bool, error)
- func (nv *NumericValue) Format(verb rune) string
- func (nv *NumericValue) GT(v Value) (bool, error)
- func (nv *NumericValue) GTE(v Value) (bool, error)
- func (nv *NumericValue) Interface() interface{}
- func (nv *NumericValue) LT(v Value) (bool, error)
- func (nv *NumericValue) LTE(v Value) (bool, error)
- func (nv *NumericValue) Mul(v Value) (Value, error)
- func (nv *NumericValue) Sub(v Value) (Value, error)
- func (nv *NumericValue) ToArray() (*ArrayValue, error)
- func (nv *NumericValue) ToBool() (bool, error)
- func (nv *NumericValue) ToBytes() ([]byte, error)
- func (nv *NumericValue) ToFloat64() (float64, error)
- func (nv *NumericValue) ToInt64() (int64, error)
- func (nv *NumericValue) ToJSON() (string, error)
- func (nv *NumericValue) ToRat() (*big.Rat, error)
- func (nv *NumericValue) ToString() (string, error)
- func (nv *NumericValue) ToStruct() (*StructValue, error)
- func (nv *NumericValue) ToTime() (time.Time, error)
- type OrderByItem
- type OrderByItemData
- type OrderByScanData
- type OrderByScanTransformer
- type OrderedValue
- type OutputColumnListProvider
- type ParameterData
- type ParameterDefinition
- type ParameterDefinitionData
- type ParameterTransformer
- type ParseFunction
- type ParseLocation
- type ProjectScanData
- type ProjectScanTransformer
- type QueryCoordinator
- func (c *QueryCoordinator) GetRegisteredExpressionTypes() []string
- func (c *QueryCoordinator) GetRegisteredScanTypes() []string
- func (c *QueryCoordinator) GetRegisteredStatementTypes() []string
- func (c *QueryCoordinator) RegisterExpressionTransformer(nodeType reflect.Type, transformer ExpressionTransformer)
- func (c *QueryCoordinator) RegisterScanTransformer(nodeType reflect.Type, transformer ScanTransformer)
- func (c *QueryCoordinator) RegisterStatementTransformer(nodeType reflect.Type, transformer StatementTransformer)
- func (c *QueryCoordinator) TransformExpression(exprData ExpressionData, ctx TransformContext) (*SQLExpression, error)
- func (c *QueryCoordinator) TransformExpressionDataList(exprDataList []ExpressionData, ctx TransformContext) ([]*SQLExpression, error)
- func (c *QueryCoordinator) TransformOptionalExpressionData(exprData *ExpressionData, ctx TransformContext) (*SQLExpression, error)
- func (c *QueryCoordinator) TransformScan(scanData ScanData, ctx TransformContext) (*FromItem, error)
- func (c *QueryCoordinator) TransformStatement(stmtData StatementData, ctx TransformContext) (SQLFragment, error)
- func (c *QueryCoordinator) TransformStatementNode(node ast.Node, ctx TransformContext) (SQLFragment, error)
- func (c *QueryCoordinator) TransformWithEntry(scanData ScanData, ctx TransformContext) (*WithClause, error)
- type QueryStmt
- func (s *QueryStmt) CheckNamedValue(value *driver.NamedValue) error
- func (s *QueryStmt) Close() error
- func (s *QueryStmt) Exec(args []driver.Value) (driver.Result, error)
- func (s *QueryStmt) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error)
- func (s *QueryStmt) NumInput() int
- func (s *QueryStmt) OutputColumns() []*ColumnSpec
- func (s *QueryStmt) Query(args []driver.Value) (driver.Rows, error)
- func (s *QueryStmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error)
- type QueryStmtAction
- func (a *QueryStmtAction) Args() []interface{}
- func (a *QueryStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *QueryStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *QueryStmtAction) ExplainQueryPlan(ctx context.Context, conn *Conn) error
- func (a *QueryStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *QueryStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type QueryStmtTransformer
- type QueryTransformFactory
- func (f *QueryTransformFactory) CreateCoordinator() Coordinator
- func (f *QueryTransformFactory) CreateTransformContext(ctx context.Context) TransformContext
- func (f *QueryTransformFactory) GetRegisteredTransformers() map[string][]string
- func (f *QueryTransformFactory) TransformQuery(ctx context.Context, queryNode ast.Node) (*TransformResult, error)
- type ResolvedAggregateScan
- type ResolvedAnalyticScan
- type ResolvedArrayScan
- type ResolvedBarrierScan
- type ResolvedCloneScan
- type ResolvedExecuteAsRoleScan
- type ResolvedFilterScan
- type ResolvedGroupRowsScan
- type ResolvedJoinScan
- type ResolvedLimitOffsetScan
- type ResolvedMatchRecognizeScan
- type ResolvedOrderByScan
- type ResolvedPivotScan
- type ResolvedProjectScan
- type ResolvedRecursiveScan
- type ResolvedRelationArgumentScan
- type ResolvedSampleScan
- type ResolvedSetOperationScan
- type ResolvedSingleRowScan
- type ResolvedSubqueryScan
- type ResolvedTVFScan
- type ResolvedTableScan
- type ResolvedUnpivotScan
- type ResolvedValueTableScan
- type ResolvedWithRefScan
- type ResolvedWithScan
- type Result
- type Rows
- type SQLBuilderVisitor
- func (v *SQLBuilderVisitor) VisitAggregateFunctionCallNode(node *ast.AggregateFunctionCallNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitAggregateScanNode(node *ast.AggregateScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitAnalyticFunctionCallNode(node *ast.AnalyticFunctionCallNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitAnalyticFunctionGroupNode(node *ast.AnalyticFunctionGroupNode) ([]*SelectListItem, error)
- func (v *SQLBuilderVisitor) VisitAnalyticScanNode(node *ast.AnalyticScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitArgumentRefNode(node *ast.ArgumentRefNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitArrayScan(node *ast.ArrayScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitCastNode(node *ast.CastNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitColumnRefNode(node *ast.ColumnRefNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitComputedColumnNode(node *ast.ComputedColumnNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitCreateFunctionStmt(node *ast.CreateFunctionStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitCreateTableAsSelectStmt(node *ast.CreateTableAsSelectStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitCreateViewStatement(node *ast.CreateViewStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitDMLDefaultNode(node *ast.DMLDefaultNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitDMLStatement(node ast.Node) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitDMLValueNode(node *ast.DMLValueNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitDeleteStatement(node *ast.DeleteStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitDropFunctionStmt(node *ast.DropFunctionStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitDropStmt(node *ast.DropStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitExpression(expr ast.Node) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitFilterScanNode(node *ast.FilterScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitFunctionCallNode(node *ast.FunctionCallNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitGetJsonFieldNode(node *ast.GetJsonFieldNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitGetStructFieldNode(node *ast.GetStructFieldNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitInsertRowNode(node *ast.InsertRowNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitInsertStatement(node *ast.InsertStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitJoinScan(node *ast.JoinScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitLimitOffsetScanNode(node *ast.LimitOffsetScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitLiteralNode(node *ast.LiteralNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitMakeStructNode(node *ast.MakeStructNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitMergeStatement(node *ast.MergeStmtNode) ([]*SQLExpression, error)
- func (v *SQLBuilderVisitor) VisitOrderByItemNode(node *ast.OrderByItemNode) ([]*OrderByItem, error)
- func (v *SQLBuilderVisitor) VisitOrderByScanNode(node *ast.OrderByScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitOutputColumnNode(node *ast.OutputColumnNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitParameterNode(node *ast.ParameterNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitProjectScan(node *ast.ProjectScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitQuery(node *ast.QueryStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitSQL(node *ast.CreateTableStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitScan(scan ast.Node) (*FromItem, error)
- func (v *SQLBuilderVisitor) VisitSetOperationItemNode(node *ast.SetOperationItemNode) (*FromItem, error)
- func (v *SQLBuilderVisitor) VisitSetOperationScanNode(node *ast.SetOperationScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitSingleRowScanNode(node *ast.SingleRowScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitSubqueryExpressionNode(node *ast.SubqueryExprNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitTableScan(node *ast.TableScanNode, fromOnly bool) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitTruncateStmt(node *ast.TruncateStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitUpdateItem(node *ast.UpdateItemNode) (*SetItem, error)
- func (v *SQLBuilderVisitor) VisitUpdateStatement(node *ast.UpdateStmtNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitWithEntryNode(node *ast.WithEntryNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitWithRefScanNode(node *ast.WithRefScanNode) (SQLFragment, error)
- func (v *SQLBuilderVisitor) VisitWithScanNode(node *ast.WithScanNode) (SQLFragment, error)
- type SQLExpression
- func NewBinaryExpression(left *SQLExpression, operator string, right *SQLExpression) *SQLExpression
- func NewCaseExpression(whenClauses []*WhenClause, elseExpr *SQLExpression) *SQLExpression
- func NewColumnExpression(column string, tableAlias ...string) *SQLExpression
- func NewExistsExpression(subquery *SelectStatement) *SQLExpression
- func NewFunctionExpression(name string, args ...*SQLExpression) *SQLExpression
- func NewLiteralExpression(value string) *SQLExpression
- func NewLiteralExpressionFromGoValue(t types.Type, value interface{}) (*SQLExpression, error)
- func NewSimpleCaseExpression(caseExpr *SQLExpression, whenClauses []*WhenClause, elseExpr *SQLExpression) *SQLExpression
- func NewStarExpression(tableAlias ...string) *SQLExpression
- func NewUniqueColumnExpression(column *ast.Column, tableAlias ...string) *SQLExpression
- type SQLFragment
- type SQLWriter
- type SQLiteFunction
- type STDDEV
- type STDDEV_POP
- type STDDEV_SAMP
- type STRING_AGG
- type SUM
- type SafeValue
- func (v *SafeValue) Add(arg Value) (Value, error)
- func (v *SafeValue) Div(arg Value) (Value, error)
- func (v *SafeValue) EQ(arg Value) (bool, error)
- func (v *SafeValue) Format(verb rune) string
- func (v *SafeValue) GT(arg Value) (bool, error)
- func (v *SafeValue) GTE(arg Value) (bool, error)
- func (v *SafeValue) Interface() interface{}
- func (v *SafeValue) LT(arg Value) (bool, error)
- func (v *SafeValue) LTE(arg Value) (bool, error)
- func (v *SafeValue) Mul(arg Value) (Value, error)
- func (v *SafeValue) Sub(arg Value) (Value, error)
- func (v *SafeValue) ToArray() (*ArrayValue, error)
- func (v *SafeValue) ToBool() (bool, error)
- func (v *SafeValue) ToBytes() ([]byte, error)
- func (v *SafeValue) ToFloat64() (float64, error)
- func (v *SafeValue) ToInt64() (int64, error)
- func (v *SafeValue) ToJSON() (string, error)
- func (v *SafeValue) ToRat() (*big.Rat, error)
- func (v *SafeValue) ToString() (string, error)
- func (v *SafeValue) ToStruct() (*StructValue, error)
- func (v *SafeValue) ToTime() (time.Time, error)
- type ScanData
- type ScanTransformer
- type ScanType
- type Scope
- type ScopeBehavior
- type ScopeInfo
- type ScopeManager
- type ScopeToken
- type SelectData
- type SelectItemData
- type SelectListItem
- type SelectStatement
- type SelectType
- type SetItem
- type SetItemData
- type SetOperation
- type SetOperationData
- type SetOperationScanTransformer
- type SingleRowScanTransformer
- type StatementData
- type StatementTransformer
- func NewCreateFunctionTransformer(coord Coordinator) StatementTransformer
- func NewCreateTableTransformer(coord Coordinator) StatementTransformer
- func NewDeleteTransformer(coord Coordinator) StatementTransformer
- func NewInsertTransformer(coord Coordinator) StatementTransformer
- func NewTruncateTransformer() StatementTransformer
- func NewUpdateTransformer(coord Coordinator) StatementTransformer
- type StatementType
- type StmtAction
- type StmtActionFunc
- type StringValue
- func (sv StringValue) Add(v Value) (Value, error)
- func (sv StringValue) Div(v Value) (Value, error)
- func (sv StringValue) EQ(v Value) (bool, error)
- func (sv StringValue) Format(verb rune) string
- func (sv StringValue) GT(v Value) (bool, error)
- func (sv StringValue) GTE(v Value) (bool, error)
- func (sv StringValue) Interface() interface{}
- func (sv StringValue) LT(v Value) (bool, error)
- func (sv StringValue) LTE(v Value) (bool, error)
- func (sv StringValue) Mul(v Value) (Value, error)
- func (sv StringValue) Sub(v Value) (Value, error)
- func (sv StringValue) ToArray() (*ArrayValue, error)
- func (sv StringValue) ToBool() (bool, error)
- func (sv StringValue) ToBytes() ([]byte, error)
- func (sv StringValue) ToFloat64() (float64, error)
- func (sv StringValue) ToInt64() (int64, error)
- func (sv StringValue) ToJSON() (string, error)
- func (sv StringValue) ToRat() (*big.Rat, error)
- func (sv StringValue) ToString() (string, error)
- func (sv StringValue) ToStruct() (*StructValue, error)
- func (sv StringValue) ToTime() (time.Time, error)
- type StructValue
- func (sv *StructValue) Add(v Value) (Value, error)
- func (sv *StructValue) Div(v Value) (Value, error)
- func (sv *StructValue) EQ(v Value) (bool, error)
- func (sv *StructValue) Format(verb rune) string
- func (sv *StructValue) GT(v Value) (bool, error)
- func (sv *StructValue) GTE(v Value) (bool, error)
- func (sv *StructValue) Interface() interface{}
- func (sv *StructValue) LT(v Value) (bool, error)
- func (sv *StructValue) LTE(v Value) (bool, error)
- func (sv *StructValue) Mul(v Value) (Value, error)
- func (sv *StructValue) Sub(v Value) (Value, error)
- func (sv *StructValue) ToArray() (*ArrayValue, error)
- func (sv *StructValue) ToBool() (bool, error)
- func (sv *StructValue) ToBytes() ([]byte, error)
- func (sv *StructValue) ToFloat64() (float64, error)
- func (sv *StructValue) ToInt64() (int64, error)
- func (sv *StructValue) ToJSON() (string, error)
- func (sv *StructValue) ToRat() (*big.Rat, error)
- func (sv *StructValue) ToString() (string, error)
- func (sv *StructValue) ToStruct() (*StructValue, error)
- func (sv *StructValue) ToTime() (time.Time, error)
- type StructValueLayout
- type SubqueryData
- type SubqueryTransformer
- type TableFunction
- type TableReference
- type TableScanData
- type TableScanTransformer
- type TableSpec
- type TimeFormatType
- type TimeParserPostProcessor
- type TimeValue
- func (t TimeValue) Add(v Value) (Value, error)
- func (t TimeValue) Div(v Value) (Value, error)
- func (t TimeValue) EQ(v Value) (bool, error)
- func (t TimeValue) Format(verb rune) string
- func (t TimeValue) GT(v Value) (bool, error)
- func (t TimeValue) GTE(v Value) (bool, error)
- func (t TimeValue) Interface() interface{}
- func (t TimeValue) LT(v Value) (bool, error)
- func (t TimeValue) LTE(v Value) (bool, error)
- func (t TimeValue) Mul(v Value) (Value, error)
- func (t TimeValue) Sub(v Value) (Value, error)
- func (t TimeValue) ToArray() (*ArrayValue, error)
- func (t TimeValue) ToBool() (bool, error)
- func (t TimeValue) ToBytes() ([]byte, error)
- func (t TimeValue) ToFloat64() (float64, error)
- func (t TimeValue) ToInt64() (int64, error)
- func (t TimeValue) ToJSON() (string, error)
- func (t TimeValue) ToRat() (*big.Rat, error)
- func (t TimeValue) ToString() (string, error)
- func (t TimeValue) ToStruct() (*StructValue, error)
- func (t TimeValue) ToTime() (time.Time, error)
- type TimestampValue
- func (t TimestampValue) Add(v Value) (Value, error)
- func (t TimestampValue) AddValueWithPart(v time.Duration, part string) (Value, error)
- func (t TimestampValue) Div(v Value) (Value, error)
- func (t TimestampValue) EQ(v Value) (bool, error)
- func (d TimestampValue) Format(verb rune) string
- func (t TimestampValue) GT(v Value) (bool, error)
- func (t TimestampValue) GTE(v Value) (bool, error)
- func (d TimestampValue) Interface() interface{}
- func (t TimestampValue) LT(v Value) (bool, error)
- func (t TimestampValue) LTE(v Value) (bool, error)
- func (t TimestampValue) Mul(v Value) (Value, error)
- func (t TimestampValue) Sub(v Value) (Value, error)
- func (t TimestampValue) ToArray() (*ArrayValue, error)
- func (t TimestampValue) ToBool() (bool, error)
- func (t TimestampValue) ToBytes() ([]byte, error)
- func (t TimestampValue) ToFloat64() (float64, error)
- func (t TimestampValue) ToInt64() (int64, error)
- func (t TimestampValue) ToJSON() (string, error)
- func (t TimestampValue) ToRat() (*big.Rat, error)
- func (t TimestampValue) ToString() (string, error)
- func (t TimestampValue) ToStruct() (*StructValue, error)
- func (t TimestampValue) ToTime() (time.Time, error)
- type TransformConfig
- type TransformContext
- type TransformResult
- type Transformer
- type TruncateStatement
- type TruncateStmtAction
- func (a *TruncateStmtAction) Args() []interface{}
- func (a *TruncateStmtAction) Cleanup(ctx context.Context, conn *Conn) error
- func (a *TruncateStmtAction) ExecContext(ctx context.Context, conn *Conn) (driver.Result, error)
- func (a *TruncateStmtAction) Prepare(ctx context.Context, conn *Conn) (driver.Stmt, error)
- func (a *TruncateStmtAction) QueryContext(ctx context.Context, conn *Conn) (*Rows, error)
- type Type
- func (t *Type) AvailableAutoIndex() bool
- func (t *Type) FormatType() string
- func (t *Type) FunctionArgumentType() (*types.FunctionArgumentType, error)
- func (t *Type) GoReflectType() (reflect.Type, error)
- func (t *Type) IsArray() bool
- func (t *Type) IsStruct() bool
- func (t *Type) ToZetaSQLType() (types.Type, error)
- type UpdateData
- type UpdateStatement
- type VARIANCE
- type VAR_POP
- type VAR_SAMP
- type Value
- func ABS(a Value) (Value, error)
- func ACOS(x Value) (Value, error)
- func ACOSH(x Value) (Value, error)
- func ADD(a, b Value) (Value, error)
- func AND(args ...Value) (Value, error)
- func ARRAY_CONCAT(args ...Value) (Value, error)
- func ARRAY_IN(a, b Value) (Value, error)
- func ARRAY_LENGTH(v *ArrayValue) (Value, error)
- func ARRAY_OFFSET(v Value, idx int) (Value, error)
- func ARRAY_ORDINAL(v Value, idx int) (Value, error)
- func ARRAY_REVERSE(v *ArrayValue) (Value, error)
- func ARRAY_SAFE_OFFSET(v Value, idx int) (Value, error)
- func ARRAY_SAFE_ORDINAL(v Value, idx int) (Value, error)
- func ARRAY_TO_STRING(arr *ArrayValue, delim string, nullText ...string) (Value, error)
- func ASCII(v string) (Value, error)
- func ASIN(x Value) (Value, error)
- func ASINH(x Value) (Value, error)
- func ATAN(x Value) (Value, error)
- func ATAN2(x, y Value) (Value, error)
- func ATANH(x Value) (Value, error)
- func BETWEEN(target, start, end Value) (Value, error)
- func BIT_AND(a, b Value) (Value, error)
- func BIT_COUNT(v Value) (Value, error)
- func BIT_LEFT_SHIFT(a, b Value) (Value, error)
- func BIT_NOT(a Value) (Value, error)
- func BIT_OR(a, b Value) (Value, error)
- func BIT_RIGHT_SHIFT(a, b Value) (Value, error)
- func BIT_XOR(a, b Value) (Value, error)
- func BYTE_LENGTH(v []byte) (Value, error)
- func CAST(expr Value, fromType, toType *Type, isSafeCast bool) (Value, error)
- func CEIL(x Value) (Value, error)
- func CHAR_LENGTH(v []byte) (Value, error)
- func CHR(v int64) (Value, error)
- func COALESCE(args ...Value) (Value, error)
- func CODE_POINTS_TO_BYTES(v *ArrayValue) (Value, error)
- func CODE_POINTS_TO_STRING(v *ArrayValue) (Value, error)
- func COLLATE(v, spec string) (Value, error)
- func CONCAT(args ...Value) (Value, error)
- func CONTAINS_SUBSTR(value string, search string) (Value, error)
- func COS(x Value) (Value, error)
- func COSH(x Value) (Value, error)
- func CURRENT_DATE(zone string) (Value, error)
- func CURRENT_DATETIME(zone string) (Value, error)
- func CURRENT_DATETIME_WITH_TIME(v time.Time) (Value, error)
- func CURRENT_DATE_WITH_TIME(v time.Time) (Value, error)
- func CURRENT_TIME(zone string) (Value, error)
- func CURRENT_TIMESTAMP(zone string) (Value, error)
- func CURRENT_TIMESTAMP_WITH_TIME(v time.Time) (Value, error)
- func CURRENT_TIME_WITH_TIME(v time.Time) (Value, error)
- func CastValue(t types.Type, v Value) (Value, error)
- func DATE(args ...Value) (Value, error)
- func DATETIME(args ...Value) (Value, error)
- func DATETIME_ADD(t time.Time, v int64, part string) (Value, error)
- func DATETIME_DIFF(a, b time.Time, part string) (Value, error)
- func DATETIME_SUB(t time.Time, v int64, part string) (Value, error)
- func DATETIME_TRUNC(t time.Time, part string) (Value, error)
- func DATE_ADD(t time.Time, v int64, part string) (Value, error)
- func DATE_DIFF(a, b time.Time, part string) (Value, error)
- func DATE_FROM_UNIX_DATE(unixdate int64) (Value, error)
- func DATE_SUB(t time.Time, v int64, part string) (Value, error)
- func DATE_TRUNC(t time.Time, part string) (Value, error)
- func DISTINCT() (Value, error)
- func DIV(x, y Value) (Value, error)
- func DecodeValue(v driver.Value) (Value, error)
- func ENDS_WITH(value, ends Value) (Value, error)
- func EQ(a, b Value) (Value, error)
- func EVAL_JAVASCRIPT(code string, retType *Type, argNames []string, args []Value) (Value, error)
- func EXP(x Value) (Value, error)
- func EXTRACT(v Value, part, zone string) (Value, error)
- func FARM_FINGERPRINT(v []byte) (Value, error)
- func FLOOR(x Value) (Value, error)
- func FORMAT(format string, args ...Value) (Value, error)
- func FORMAT_DATE(format string, t time.Time) (Value, error)
- func FORMAT_DATETIME(format string, t time.Time) (Value, error)
- func FORMAT_TIME(format string, t time.Time) (Value, error)
- func FORMAT_TIMESTAMP(format string, t time.Time, zone string) (Value, error)
- func FROM_BASE32(v string) (Value, error)
- func FROM_BASE64(v string) (Value, error)
- func FROM_HEX(v string) (Value, error)
- func GENERATE_ARRAY(start, end Value, step ...Value) (Value, error)
- func GENERATE_DATE_ARRAY(start, end Value, step ...Value) (Value, error)
- func GENERATE_TIMESTAMP_ARRAY(start, end Value, step int64, part string) (Value, error)
- func GENERATE_UUID() (Value, error)
- func GREATEST(args ...Value) (Value, error)
- func GT(a, b Value) (Value, error)
- func GTE(a, b Value) (Value, error)
- func HLL_COUNT_EXTRACT(sketch []byte) (Value, error)
- func IEEE_DIVIDE(x, y Value) (Value, error)
- func IF(cond, trueV, falseV Value) (Value, error)
- func IFNULL(expr, nullResult Value) (Value, error)
- func IGNORE_NULLS() (Value, error)
- func IN(a Value, values ...Value) (Value, error)
- func INITCAP(value string, delimiters []rune) (Value, error)
- func INSTR(source, search Value, position, occurrence int64) (Value, error)
- func INTERVAL(value int64, part string) (Value, error)
- func IS_DISTINCT_FROM(a, b Value) (Value, error)
- func IS_FALSE(a Value) (Value, error)
- func IS_INF(a Value) (Value, error)
- func IS_NAN(a Value) (Value, error)
- func IS_NOT_DISTINCT_FROM(a, b Value) (Value, error)
- func IS_NULL(a Value) (Value, error)
- func IS_TRUE(a Value) (Value, error)
- func JSON_EXTRACT(v, path string) (Value, error)
- func JSON_EXTRACT_ARRAY(v, path string) (Value, error)
- func JSON_EXTRACT_SCALAR(v, path string) (Value, error)
- func JSON_EXTRACT_STRING_ARRAY(v, path string) (Value, error)
- func JSON_FIELD(v, fieldName string) (Value, error)
- func JSON_QUERY(v, path string) (Value, error)
- func JSON_QUERY_ARRAY(v, path string) (Value, error)
- func JSON_SUBSCRIPT(v string, field Value) (Value, error)
- func JSON_TYPE(v JsonValue) (Value, error)
- func JSON_VALUE(v, path string) (Value, error)
- func JSON_VALUE_ARRAY(v, path string) (Value, error)
- func JUSTIFY_DAYS(v *IntervalValue) (Value, error)
- func JUSTIFY_HOURS(v *IntervalValue) (Value, error)
- func JUSTIFY_INTERVAL(v *IntervalValue) (Value, error)
- func LAST_DAY(t time.Time, part string) (Value, error)
- func LEAST(args ...Value) (Value, error)
- func LEFT(v Value, length int64) (Value, error)
- func LENGTH(v Value) (Value, error)
- func LIKE(a, b Value) (Value, error)
- func LIMIT(limit int64) (Value, error)
- func LN(x Value) (Value, error)
- func LOG(x, y Value) (Value, error)
- func LOG10(x Value) (Value, error)
- func LOWER(v Value) (Value, error)
- func LPAD(originalValue Value, returnLength int64, pattern Value) (Value, error)
- func LT(a, b Value) (Value, error)
- func LTE(a, b Value) (Value, error)
- func LTRIM(v Value, cutsetV Value) (Value, error)
- func MAKE_ARRAY(args ...Value) (Value, error)
- func MAKE_INTERVAL(year, month, day, hour, minute, second int64) (Value, error)
- func MAKE_STRUCT(args ...Value) (Value, error)
- func MD5(v []byte) (Value, error)
- func MOD(x, y Value) (Value, error)
- func MUL(a, b Value) (Value, error)
- func NET_HOST(v string) (Value, error)
- func NET_IPV4_FROM_INT64(v int64) (Value, error)
- func NET_IPV4_TO_INT64(v []byte) (Value, error)
- func NET_IP_FROM_STRING(v string) (Value, error)
- func NET_IP_NET_MASK(output, prefix int64) (Value, error)
- func NET_IP_TO_STRING(v []byte) (Value, error)
- func NET_IP_TRUNC(v []byte, length int64) (Value, error)
- func NET_PUBLIC_SUFFIX(v string) (Value, error)
- func NET_REG_DOMAIN(v string) (Value, error)
- func NET_SAFE_IP_FROM_STRING(v string) (Value, error)
- func NORMALIZE(v, mode string) (Value, error)
- func NORMALIZE_AND_CASEFOLD(v, mode string) (Value, error)
- func NOT(a Value) (Value, error)
- func NOT_EQ(a, b Value) (Value, error)
- func NULLIF(expr, exprToMatch Value) (Value, error)
- func OP_DIV(a, b Value) (Value, error)
- func OR(args ...Value) (Value, error)
- func ORDER_BY(value Value, isAsc bool) (Value, error)
- func PARSE_BIGNUMERIC(numeric string) (Value, error)
- func PARSE_DATE(format, date string) (Value, error)
- func PARSE_DATETIME(format, date string) (Value, error)
- func PARSE_JSON(expr, mode string) (Value, error)
- func PARSE_NUMERIC(numeric string) (Value, error)
- func PARSE_TIME(format, date string) (Value, error)
- func PARSE_TIMESTAMP(format, date string) (Value, error)
- func PARSE_TIMESTAMP_WITH_TIMEZONE(format, date, zone string) (Value, error)
- func POW(x, y Value) (Value, error)
- func RAND() (Value, error)
- func RANGE_BUCKET(point Value, array *ArrayValue) (Value, error)
- func REGEXP_CONTAINS(value, expr string) (Value, error)
- func REGEXP_EXTRACT(value Value, expr string, position, occurrence int64) (Value, error)
- func REGEXP_EXTRACT_ALL(value Value, expr string) (Value, error)
- func REGEXP_INSTR(sourceValue, exprValue Value, position, occurrence, occurrencePos int64) (Value, error)
- func REGEXP_REPLACE(value, exprValue, replacementValue Value) (Value, error)
- func REPEAT(originalValue Value, repetitions int64) (Value, error)
- func REPLACE(originalValue, fromValue, toValue Value) (Value, error)
- func REVERSE(value Value) (Value, error)
- func RIGHT(value Value, length int64) (Value, error)
- func ROUND(x Value, precision int) (Value, error)
- func RPAD(originalValue Value, returnLength int64, pattern Value) (Value, error)
- func RTRIM(v Value, cutsetV Value) (Value, error)
- func SAFE_ADD(x, y Value) (Value, error)
- func SAFE_CONVERT_BYTES_TO_STRING(value []byte) (Value, error)
- func SAFE_DIVIDE(x, y Value) (Value, error)
- func SAFE_MULTIPLY(x, y Value) (Value, error)
- func SAFE_NEGATE(x Value) (Value, error)
- func SAFE_SUBTRACT(x, y Value) (Value, error)
- func SESSION_USER() (Value, error)
- func SHA1(v []byte) (Value, error)
- func SHA256(v []byte) (Value, error)
- func SHA512(v []byte) (Value, error)
- func SIGN(a Value) (Value, error)
- func SIN(x Value) (Value, error)
- func SINH(x Value) (Value, error)
- func SOUNDEX(value string) (Value, error)
- func SPLIT(value, delimValue Value) (Value, error)
- func SQRT(x Value) (Value, error)
- func STARTS_WITH(value, starts Value) (Value, error)
- func STRING(t time.Time, zone string) (Value, error)
- func STRPOS(value, search Value) (Value, error)
- func STRUCT_FIELD(v Value, idx int) (Value, error)
- func SUB(a, b Value) (Value, error)
- func SUBSTR(value Value, pos int64, length *int64) (Value, error)
- func TAN(x Value) (Value, error)
- func TANH(x Value) (Value, error)
- func TIME(args ...Value) (Value, error)
- func TIMESTAMP(v Value, zone string) (Value, error)
- func TIMESTAMP_ADD(t time.Time, v int64, part string) (Value, error)
- func TIMESTAMP_DIFF(a, b time.Time, part string) (Value, error)
- func TIMESTAMP_MICROS(sec int64) (Value, error)
- func TIMESTAMP_MILLIS(sec int64) (Value, error)
- func TIMESTAMP_SECONDS(sec int64) (Value, error)
- func TIMESTAMP_SUB(t time.Time, v int64, part string) (Value, error)
- func TIMESTAMP_TRUNC(t time.Time, part, zone string) (Value, error)
- func TIME_ADD(t time.Time, v int64, part string) (Value, error)
- func TIME_DIFF(a, b time.Time, part string) (Value, error)
- func TIME_SUB(t time.Time, v int64, part string) (Value, error)
- func TIME_TRUNC(t time.Time, part string) (Value, error)
- func TO_BASE32(v []byte) (Value, error)
- func TO_BASE64(v []byte) (Value, error)
- func TO_CODE_POINTS(v Value) (Value, error)
- func TO_HEX(v []byte) (Value, error)
- func TO_JSON(v Value, stringifyWideNumbers bool) (Value, error)
- func TO_JSON_STRING(v Value, prettyPrint bool) (Value, error)
- func TRANSLATE(expr, source, target Value) (Value, error)
- func TRIM(v Value, cutsetV Value) (Value, error)
- func TRUNC(x Value) (Value, error)
- func UNICODE(v string) (Value, error)
- func UNIX_DATE(t time.Time) (Value, error)
- func UNIX_MICROS(t time.Time) (Value, error)
- func UNIX_MILLIS(t time.Time) (Value, error)
- func UNIX_SECONDS(t time.Time) (Value, error)
- func UPPER(v Value) (Value, error)
- func ValueFromGoValue(v interface{}) (Value, error)
- func ValueFromZetaSQLValue(v types.Value) (Value, error)
- type ValueLayout
- type ValueType
- type WINDOW_ANY_VALUE
- type WINDOW_ARRAY_AGG
- type WINDOW_AVG
- type WINDOW_CORR
- type WINDOW_COUNT
- type WINDOW_COUNTIF
- type WINDOW_COUNT_STAR
- type WINDOW_COVAR_POP
- type WINDOW_COVAR_SAMP
- type WINDOW_CUME_DIST
- type WINDOW_DENSE_RANK
- type WINDOW_FIRST_VALUE
- type WINDOW_LAG
- type WINDOW_LAST_VALUE
- type WINDOW_LEAD
- type WINDOW_LOGICAL_AND
- type WINDOW_LOGICAL_OR
- type WINDOW_MAX
- type WINDOW_MIN
- type WINDOW_NTH_VALUE
- type WINDOW_NTILE
- func (f *WINDOW_NTILE) Done(agg *WindowFuncAggregatedStatus) (Value, error)
- func (f *WINDOW_NTILE) Inverse(values []Value, agg *WindowFuncAggregatedStatus) error
- func (f *WINDOW_NTILE) ParseArguments(args []Value) error
- func (f *WINDOW_NTILE) Step(values []Value, agg *WindowFuncAggregatedStatus) error
- type WINDOW_PERCENTILE_CONT
- type WINDOW_PERCENTILE_DISC
- type WINDOW_PERCENT_RANK
- type WINDOW_RANK
- type WINDOW_ROW_NUMBER
- type WINDOW_STDDEV
- type WINDOW_STDDEV_POP
- type WINDOW_STDDEV_SAMP
- type WINDOW_STRING_AGG
- type WINDOW_SUM
- type WINDOW_VARIANCE
- type WINDOW_VAR_POP
- type WINDOW_VAR_SAMP
- type WhenClause
- type WhenClauseData
- type WildcardTable
- func (t *WildcardTable) AnonymizationInfo() *types.AnonymizationInfo
- func (t *WildcardTable) Column(idx int) types.Column
- func (t *WildcardTable) CreateEvaluatorTableIterator(columnIdxs []int) (*types.EvaluatorTableIterator, error)
- func (t *WildcardTable) FindColumnByName(name string) types.Column
- func (t *WildcardTable) FormatSQL(ctx context.Context) (string, error)
- func (t *WildcardTable) FullName() string
- func (t *WildcardTable) IsValueTable() bool
- func (t *WildcardTable) Name() string
- func (t *WildcardTable) NumColumns() int
- func (t *WildcardTable) PrimaryKey() []int
- func (t *WildcardTable) SerializationID() int64
- func (t *WildcardTable) SupportsAnonymization() bool
- func (t *WildcardTable) TableTypeName(mode types.ProductMode) string
- type WindowAggregator
- func (a *WindowAggregator) Final(ctx *sqlite.FunctionContext)
- func (a *WindowAggregator) Step(ctx *sqlite.FunctionContext, stepArgs []driver.Value) error
- func (a *WindowAggregator) WindowInverse(ctx *sqlite.FunctionContext, stepArgs []driver.Value) error
- func (a *WindowAggregator) WindowValue(ctx *sqlite.FunctionContext) (driver.Value, error)
- type WindowAggregatorMinimumImpl
- type WindowAggregatorWithArgumentParser
- type WindowBindFunction
- type WindowFuncAggregatedStatus
- func (s *WindowFuncAggregatedStatus) Distinct() bool
- func (s *WindowFuncAggregatedStatus) IgnoreNulls() bool
- func (s *WindowFuncAggregatedStatus) Inverse(value Value) error
- func (s *WindowFuncAggregatedStatus) RelevantValues() ([]Value, error)
- func (s *WindowFuncAggregatedStatus) Step(value Value) error
- type WindowFuncInfo
- type WindowSpecification
- type WindowSpecificationData
- type WithClause
- type WithEntryData
- type WithEntryTransformer
- type WithRefScanData
- type WithRefScanTransformer
- type WithScanData
- type WithScanTransformer
Constants ¶
const MERGED_TABLE = "zetasqlite_merged_table"
const NullStatmentActionQuery = "SELECT 'unsupported statement';"
Variables ¶
var NodeKindToScopeBehavior = map[ast.Kind]ScopeBehavior{
    ast.TableScan:            ScopeOpener,
    ast.ArrayScan:            ScopeOpener,
    ast.TVFScan:              ScopeOpener,
    ast.RelationArgumentScan: ScopeOpener,
    ast.ProjectScan:          ScopeFilter,
    ast.AggregateScan:        ScopeFilter,
    ast.AnalyticScan:         ScopeFilter,
    ast.SetOperationScan:     ScopeFilter,
    ast.FilterScan:           ScopePassthrough,
    ast.OrderByScan:          ScopePassthrough,
    ast.LimitOffsetScan:      ScopePassthrough,
    ast.SampleScan:           ScopePassthrough,
    ast.SingleRowScan:        ScopePassthrough,
    ast.JoinScan:             ScopeMerger,
    ast.RecursiveScan:        ScopeMerger,
    ast.WithScan:             ScopeTransformer,
    ast.WithRefScan:          ScopeTransformer,
    ast.SubqueryExpr:         ScopeTransformer,
    ast.PivotScan:            ScopeTransformer,
    ast.UnpivotScan:          ScopeTransformer,
}
NodeKindToScopeBehavior maps ZetaSQL resolved node kinds to their column scope behavior.
This map provides a quick lookup for determining how any scan node handles column scopes during AST traversal. Use this for:
- Validating column resolution during visitor implementation
- Planning SQL generation strategies for different node types
- Debugging column availability issues in complex queries
- Understanding data flow through querybuilder trees
Usage:
behavior := NodeKindToScopeBehavior[RESOLVED_PROJECT_SCAN]
if behavior == ScopeFilter {
    // Handle column restriction logic
}
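The IsScope* predicates listed in the index cover the same classification; a minimal dispatch sketch (illustrative only, assumed to sit inside this package, where ast is the resolved-AST alias used in the signatures above):

func describeScopeHandling(kind ast.Kind) string {
    switch {
    case IsScopeOpener(kind):
        return "creates new columns"
    case IsScopeFilter(kind):
        return "restricts the available columns"
    case IsScopeMerger(kind):
        return "merges columns from multiple inputs"
    case IsScopePassthrough(kind):
        return "passes input columns through unchanged"
    case IsScopeTransformer(kind):
        return "uses node-specific scope handling"
    default:
        return "unknown scope behavior"
    }
}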
var WeekPartToOffset = map[string]int{
"WEEK": 0,
"WEEK_MONDAY": 1,
"WEEK_TUESDAY": 2,
"WEEK_WEDNESDAY": 3,
"WEEK_THURSDAY": 4,
"WEEK_FRIDAY": 5,
"WEEK_SATURDAY": 6,
}
Functions ¶
func EncodeGoValue ¶
func EncodeGoValues ¶
func EncodeGoValues(v []interface{}, params []*ast.ParameterNode) ([]interface{}, error)
func EncodeNamedValues ¶
func EncodeNamedValues(v []driver.NamedValue, params []*ast.ParameterNode) ([]sql.NamedArg, error)
func EncodeValue ¶
func GetNodesByBehavior ¶
func GetNodesByBehavior(behavior ScopeBehavior) []ast.Kind
GetNodesByBehavior returns all node kinds that exhibit the specified scope behavior
func GetUniqueColumnName ¶
func IsScopeFilter ¶
IsScopeFilter returns true if the node kind removes/transforms columns
func IsScopeMerger ¶
IsScopeMerger returns true if the node kind combines columns from multiple sources
func IsScopeOpener ¶
IsScopeOpener returns true if the node kind creates/produces new columns
func IsScopePassthrough ¶
IsScopePassthrough returns true if the node kind preserves input columns exactly
func IsScopeTransformer ¶
IsScopeTransformer returns true if the node kind has special column handling
func LiteralFromValue ¶
func RegisterFunctions ¶
func RegisterFunctions() error
func ValidateColumnFlow ¶
ValidateColumnFlow validates that column flow follows ZetaSQL scope rules.
This function can be used during AST traversal to ensure that:
- Scope openers properly introduce new columns
- Scope filters correctly restrict available columns
- Scope passthrough preserves column identity
- Scope mergers properly combine column sets
- Scope transformers handle special cases correctly
Parameters:
nodeKind: The resolved node kind (e.g., "RESOLVED_PROJECT_SCAN")
inputColumns: Column IDs available from input scan(s)
outputColumns: Column IDs produced by this scan
Returns error if column flow violates ZetaSQL scope rules.
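A minimal usage sketch; the exact signature is not shown in this index, so the parameter types used here (a string node kind and []int column ID slices) are assumptions:

// inputIDs and outputIDs are hypothetical slices of column IDs gathered during traversal.
if err := ValidateColumnFlow("RESOLVED_PROJECT_SCAN", inputIDs, outputIDs); err != nil {
    return fmt.Errorf("column flow violation: %w", err)
}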
Types ¶
type APPROX_COUNT_DISTINCT ¶
type APPROX_COUNT_DISTINCT struct {
// contains filtered or unexported fields
}
func (*APPROX_COUNT_DISTINCT) Done ¶
func (f *APPROX_COUNT_DISTINCT) Done() (Value, error)
func (*APPROX_COUNT_DISTINCT) Step ¶
func (f *APPROX_COUNT_DISTINCT) Step(v Value, opt *AggregatorOption) error
type APPROX_QUANTILES ¶
type APPROX_QUANTILES struct {
// contains filtered or unexported fields
}
func (*APPROX_QUANTILES) Done ¶
func (f *APPROX_QUANTILES) Done() (Value, error)
func (*APPROX_QUANTILES) Step ¶
func (f *APPROX_QUANTILES) Step(v Value, num int64, opt *AggregatorOption) error
type APPROX_TOP_COUNT ¶
type APPROX_TOP_COUNT struct {
// contains filtered or unexported fields
}
func (*APPROX_TOP_COUNT) Done ¶
func (f *APPROX_TOP_COUNT) Done() (Value, error)
func (*APPROX_TOP_COUNT) Step ¶
func (f *APPROX_TOP_COUNT) Step(v Value, num int64, opt *AggregatorOption) error
type APPROX_TOP_SUM ¶
type APPROX_TOP_SUM struct {
// contains filtered or unexported fields
}
func (*APPROX_TOP_SUM) Done ¶
func (f *APPROX_TOP_SUM) Done() (Value, error)
func (*APPROX_TOP_SUM) Step ¶
func (f *APPROX_TOP_SUM) Step(v, weight Value, num int64, opt *AggregatorOption) error
type ARRAY_CONCAT_AGG ¶
type ARRAY_CONCAT_AGG struct {
// contains filtered or unexported fields
}
func (*ARRAY_CONCAT_AGG) Done ¶
func (f *ARRAY_CONCAT_AGG) Done() (Value, error)
func (*ARRAY_CONCAT_AGG) Step ¶
func (f *ARRAY_CONCAT_AGG) Step(v *ArrayValue, opt *AggregatorOption) error
type AggregateBindFunction ¶
type AggregateBindFunction func() func(ctx sqlite.FunctionContext) (sqlite.AggregateFunction, error)
type AggregateFuncInfo ¶
type AggregateFuncInfo struct { Name string BindFunc AggregateBindFunction }
type AggregateNameAndFunc ¶
type AggregateNameAndFunc struct {
    Name string
    MakeAggregate func(ctx sqlite.FunctionContext) (sqlite.AggregateFunction, error)
}
type AggregateOrderBy ¶
func (*AggregateOrderBy) UnmarshalJSON ¶
func (a *AggregateOrderBy) UnmarshalJSON(b []byte) error
type AggregateScanData ¶
type AggregateScanData struct {
    InputScan ScanData `json:"input_scan,omitempty"`
    GroupByList []*ComputedColumnData `json:"group_by_list,omitempty"`
    AggregateList []*ComputedColumnData `json:"aggregate_list,omitempty"`
    GroupingSets []*GroupingSetData `json:"grouping_sets,omitempty"`
}
AggregateScanData represents aggregate operation data
type AggregateScanTransformer ¶
type AggregateScanTransformer struct {
// contains filtered or unexported fields
}
AggregateScanTransformer handles aggregate operation transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, aggregate scans represent GROUP BY operations with aggregate functions like SUM, COUNT, AVG, etc. This includes complex features like ROLLUP, CUBE, and GROUPING SETS that create multiple levels of aggregation in a single query.
The transformer converts ZetaSQL AggregateScan nodes by:
- Transforming the input scan that provides data for aggregation
- Converting aggregate expressions (SUM, COUNT, etc.) with zetasqlite function wrappers
- Processing GROUP BY expressions with proper ZetaSQL semantics
- Handling ROLLUP and GROUPING SETS via UNION ALL of different grouping levels (sketched below)
- Managing NULL values for rollup totals and subtotals
Key challenges:
- ROLLUP generates multiple grouping levels (detail, subtotals, grand total)
- Grouping columns become NULL in higher aggregation levels
- Preserving ZetaSQL's grouping and aggregation semantics in SQLite
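For illustration only, the UNION ALL strategy mentioned above expands a ROLLUP over (region, product) into one branch per grouping level, with NULLs standing in for the rolled-up columns; the table and column names here are hypothetical, not the transformer's literal output:

// Hypothetical sketch of the ROLLUP-to-UNION ALL expansion described above.
const exampleRollupSQL = `
SELECT region, product, SUM(amount) AS total FROM sales GROUP BY region, product
UNION ALL
SELECT region, NULL, SUM(amount) AS total FROM sales GROUP BY region
UNION ALL
SELECT NULL, NULL, SUM(amount) AS total FROM sales
`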
func NewAggregateScanTransformer ¶
func NewAggregateScanTransformer(coordinator Coordinator) *AggregateScanTransformer
NewAggregateScanTransformer creates a new aggregate scan transformer
func (*AggregateScanTransformer) Transform ¶
func (t *AggregateScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts AggregateScanData to FromItem with SELECT statement containing aggregation
type Aggregator ¶
type Aggregator struct {
// contains filtered or unexported fields
}
func (*Aggregator) Final ¶
func (a *Aggregator) Final(ctx *sqlite.FunctionContext)
func (*Aggregator) Step ¶
func (a *Aggregator) Step(ctx *sqlite.FunctionContext, stepArgs []driver.Value) error
func (*Aggregator) WindowInverse ¶
func (a *Aggregator) WindowInverse(ctx *sqlite.FunctionContext, rowArgs []driver.Value) error
WindowInverse is called to remove the oldest presently aggregated result of Step from the current window. The arguments are those passed to Step for the row being removed. The argument Values are not valid past the return of the function.
func (*Aggregator) WindowValue ¶
func (a *Aggregator) WindowValue(ctx *sqlite.FunctionContext) (driver.Value, error)
type AggregatorFuncOption ¶
type AggregatorFuncOption struct { Type AggregatorFuncOptionType `json:"type"` Value interface{} `json:"value"` }
func (*AggregatorFuncOption) UnmarshalJSON ¶
func (o *AggregatorFuncOption) UnmarshalJSON(b []byte) error
type AggregatorFuncOptionType ¶
type AggregatorFuncOptionType string
const (
    AggregatorFuncOptionUnknown AggregatorFuncOptionType = "aggregate_unknown"
    AggregatorFuncOptionDistinct AggregatorFuncOptionType = "aggregate_distinct"
    AggregatorFuncOptionLimit AggregatorFuncOptionType = "aggregate_limit"
    AggregatorFuncOptionOrderBy AggregatorFuncOptionType = "aggregate_order_by"
    AggregatorFuncOptionIgnoreNulls AggregatorFuncOptionType = "aggregate_ignore_nulls"
)
type AggregatorOption ¶
type AggregatorOption struct {
    Distinct bool
    IgnoreNulls bool
    Limit *int64
    OrderBy []*AggregateOrderBy
}
type AliasGenerator ¶
type AliasGenerator struct {
// contains filtered or unexported fields
}
AliasGenerator creates unique aliases for tables and columns
func NewAliasGenerator ¶
func NewAliasGenerator() *AliasGenerator
func (*AliasGenerator) GenerateSubqueryAlias ¶
func (ag *AliasGenerator) GenerateSubqueryAlias() string
func (*AliasGenerator) GenerateTableAlias ¶
func (ag *AliasGenerator) GenerateTableAlias() string
type AnalyticScanData ¶
type AnalyticScanData struct {
    InputScan ScanData `json:"input_scan,omitempty"` // The nested scan providing input
    FunctionList []*ComputedColumnData `json:"function_list,omitempty"` // List of analytic function calls
}
AnalyticScanData represents analytic (window function) scan operation data
type AnalyticScanTransformer ¶
type AnalyticScanTransformer struct {
// contains filtered or unexported fields
}
AnalyticScanTransformer handles analytic scan (window function) transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, analytic scans represent window functions that compute values over a set of rows related to the current row. This includes functions like ROW_NUMBER(), RANK(), LAG(), LEAD(), SUM() OVER(), etc. with PARTITION BY and ORDER BY clauses.
The transformer converts ZetaSQL AnalyticScan nodes by:
- Recursively transforming the input scan that provides the base data
- Pre-transforming all window function expressions before column registration
- Creating the SELECT list with both passthrough columns and computed window functions
- Extracting ORDER BY clauses from window specifications for proper result ordering
- Ensuring proper column qualification and fragment context management
Window functions require careful ordering to ensure correct evaluation, which is preserved through ORDER BY clauses derived from the PARTITION BY and ORDER BY specifications in the window function definitions.
func NewAnalyticScanTransformer ¶
func NewAnalyticScanTransformer(coordinator Coordinator) *AnalyticScanTransformer
NewAnalyticScanTransformer creates a new analytic scan transformer
func (*AnalyticScanTransformer) Transform ¶
func (t *AnalyticScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts AnalyticScanData to a FromItem representing window function operations
type Analyzer ¶
type Analyzer struct {
// contains filtered or unexported fields
}
func NewAnalyzer ¶
func (*Analyzer) AddNamePath ¶
func (*Analyzer) Analyze ¶
func (a *Analyzer) Analyze(ctx context.Context, conn *Conn, query string, args []driver.NamedValue) ([]StmtActionFunc, error)
func (*Analyzer) MaxNamePath ¶
func (*Analyzer) SetAutoIndexMode ¶
func (*Analyzer) SetExplainMode ¶
func (*Analyzer) SetMaxNamePath ¶
func (*Analyzer) SetNamePath ¶
type ArgumentInfo ¶
type ArgumentInfo struct { Name string `json:"name,omitempty"` Type types.Type `json:"type,omitempty"` }
ArgumentInfo represents function argument metadata
type ArrayScanData ¶
type ArrayScanData struct {
    InputScan *ScanData `json:"input_scan,omitempty"` // Optional input scan for correlated arrays
    ArrayExpr ExpressionData `json:"array_expr,omitempty"` // Array expression to UNNEST
    ElementColumn *ColumnData `json:"element_column,omitempty"` // Column for array elements
    ArrayOffsetColumn *ColumnData `json:"array_offset_column,omitempty"` // Optional column for array indices
    IsOuter bool `json:"is_outer,omitempty"` // Whether to use LEFT JOIN (true) or INNER JOIN (false)
    JoinExpr *ExpressionData `json:"join_expr,omitempty"` // Optional join condition
}
ArrayScanData represents array scan (UNNEST) operation data
type ArrayScanTransformer ¶
type ArrayScanTransformer struct {
// contains filtered or unexported fields
}
ArrayScanTransformer handles array scan (UNNEST operations) transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, array scans represent UNNEST operations that flatten array values into individual rows. This enables queries to iterate over array elements as if they were rows in a table, with optional position/offset information and join conditions.
The transformer converts ZetaSQL ArrayScan nodes by:
- Transforming array expressions through the coordinator
- Using SQLite's json_each() table function with zetasqlite_decode_array() for UNNEST
- Handling correlated arrays with proper JOIN semantics (INNER vs LEFT)
- Managing element and offset column availability in the fragment context
- Supporting both standalone UNNEST and UNNEST with input scans
The json_each() approach provides 'key' (offset) and 'value' (element) columns that map to ZetaSQL's array element and offset semantics in SQLite.
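As a rough sketch of that shape (the aliases and the bound-parameter placeholder are hypothetical, not the transformer's literal output):

// Hypothetical sketch of the json_each()-based UNNEST described above:
// 'value' carries the array element, 'key' carries its offset.
const exampleUnnestSQL = `
SELECT elems.value AS element, elems.key AS offset
FROM json_each(zetasqlite_decode_array(?)) AS elems
`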
func NewArrayScanTransformer ¶
func NewArrayScanTransformer(coordinator Coordinator) *ArrayScanTransformer
NewArrayScanTransformer creates a new ArrayScanTransformer
func (*ArrayScanTransformer) Transform ¶
func (t *ArrayScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts ArrayScanData to a FromItem representing UNNEST operation
type ArrayValue ¶
type ArrayValue struct {
// contains filtered or unexported fields
}
func (*ArrayValue) Format ¶
func (av *ArrayValue) Format(verb rune) string
func (*ArrayValue) Interface ¶
func (av *ArrayValue) Interface() interface{}
func (*ArrayValue) ToArray ¶
func (av *ArrayValue) ToArray() (*ArrayValue, error)
func (*ArrayValue) ToBool ¶
func (av *ArrayValue) ToBool() (bool, error)
func (*ArrayValue) ToBytes ¶
func (av *ArrayValue) ToBytes() ([]byte, error)
func (*ArrayValue) ToFloat64 ¶
func (av *ArrayValue) ToFloat64() (float64, error)
func (*ArrayValue) ToInt64 ¶
func (av *ArrayValue) ToInt64() (int64, error)
func (*ArrayValue) ToJSON ¶
func (av *ArrayValue) ToJSON() (string, error)
func (*ArrayValue) ToString ¶
func (av *ArrayValue) ToString() (string, error)
func (*ArrayValue) ToStruct ¶
func (av *ArrayValue) ToStruct() (*StructValue, error)
type BIT_AND_AGG ¶
type BIT_AND_AGG struct {
// contains filtered or unexported fields
}
func (*BIT_AND_AGG) Done ¶
func (f *BIT_AND_AGG) Done() (Value, error)
func (*BIT_AND_AGG) Step ¶
func (f *BIT_AND_AGG) Step(v Value, opt *AggregatorOption) error
type BIT_OR_AGG ¶
type BIT_OR_AGG struct {
// contains filtered or unexported fields
}
func (*BIT_OR_AGG) Done ¶
func (f *BIT_OR_AGG) Done() (Value, error)
func (*BIT_OR_AGG) Step ¶
func (f *BIT_OR_AGG) Step(v Value, opt *AggregatorOption) error
type BIT_XOR_AGG ¶
type BIT_XOR_AGG struct {
// contains filtered or unexported fields
}
func (*BIT_XOR_AGG) Done ¶
func (f *BIT_XOR_AGG) Done() (Value, error)
func (*BIT_XOR_AGG) Step ¶
func (f *BIT_XOR_AGG) Step(v Value, opt *AggregatorOption) error
type BeginStmtAction ¶
type BeginStmtAction struct{}
func (*BeginStmtAction) Args ¶
func (a *BeginStmtAction) Args() []interface{}
func (*BeginStmtAction) Cleanup ¶
func (a *BeginStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*BeginStmtAction) ExecContext ¶
func (*BeginStmtAction) QueryContext ¶
type BinaryExpression ¶
type BinaryExpression struct {
    Left *SQLExpression
    Right *SQLExpression
    Operator string
}
func (*BinaryExpression) String ¶
func (e *BinaryExpression) String() string
func (*BinaryExpression) WriteDebugString ¶
func (e *BinaryExpression) WriteDebugString(writer *SQLWriter, prefix string)
func (*BinaryExpression) WriteSql ¶
func (e *BinaryExpression) WriteSql(writer *SQLWriter) error
type BinaryExpressionData ¶
type BinaryExpressionData struct {
    Left ExpressionData `json:"left,omitempty"`
    Operator string `json:"operator,omitempty"`
    Right ExpressionData `json:"right,omitempty"`
}
BinaryExpressionData represents binary operation data
type BindFunction ¶
type BoolValue ¶
type BoolValue bool
func (BoolValue) ToArray ¶
func (bv BoolValue) ToArray() (*ArrayValue, error)
func (BoolValue) ToStruct ¶
func (bv BoolValue) ToStruct() (*StructValue, error)
type BytesValue ¶
type BytesValue []byte
func (BytesValue) Format ¶
func (bv BytesValue) Format(verb rune) string
func (BytesValue) Interface ¶
func (bv BytesValue) Interface() interface{}
func (BytesValue) ToArray ¶
func (bv BytesValue) ToArray() (*ArrayValue, error)
func (BytesValue) ToBool ¶
func (bv BytesValue) ToBool() (bool, error)
func (BytesValue) ToBytes ¶
func (bv BytesValue) ToBytes() ([]byte, error)
func (BytesValue) ToFloat64 ¶
func (bv BytesValue) ToFloat64() (float64, error)
func (BytesValue) ToInt64 ¶
func (bv BytesValue) ToInt64() (int64, error)
func (BytesValue) ToJSON ¶
func (bv BytesValue) ToJSON() (string, error)
func (BytesValue) ToString ¶
func (bv BytesValue) ToString() (string, error)
func (BytesValue) ToStruct ¶
func (bv BytesValue) ToStruct() (*StructValue, error)
type COUNT_STAR ¶
type COUNT_STAR struct {
// contains filtered or unexported fields
}
func (*COUNT_STAR) Done ¶
func (f *COUNT_STAR) Done() (Value, error)
func (*COUNT_STAR) Step ¶
func (f *COUNT_STAR) Step(opt *AggregatorOption) error
type COVAR_SAMP ¶
type COVAR_SAMP struct {
// contains filtered or unexported fields
}
func (*COVAR_SAMP) Done ¶
func (f *COVAR_SAMP) Done() (Value, error)
func (*COVAR_SAMP) Step ¶
func (f *COVAR_SAMP) Step(x, y Value, opt *AggregatorOption) error
type CaseExpression ¶
type CaseExpression struct {
    CaseExpr *SQLExpression // Optional expression after CASE (for CASE expr WHEN...)
    WhenClauses []*WhenClause // WHEN condition THEN result pairs
    ElseExpr *SQLExpression // Optional ELSE expression
}
CaseExpression represents SQL CASE expressions
func (*CaseExpression) String ¶
func (c *CaseExpression) String() string
func (*CaseExpression) WriteSql ¶
func (c *CaseExpression) WriteSql(writer *SQLWriter) error
WriteSql method for CaseExpression
type CaseExpressionData ¶
type CaseExpressionData struct {
    CaseExpr *ExpressionData `json:"case_expr,omitempty"` // Optional - for CASE expr WHEN...
    WhenClauses []*WhenClauseData `json:"when_clauses,omitempty"`
    ElseClause *ExpressionData `json:"else_clause,omitempty"`
}
CaseExpressionData represents CASE expression data
type CastData ¶
type CastData struct {
    Expression ExpressionData `json:"expression,omitempty"`
    FromType types.Type `json:"from_type,omitempty"`
    ToType types.Type `json:"to_type,omitempty"`
    SafeCast bool `json:"safe_cast,omitempty"`
    ReturnNullOnErr bool `json:"return_null_on_err,omitempty"`
}
CastData represents type casting data
type CastTransformer ¶
type CastTransformer struct {
// contains filtered or unexported fields
}
CastTransformer handles transformation of type casting operations from ZetaSQL to SQLite.
BigQuery/ZetaSQL has a rich type system with complex types (STRUCT, ARRAY, etc.) and sophisticated casting rules that differ significantly from SQLite's simpler type system. ZetaSQL supports both explicit CAST() operations and implicit type coercion.
The transformer converts ZetaSQL cast operations by:
- Recursively transforming the expression being cast
- Encoding source and target type information as JSON
- Using the zetasqlite_cast runtime function for complex type conversions
- Handling safe cast semantics (SAFE_CAST returns NULL on conversion failure)
The zetasqlite_cast function bridges the type system gap by implementing ZetaSQL's casting semantics in the SQLite runtime, preserving behavior for complex types and edge cases that SQLite's native CAST cannot handle.
func NewCastTransformer ¶
func NewCastTransformer(coordinator Coordinator) *CastTransformer
NewCastTransformer creates a new cast transformer
func (*CastTransformer) Transform ¶
func (t *CastTransformer) Transform(data ExpressionData, ctx TransformContext) (*SQLExpression, error)
Transform converts CastData to SQLExpression
type Catalog ¶
type Catalog struct {
// contains filtered or unexported fields
}
func (*Catalog) AddNewFunctionSpec ¶
func (*Catalog) AddNewTableSpec ¶
func (*Catalog) DeleteFunctionSpec ¶
func (*Catalog) DeleteTableSpec ¶
func (*Catalog) ExtendedTypeSuperTypes ¶
func (*Catalog) FindConnection ¶
func (c *Catalog) FindConnection(path []string) (types.Connection, error)
func (*Catalog) FindConstant ¶
func (*Catalog) FindConversion ¶
func (*Catalog) FindFunction ¶
func (*Catalog) FindProcedure ¶
func (*Catalog) FindTableValuedFunction ¶
func (c *Catalog) FindTableValuedFunction(path []string) (types.TableValuedFunction, error)
func (*Catalog) SuggestConstant ¶
func (*Catalog) SuggestFunction ¶
func (*Catalog) SuggestModel ¶
func (*Catalog) SuggestTable ¶
func (*Catalog) SuggestTableValuedFunction ¶
type CatalogSpecKind ¶
type CatalogSpecKind string
const (
    TableSpecKind CatalogSpecKind = "table"
    ViewSpecKind CatalogSpecKind = "view"
    FunctionSpecKind CatalogSpecKind = "function"
)
type ChangedCatalog ¶
type ChangedCatalog struct { Table *ChangedTable Function *ChangedFunction }
func (*ChangedCatalog) Changed ¶
func (c *ChangedCatalog) Changed() bool
type ChangedFunction ¶
type ChangedFunction struct { Added []*FunctionSpec Deleted []*FunctionSpec }
func (*ChangedFunction) Changed ¶
func (f *ChangedFunction) Changed() bool
type ChangedTable ¶
func (*ChangedTable) Changed ¶
func (t *ChangedTable) Changed() bool
type ColumnData ¶
type ColumnData struct {
    ID int `json:"id,omitempty"`
    Name string `json:"name,omitempty"`
    Type string `json:"type,omitempty"`
    TableName string `json:"table_name,omitempty"`
}
ColumnData represents extracted column information for JSON serialization
type ColumnDefinition ¶
type ColumnDefinition struct {
    Name string
    Type string
    NotNull bool
    DefaultValue *SQLExpression
    IsPrimaryKey bool
}
func (*ColumnDefinition) String ¶
func (c *ColumnDefinition) String() string
func (*ColumnDefinition) WriteSql ¶
func (c *ColumnDefinition) WriteSql(writer *SQLWriter) error
ColumnDefinition WriteSql implementation
type ColumnDefinitionData ¶
type ColumnDefinitionData struct {
    Name string `json:"name,omitempty"`
    Type string `json:"type,omitempty"`
    NotNull bool `json:"not_null,omitempty"`
    IsPrimaryKey bool `json:"is_primary_key,omitempty"`
    DefaultValue *ExpressionData `json:"default_value,omitempty"`
}
ColumnDefinitionData represents column definition data
type ColumnInfo ¶
type ColumnInfo struct {
    Name string
    Type string
    TableAlias string
    Expression *SQLExpression
    ID int
    IsAggregated bool
    ColumnID string `json:"column_id,omitempty"` // Full column identifier like "A.id#1"
}
ColumnInfo stores metadata about available columns
func (ColumnInfo) Clone ¶
func (i ColumnInfo) Clone() *ColumnInfo
type ColumnListProvider ¶
ColumnListProvider provides a common interface for AST nodes that contain column lists. This interface allows different node types to be treated uniformly when accessing their columns.
type ColumnMapping ¶
type ColumnMapping struct {
    SourceColumnMap map[*ColumnData]string // original column -> new column name for source table
    TargetColumnMap map[*ColumnData]string // original column -> new column name for target table
    AllColumnMap map[*ColumnData]string // all original column -> new column names
}
ColumnMapping represents the mapping between original and new column names
func (ColumnMapping) LookupName ¶
func (m ColumnMapping) LookupName(column *ColumnData) (string, bool)
type ColumnRefData ¶
type ColumnRefData struct {
    Column *ast.Column `json:"column,omitempty"`
    TableAlias string `json:"table_alias,omitempty"`
    ColumnName string `json:"column_name,omitempty"`
    ColumnID int `json:"column_id,omitempty"`
    TableName string `json:"table_name,omitempty"` // Original table name from AST
}
ColumnRefData represents column reference data
type ColumnRefTransformer ¶
type ColumnRefTransformer struct {
// contains filtered or unexported fields
}
ColumnRefTransformer handles transformation of column reference expressions from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, column references can appear in various contexts (SELECT lists, WHERE clauses, ORDER BY, etc.) and may need qualified names to resolve ambiguity in complex queries with joins, subqueries, or CTEs. The ZetaSQL analyzer resolves these references to specific column IDs.
The transformer converts ZetaSQL ColumnRef nodes into SQLite column references with:
- Proper qualification using table aliases when needed
- Column name resolution through fragment context
- ID-based lookup for disambiguation in complex nested queries
The fragment context maintains the mapping between column IDs and their qualified names, ensuring that column references work correctly across subquery boundaries and joins.
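A minimal sketch of that lookup from inside a transformer, assuming ctx exposes FragmentContext() the way DefaultTransformContext (described later in this index) does, and that columnID is the ID taken from the extracted ColumnRefData:

columnName, tableAlias := ctx.FragmentContext().GetQualifiedColumnRef(columnID)
if tableAlias != "" {
    // emit a qualified reference such as tableAlias.columnName
}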
func NewColumnRefTransformer ¶
func NewColumnRefTransformer(coordinator Coordinator) *ColumnRefTransformer
NewColumnRefTransformer creates a new column reference transformer
func (*ColumnRefTransformer) Transform ¶
func (t *ColumnRefTransformer) Transform(data ExpressionData, ctx TransformContext) (*SQLExpression, error)
Transform converts ColumnRefData to SQLExpression
type ColumnSpec ¶
type ColumnSpec struct {
    Name string `json:"name"`
    Type *Type `json:"type"`
    IsNotNull bool `json:"isNotNull"`
}
func (*ColumnSpec) SQLiteSchema ¶
func (s *ColumnSpec) SQLiteSchema() string
type CombinationFormatTimeInfo ¶
type CombinationFormatTimeInfo struct {
    AvailableTypes []TimeFormatType
    Parse func([]rune, *time.Time) (int, error)
    Format func(*time.Time) ([]rune, error)
}
func (*CombinationFormatTimeInfo) Available ¶
func (i *CombinationFormatTimeInfo) Available(typ TimeFormatType) bool
type CommitStmtAction ¶
type CommitStmtAction struct{}
func (*CommitStmtAction) Args ¶
func (a *CommitStmtAction) Args() []interface{}
func (*CommitStmtAction) Cleanup ¶
func (a *CommitStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*CommitStmtAction) ExecContext ¶
func (*CommitStmtAction) QueryContext ¶
type CompoundSQLFragment ¶
type CompoundSQLFragment struct {
// contains filtered or unexported fields
}
CompoundSQLFragment represents multiple SQL statements that should be executed in sequence
func NewCompoundSQLFragment ¶
func NewCompoundSQLFragment(statements []string) *CompoundSQLFragment
NewCompoundSQLFragment creates a new compound SQL fragment
func (*CompoundSQLFragment) GetStatements ¶
func (c *CompoundSQLFragment) GetStatements() []string
GetStatements returns the individual statements in the compound fragment
func (*CompoundSQLFragment) String ¶
func (c *CompoundSQLFragment) String() string
String returns the compound fragment as a collection of statements. Note: this is primarily for compatibility; the actual execution handles each statement separately.
func (*CompoundSQLFragment) WriteSql ¶
func (c *CompoundSQLFragment) WriteSql(writer *SQLWriter) error
WriteSql writes the compound fragment to a SQL writer
type ComputedColumnData ¶
type ComputedColumnData struct { Column *ast.Column `json:"column,omitempty"` Expression ExpressionData `json:"expression,omitempty"` }
ComputedColumnData represents computed column data
type Conn ¶
type Conn struct {
// contains filtered or unexported fields
}
func (*Conn) ExecContext ¶
func (*Conn) PrepareContext ¶
type Coordinator ¶
type Coordinator interface {
    // AST-based transformation methods (for initial entry points)
    TransformStatementNode(node ast.Node, ctx TransformContext) (SQLFragment, error)

    // Data-based transformation methods (for transformers working with pure data)
    TransformExpression(exprData ExpressionData, ctx TransformContext) (*SQLExpression, error)
    TransformStatement(stmtData StatementData, ctx TransformContext) (SQLFragment, error)
    TransformScan(scanData ScanData, ctx TransformContext) (*FromItem, error)
    TransformWithEntry(scanData ScanData, ctx TransformContext) (*WithClause, error)
}
Coordinator orchestrates the transformation process without doing the transformations itself
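A minimal sketch of that orchestration from a transformer's point of view, where coordinator is the transformer's injected Coordinator and childExprData/inputScanData are hypothetical values already extracted from the AST:

expr, err := coordinator.TransformExpression(childExprData, ctx)
if err != nil {
    return nil, err
}
fromItem, err := coordinator.TransformScan(inputScanData, ctx)
if err != nil {
    return nil, err
}
// expr and fromItem are then composed into this transformer's own SQL fragment.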
type CreateData ¶
type CreateData struct {
    Type CreateType `json:"type,omitempty"`
    Table *CreateTableData `json:"table,omitempty"`
    View *CreateViewData `json:"view,omitempty"`
    Function *CreateFunctionData `json:"function,omitempty"`
}
CreateData represents CREATE statement data
type CreateFunctionData ¶
type CreateFunctionData struct {
    FunctionName string `json:"function_name,omitempty"`
    Parameters []*ParameterDefinitionData `json:"parameters,omitempty"`
    ReturnType string `json:"return_type,omitempty"`
    Language string `json:"language,omitempty"`
    Code string `json:"code,omitempty"`
    Options map[string]ExpressionData `json:"options,omitempty"`
}
CreateFunctionData represents CREATE FUNCTION data
type CreateFunctionStatement ¶
type CreateFunctionStatement struct {
    IfNotExists bool
    FunctionName string
    Parameters []*ParameterDefinition
    ReturnType string
    Language string
    Code string
    Options map[string]*SQLExpression
}
func (*CreateFunctionStatement) String ¶
func (s *CreateFunctionStatement) String() string
func (*CreateFunctionStatement) WriteSql ¶
func (s *CreateFunctionStatement) WriteSql(writer *SQLWriter) error
CreateFunctionStatement WriteSql implementation
type CreateFunctionStmt ¶
type CreateFunctionStmt struct {
// contains filtered or unexported fields
}
func (*CreateFunctionStmt) Close ¶
func (s *CreateFunctionStmt) Close() error
func (*CreateFunctionStmt) NumInput ¶
func (s *CreateFunctionStmt) NumInput() int
type CreateFunctionStmtAction ¶
type CreateFunctionStmtAction struct {
// contains filtered or unexported fields
}
func (*CreateFunctionStmtAction) Args ¶
func (a *CreateFunctionStmtAction) Args() []interface{}
func (*CreateFunctionStmtAction) Cleanup ¶
func (a *CreateFunctionStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*CreateFunctionStmtAction) ExecContext ¶
func (*CreateFunctionStmtAction) QueryContext ¶
type CreateTableAsSelectStmtTransformer ¶
type CreateTableAsSelectStmtTransformer struct {
// contains filtered or unexported fields
}
CreateTableAsSelectStmtTransformer handles transformation of CreateTableAsSelectStmt nodes from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a CreateTableAsSelectStmt represents a CREATE TABLE AS SELECT statement, which creates a new table based on the result of a SELECT query. This transformer converts ZetaSQL CREATE TABLE AS SELECT statements to SQLite-compatible CREATE TABLE AS SELECT syntax.
The transformer handles:
- Extracting table name and creation options (IF NOT EXISTS)
- Recursively transforming the SELECT query scan through the coordinator
- Transforming each output column expression in the SELECT list
- Creating the final CreateTableStatement structure for SQL generation
This transformer bridges the gap between ZetaSQL's resolved AST structure and the SQLite CREATE TABLE AS SELECT statement representation.
func NewCreateTableAsSelectStmtTransformer ¶
func NewCreateTableAsSelectStmtTransformer(coordinator Coordinator) *CreateTableAsSelectStmtTransformer
NewCreateTableAsSelectStmtTransformer creates a new CREATE TABLE AS SELECT statement transformer
func (*CreateTableAsSelectStmtTransformer) CanTransform ¶
func (t *CreateTableAsSelectStmtTransformer) CanTransform(node ast.Node) bool
CanTransform checks if this transformer can handle the given node type
func (*CreateTableAsSelectStmtTransformer) Transform ¶
func (t *CreateTableAsSelectStmtTransformer) Transform(data StatementData, ctx TransformContext) (SQLFragment, error)
Transform transforms CREATE TABLE AS SELECT statement data into a SQL fragment
type CreateTableData ¶
type CreateTableData struct {
    TableName string `json:"table_name,omitempty"`
    Columns []*ColumnDefinitionData `json:"columns,omitempty"`
    AsSelect *SelectData `json:"as_select,omitempty"`
    IfNotExists bool `json:"if_not_exists,omitempty"`
}
CreateTableData represents CREATE TABLE data
type CreateTableStatement ¶
type CreateTableStatement struct {
    IfNotExists bool
    TableName string
    Columns []*ColumnDefinition
    AsSelect *SelectStatement
}
func (*CreateTableStatement) String ¶
func (s *CreateTableStatement) String() string
func (*CreateTableStatement) WriteSql ¶
func (s *CreateTableStatement) WriteSql(writer *SQLWriter) error
CreateTableStatement WriteSql implementation
type CreateTableStmt ¶
type CreateTableStmt struct {
// contains filtered or unexported fields
}
func (*CreateTableStmt) Close ¶
func (s *CreateTableStmt) Close() error
func (*CreateTableStmt) NumInput ¶
func (s *CreateTableStmt) NumInput() int
type CreateTableStmtAction ¶
type CreateTableStmtAction struct {
// contains filtered or unexported fields
}
func (*CreateTableStmtAction) Args ¶
func (a *CreateTableStmtAction) Args() []interface{}
func (*CreateTableStmtAction) Cleanup ¶
func (a *CreateTableStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*CreateTableStmtAction) ExecContext ¶
func (*CreateTableStmtAction) QueryContext ¶
type CreateType ¶
type CreateType int
CreateType identifies the type of CREATE statement
const (
    CreateTypeTable CreateType = iota
    CreateTypeView
    CreateTypeFunction
)
type CreateViewData ¶
type CreateViewData struct { ViewName string `json:"view_name,omitempty"` Query SelectData `json:"query,omitempty"` }
CreateViewData represents CREATE VIEW data
type CreateViewStatement ¶
type CreateViewStatement struct { IfNotExists bool ViewName string Query SQLFragment }
func (*CreateViewStatement) String ¶
func (s *CreateViewStatement) String() string
func (*CreateViewStatement) WriteSql ¶
func (s *CreateViewStatement) WriteSql(writer *SQLWriter) error
CreateViewStatement WriteSql implementation
type CreateViewStmt ¶
type CreateViewStmt struct {
// contains filtered or unexported fields
}
func (*CreateViewStmt) Close ¶
func (s *CreateViewStmt) Close() error
func (*CreateViewStmt) NumInput ¶
func (s *CreateViewStmt) NumInput() int
type CreateViewStmtAction ¶
type CreateViewStmtAction struct {
// contains filtered or unexported fields
}
func (*CreateViewStmtAction) Args ¶
func (a *CreateViewStmtAction) Args() []interface{}
func (*CreateViewStmtAction) Cleanup ¶
func (a *CreateViewStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*CreateViewStmtAction) ExecContext ¶
func (*CreateViewStmtAction) QueryContext ¶
type CreateViewStmtTransformer ¶
type CreateViewStmtTransformer struct {
// contains filtered or unexported fields
}
CreateViewStmtTransformer handles transformation of CreateViewStmt nodes from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a CreateViewStmt represents a CREATE VIEW statement, which creates a new view based on the result of a SELECT query. This transformer converts ZetaSQL CREATE VIEW statements to SQLite-compatible CREATE VIEW syntax.
The transformer handles:
- Extracting view name and creation options (IF NOT EXISTS)
- Recursively transforming the SELECT query scan through the coordinator
- Transforming each output column expression in the SELECT list
- Creating the final CreateViewStatement structure for SQL generation
This transformer bridges the gap between ZetaSQL's resolved AST structure and the SQLite CREATE VIEW statement representation.
func NewCreateViewStmtTransformer ¶
func NewCreateViewStmtTransformer(coordinator Coordinator) *CreateViewStmtTransformer
NewCreateViewStmtTransformer creates a new CREATE VIEW statement transformer
func (*CreateViewStmtTransformer) CanTransform ¶
func (t *CreateViewStmtTransformer) CanTransform(node ast.Node) bool
CanTransform checks if this transformer can handle the given node type
func (*CreateViewStmtTransformer) Transform ¶
func (t *CreateViewStmtTransformer) Transform(data StatementData, ctx TransformContext) (SQLFragment, error)
Transform transforms CREATE VIEW statement data into a SQL fragment
type CustomInverseWindowAggregate ¶
type CustomInverseWindowAggregate interface {
Inverse(values []Value, agg *WindowFuncAggregatedStatus) error
}
type CustomStepWindowAggregate ¶
type CustomStepWindowAggregate interface {
Step(values []Value, agg *WindowFuncAggregatedStatus) error
}
type DMLStmt ¶
type DMLStmt struct {
// contains filtered or unexported fields
}
func (*DMLStmt) CheckNamedValue ¶
func (s *DMLStmt) CheckNamedValue(value *driver.NamedValue) error
func (*DMLStmt) ExecContext ¶
type DMLStmtAction ¶
type DMLStmtAction struct {
// contains filtered or unexported fields
}
func (*DMLStmtAction) Args ¶
func (a *DMLStmtAction) Args() []interface{}
func (*DMLStmtAction) Cleanup ¶
func (a *DMLStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*DMLStmtAction) ExecContext ¶
func (*DMLStmtAction) QueryContext ¶
type DMLStmtTransformer ¶
type DMLStmtTransformer struct {
// contains filtered or unexported fields
}
DMLStmtTransformer handles transformation of DML statement nodes (INSERT, UPDATE, DELETE) from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, DML statements include INSERT, UPDATE, and DELETE operations that modify table data. These operations have specific semantics and syntax that need to be converted to SQLite equivalents.
The transformer converts ZetaSQL DML statements by:
- Handling INSERT ... VALUES vs INSERT ... SELECT patterns
- Converting UPDATE statements with SET clauses and optional WHERE conditions
- Transforming DELETE statements with WHERE clauses
- Properly formatting table names and column references for SQLite
- Ensuring expression transformations work correctly within DML contexts
This transformer bridges the gap between ZetaSQL's resolved DML AST structure and the SQLite DML statement representation, ensuring all components are properly transformed and SQL generation produces valid SQLite syntax.
func NewDMLStmtTransformer ¶
func NewDMLStmtTransformer(coordinator Coordinator) *DMLStmtTransformer
NewDMLStmtTransformer creates a new DML statement transformer
func (*DMLStmtTransformer) Transform ¶
func (t *DMLStmtTransformer) Transform(data StatementData, ctx TransformContext) (SQLFragment, error)
Transform converts DML statement data to appropriate SQL statement fragments
type DateValue ¶
func (DateValue) AddDateWithInterval ¶
func (DateValue) ToArray ¶
func (d DateValue) ToArray() (*ArrayValue, error)
func (DateValue) ToStruct ¶
func (d DateValue) ToStruct() (*StructValue, error)
type DatetimeValue ¶
func (DatetimeValue) Format ¶
func (d DatetimeValue) Format(verb rune) string
func (DatetimeValue) Interface ¶
func (d DatetimeValue) Interface() interface{}
func (DatetimeValue) ToArray ¶
func (d DatetimeValue) ToArray() (*ArrayValue, error)
func (DatetimeValue) ToBool ¶
func (d DatetimeValue) ToBool() (bool, error)
func (DatetimeValue) ToBytes ¶
func (d DatetimeValue) ToBytes() ([]byte, error)
func (DatetimeValue) ToFloat64 ¶
func (d DatetimeValue) ToFloat64() (float64, error)
func (DatetimeValue) ToInt64 ¶
func (d DatetimeValue) ToInt64() (int64, error)
func (DatetimeValue) ToJSON ¶
func (d DatetimeValue) ToJSON() (string, error)
func (DatetimeValue) ToString ¶
func (d DatetimeValue) ToString() (string, error)
func (DatetimeValue) ToStruct ¶
func (d DatetimeValue) ToStruct() (*StructValue, error)
type DefaultFragmentContext ¶
type DefaultFragmentContext struct {
// contains filtered or unexported fields
}
DefaultFragmentContext provides fragment context functionality
func NewDefaultFragmentContext ¶
func NewDefaultFragmentContext() *DefaultFragmentContext
NewDefaultFragmentContext creates a new fragment context
func (*DefaultFragmentContext) AddAvailableColumn ¶
func (fc *DefaultFragmentContext) AddAvailableColumn(columnID int, info *ColumnInfo)
AddAvailableColumn adds a column to the available columns map
func (*DefaultFragmentContext) AddAvailableColumnsForDML ¶
func (fc *DefaultFragmentContext) AddAvailableColumnsForDML(scanData *ScanData)
AddAvailableColumnsForDML registers the columns for the base table of a DML statement; aliases are not used here, and the underlying SQLite column names are used instead.
func (*DefaultFragmentContext) EnterScope ¶
func (fc *DefaultFragmentContext) EnterScope() ScopeToken
EnterScope enters a new scope
func (*DefaultFragmentContext) ExitScope ¶
func (fc *DefaultFragmentContext) ExitScope(token ScopeToken)
ExitScope exits the current scope
func (*DefaultFragmentContext) GetColumnExpression ¶
func (fc *DefaultFragmentContext) GetColumnExpression(columnID int) *SQLExpression
GetColumnExpression gets the SQL expression for a column
func (*DefaultFragmentContext) GetID ¶
func (fc *DefaultFragmentContext) GetID() string
func (*DefaultFragmentContext) GetQualifiedColumnExpression ¶
func (fc *DefaultFragmentContext) GetQualifiedColumnExpression(columnID int) *SQLExpression
func (*DefaultFragmentContext) GetQualifiedColumnRef ¶
func (fc *DefaultFragmentContext) GetQualifiedColumnRef(columnID int) (string, string)
GetQualifiedColumnRef returns the qualified column reference for a column ID
func (*DefaultFragmentContext) RegisterColumnScope ¶
func (fc *DefaultFragmentContext) RegisterColumnScope(columnID int, scopeAlias string)
RegisterColumnScope registers a mapping from column ID to scope alias
func (*DefaultFragmentContext) RegisterColumnScopeMapping ¶
func (fc *DefaultFragmentContext) RegisterColumnScopeMapping(scopeAlias string, columns []*ColumnData)
RegisterColumnScopeMapping registers scope mappings for a list of columns
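A minimal usage sketch of the scope and column APIs above; the column ID, name, and table alias are hypothetical values:

fc := NewDefaultFragmentContext()
token := fc.EnterScope()
defer fc.ExitScope(token)

fc.AddAvailableColumn(42, &ColumnInfo{Name: "user_id", TableAlias: "t1"})
expr := fc.GetQualifiedColumnExpression(42) // resolves a qualified reference for column 42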
type DefaultScopeToken ¶
type DefaultScopeToken struct {
// contains filtered or unexported fields
}
DefaultScopeToken implements ScopeToken
func (*DefaultScopeToken) ID ¶
func (t *DefaultScopeToken) ID() string
ID returns the scope identifier
type DefaultTransformContext ¶
type DefaultTransformContext struct {
// contains filtered or unexported fields
}
DefaultTransformContext provides a default implementation of TransformContext
func NewDefaultTransformContext ¶
func NewDefaultTransformContext(ctx context.Context, config *TransformConfig) *DefaultTransformContext
NewDefaultTransformContext creates a new transform context
func (*DefaultTransformContext) AddWithEntryColumnMapping ¶
func (c *DefaultTransformContext) AddWithEntryColumnMapping(name string, columns []*ColumnData)
AddWithEntryColumnMapping adds column mappings for a WITH query
func (*DefaultTransformContext) Config ¶
func (c *DefaultTransformContext) Config() *TransformConfig
Config returns the transformation configuration
func (*DefaultTransformContext) Context ¶
func (c *DefaultTransformContext) Context() context.Context
Context returns the underlying Go context
func (*DefaultTransformContext) FragmentContext ¶
func (c *DefaultTransformContext) FragmentContext() FragmentContextProvider
FragmentContext returns the fragment context provider
func (*DefaultTransformContext) GetWithEntryMapping ¶
func (c *DefaultTransformContext) GetWithEntryMapping(name string) map[string]string
GetWithEntryMapping retrieves column mappings for a WITH query
func (*DefaultTransformContext) WithFragmentContext ¶
func (c *DefaultTransformContext) WithFragmentContext(fc FragmentContextProvider) TransformContext
WithFragmentContext returns a new context with updated fragment context
type DeleteData ¶
type DeleteData struct {
    TableName string `json:"table_name,omitempty"`
    TableScan *ScanData `json:"table_scan,omitempty"`
    WhereClause *ExpressionData `json:"where_clause,omitempty"`
}
DeleteData represents DELETE statement data
type DeleteStatement ¶
type DeleteStatement struct { Table SQLFragment WhereExpr SQLFragment }
func (*DeleteStatement) String ¶
func (d *DeleteStatement) String() string
func (*DeleteStatement) WriteSql ¶
func (d *DeleteStatement) WriteSql(writer *SQLWriter) error
type DisableQueryFormattingKey ¶
type DisableQueryFormattingKey struct{}
type DropData ¶
type DropData struct {
    IfExists bool `json:"if_exists,omitempty"`
    ObjectType string `json:"object_type,omitempty"` // TABLE, VIEW, INDEX, SCHEMA, FUNCTION
    ObjectName string `json:"object_name,omitempty"`
}
DropData represents DROP statement data
type DropStatement ¶
func (*DropStatement) String ¶
func (s *DropStatement) String() string
func (*DropStatement) WriteSql ¶
func (s *DropStatement) WriteSql(writer *SQLWriter) error
DropStatement WriteSql implementation
type DropStmtAction ¶
type DropStmtAction struct {
// contains filtered or unexported fields
}
func (*DropStmtAction) Args ¶
func (a *DropStmtAction) Args() []interface{}
func (*DropStmtAction) Cleanup ¶
func (a *DropStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*DropStmtAction) ExecContext ¶
func (*DropStmtAction) QueryContext ¶
type DropStmtTransformer ¶
type DropStmtTransformer struct {
// contains filtered or unexported fields
}
DropStmtTransformer handles transformation of DROP statement data to DropStatement fragments.
In BigQuery/ZetaSQL, DROP statements are used to remove database objects like tables, views, indexes, schemas, and functions. These statements are typically simple and don't require complex recursive transformation.
The transformer converts extracted DropData by:
- Validating the input data type is StatementTypeDrop
- Creating a DropStatement SQLFragment with the extracted object information
- Performing no recursive transformation, since DROP statements are leaf-level operations
This transformer bridges the gap between the extracted DropData and the DropStatement SQL generation, ensuring proper object type handling and name formatting.
func NewDropStmtTransformer ¶
func NewDropStmtTransformer(coordinator Coordinator) *DropStmtTransformer
NewDropStmtTransformer creates a new DROP statement transformer
func (*DropStmtTransformer) Transform ¶
func (t *DropStmtTransformer) Transform(data StatementData, ctx TransformContext) (SQLFragment, error)
Transform converts DROP statement data to a DropStatement. This mirrors the logic from the existing VisitDropStmt method.
type ErrorGroup ¶
type ErrorGroup struct {
// contains filtered or unexported fields
}
func (*ErrorGroup) Add ¶
func (eg *ErrorGroup) Add(e error)
func (*ErrorGroup) Error ¶
func (eg *ErrorGroup) Error() string
func (*ErrorGroup) HasError ¶
func (eg *ErrorGroup) HasError() bool
type ExistsExpression ¶
type ExistsExpression struct {
Subquery *SelectStatement
}
ExistsExpression represents SQL EXISTS expressions
func (*ExistsExpression) String ¶
func (e *ExistsExpression) String() string
func (*ExistsExpression) WriteSql ¶
func (e *ExistsExpression) WriteSql(writer *SQLWriter) error
WriteSql method for ExistsExpression
type ExpressionData ¶
type ExpressionData struct {
    Type ExpressionType `json:"type,omitempty"`
    Parameter *ParameterData `json:"parameter,omitempty"`
    Literal *LiteralData `json:"literal,omitempty"`
    Function *FunctionCallData `json:"function,omitempty"`
    Cast *CastData `json:"cast,omitempty"`
    Column *ColumnRefData `json:"column,omitempty"`
    Binary *BinaryExpressionData `json:"binary,omitempty"`
    Case *CaseExpressionData `json:"case,omitempty"`
    Subquery *SubqueryData `json:"subquery,omitempty"`
}
ExpressionData represents the pure data extracted from an expression node
func NewColumnExpressionData ¶
func NewColumnExpressionData(column *ast.Column) ExpressionData
func NewFunctionCallExpressionData ¶
func NewFunctionCallExpressionData(name string, arguments ...ExpressionData) ExpressionData
func (*ExpressionData) Value ¶
func (e *ExpressionData) Value() interface{}
type ExpressionTransformer ¶
type ExpressionTransformer interface { Transformer[ExpressionData, *SQLExpression] }
ExpressionTransformer specifically handles expression transformations
func NewAggregateFunctionTransformer ¶
func NewAggregateFunctionTransformer(coord Coordinator) ExpressionTransformer
Placeholder transformer constructors - these would be implemented in separate files
func NewAnalyticFunctionTransformer ¶
func NewAnalyticFunctionTransformer(coord Coordinator) ExpressionTransformer
func NewComputedColumnTransformer ¶
func NewComputedColumnTransformer(coord Coordinator) ExpressionTransformer
func NewDMLDefaultTransformer ¶
func NewDMLDefaultTransformer() ExpressionTransformer
func NewDMLValueTransformer ¶
func NewDMLValueTransformer(coord Coordinator) ExpressionTransformer
func NewGetJsonFieldTransformer ¶
func NewGetJsonFieldTransformer(coord Coordinator) ExpressionTransformer
func NewGetStructFieldTransformer ¶
func NewGetStructFieldTransformer(coord Coordinator) ExpressionTransformer
func NewMakeStructTransformer ¶
func NewMakeStructTransformer(coord Coordinator) ExpressionTransformer
func NewOutputColumnTransformer ¶
func NewOutputColumnTransformer(coord Coordinator) ExpressionTransformer
func NewSubqueryExprTransformer ¶
func NewSubqueryExprTransformer(coord Coordinator) ExpressionTransformer
type ExpressionType ¶
type ExpressionType int
ExpressionType represents different types of SQL expressions
const (
    ExpressionTypeColumn ExpressionType = iota
    ExpressionTypeLiteral
    ExpressionTypeParameter
    ExpressionTypeFunction
    ExpressionTypeBinary
    ExpressionTypeUnary
    ExpressionTypeSubquery
    ExpressionTypeStar
    ExpressionTypeCase
    ExpressionTypeExists
    ExpressionTypeCast
)
type FilterScanData ¶
type FilterScanData struct { InputScan ScanData `json:"input_scan,omitempty"` FilterExpr ExpressionData `json:"filter_expr,omitempty"` }
FilterScanData represents filter operation data
type FilterScanTransformer ¶
type FilterScanTransformer struct {
// contains filtered or unexported fields
}
FilterScanTransformer handles WHERE clause filter transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a FilterScan represents SQL WHERE clause operations that filter rows from an input scan based on boolean expressions. This corresponds to row-level filtering that occurs before grouping, aggregation, or other operations.
The transformer converts ZetaSQL FilterScan nodes into SQLite WHERE clauses by:
- Recursively transforming the input scan to get the data source
- Transforming the filter expression through the coordinator
- Creating a SELECT * FROM (...) WHERE <condition> wrapper (sketched below)
- Preserving column availability through the fragment context
Filter expressions can be complex boolean logic involving column references, function calls, comparisons, and logical operators (AND, OR, NOT).
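For illustration, the wrapper described above produces SQL of this shape; the table, columns, and predicate are hypothetical, not the transformer's literal output:

const exampleFilterSQL = `
SELECT * FROM (SELECT id, amount FROM orders) WHERE amount > 100
`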
func NewFilterScanTransformer ¶
func NewFilterScanTransformer(coordinator Coordinator) *FilterScanTransformer
NewFilterScanTransformer creates a new filter scan transformer
func (*FilterScanTransformer) Transform ¶
func (t *FilterScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts FilterScanData to FromItem with WHERE clause
type FloatValue ¶
type FloatValue float64
func (FloatValue) Format ¶
func (fv FloatValue) Format(verb rune) string
func (FloatValue) Interface ¶
func (fv FloatValue) Interface() interface{}
func (FloatValue) ToArray ¶
func (fv FloatValue) ToArray() (*ArrayValue, error)
func (FloatValue) ToBool ¶
func (fv FloatValue) ToBool() (bool, error)
func (FloatValue) ToBytes ¶
func (fv FloatValue) ToBytes() ([]byte, error)
func (FloatValue) ToFloat64 ¶
func (fv FloatValue) ToFloat64() (float64, error)
func (FloatValue) ToInt64 ¶
func (fv FloatValue) ToInt64() (int64, error)
func (FloatValue) ToJSON ¶
func (fv FloatValue) ToJSON() (string, error)
func (FloatValue) ToString ¶
func (fv FloatValue) ToString() (string, error)
func (FloatValue) ToStruct ¶
func (fv FloatValue) ToStruct() (*StructValue, error)
type FormatContext ¶
type FormatContext struct {
// contains filtered or unexported fields
}
type FormatFlag ¶
type FormatFlag int
const (
    FormatFlagNone FormatFlag = 0
    FormatFlagMinus FormatFlag = 1
    FormatFlagPlus FormatFlag = 2
    FormatFlagSpace FormatFlag = 3
    FormatFlagSharp FormatFlag = 4
    FormatFlagZero FormatFlag = 5
    FormatFlagQuote FormatFlag = 6
)
type FormatInfo ¶
type FormatInfo struct {
// contains filtered or unexported fields
}
type FormatParam ¶
type FormatParam struct {
// contains filtered or unexported fields
}
type FormatPrecision ¶
type FormatPrecision struct {
// contains filtered or unexported fields
}
type FormatTimeInfo ¶
type FormatTimeInfo struct {
    AvailableTypes []TimeFormatType
    Parse ParseFunction
    Format func(*time.Time) ([]rune, error)
}
func (*FormatTimeInfo) Available ¶
func (i *FormatTimeInfo) Available(typ TimeFormatType) bool
type FormatWidth ¶
type FormatWidth struct {
// contains filtered or unexported fields
}
type FragmentContext ¶
type FragmentContext struct {
    // Current scope information
    TableAliases map[string]string
    CurrentScope *ScopeInfo

    // Symbol management
    AliasGenerator *AliasGenerator
    WithEntries map[string]map[string]string

    // Testing instrumentation (optional)
    OnPushScope func(scopeType string, stackDepth int)
    OnPopScope func(alias string, stackDepth int)

    ResolvedColumns map[string]*ColumnInfo
    // contains filtered or unexported fields
}
FragmentContext stores contextual information during AST traversal
func NewFragmentContext ¶
func NewFragmentContext() *FragmentContext
func (*FragmentContext) AddAvailableColumn ¶
func (fc *FragmentContext) AddAvailableColumn(column *ast.Column, info *ColumnInfo)
Column management methods
func (*FragmentContext) AddWithEntryColumnMapping ¶
func (fc *FragmentContext) AddWithEntryColumnMapping(name string, columns []*ast.Column)
func (*FragmentContext) FilterScope ¶
func (fc *FragmentContext) FilterScope(scopeType string, list []*ast.Column)
func (*FragmentContext) GetColumnExpression ¶
func (fc *FragmentContext) GetColumnExpression(column *ast.Column) *SQLExpression
func (*FragmentContext) OpenScope ¶
func (fc *FragmentContext) OpenScope(scopeType string, columns []*ast.Column) ScopeInfo
func (*FragmentContext) PopScope ¶
func (fc *FragmentContext) PopScope(alias string) *ScopeInfo
func (*FragmentContext) PushScope ¶
func (fc *FragmentContext) PushScope(scopeType string)
func (*FragmentContext) UseScope ¶
func (fc *FragmentContext) UseScope(scopeType string) func()
type FragmentContextProvider ¶
type FragmentContextProvider interface {
    GetColumnExpression(columnID int) *SQLExpression
    GetQualifiedColumnExpression(columnID int) *SQLExpression
    AddAvailableColumn(columnID int, info *ColumnInfo)
    GetID() string
    EnterScope() ScopeToken
    ExitScope(token ScopeToken)
    // Column ID to scope mapping for qualified references
    GetQualifiedColumnRef(columnID int) (columnName, tableAlias string)
    RegisterColumnScope(columnID int, scopeAlias string)
    RegisterColumnScopeMapping(scopeAlias string, columns []*ColumnData)
    AddAvailableColumnsForDML(data *ScanData)
}
FragmentContextProvider abstracts the fragment context functionality
type FragmentStorage ¶
type FragmentStorage struct {
// contains filtered or unexported fields
}
FragmentStorage implements the main storage mechanism
type FrameBound ¶
type FrameBound struct {
    Type   string // UNBOUNDED, CURRENT, PRECEDING, FOLLOWING
    Offset *SQLExpression
}
FrameBound represents frame boundary specifications
func (*FrameBound) WriteSql ¶
func (f *FrameBound) WriteSql(writer *SQLWriter) error
type FrameBoundData ¶
type FrameBoundData struct {
    Type   string // UNBOUNDED, CURRENT, PRECEDING, FOLLOWING
    Offset ExpressionData
}
FrameBoundData represents frame boundary specifications
type FrameClause ¶
type FrameClause struct {
    Unit  string // ROWS, RANGE, GROUPS
    Start *FrameBound
    End   *FrameBound
}
FrameClause represents window frame specifications
func (*FrameClause) WriteSql ¶
func (f *FrameClause) WriteSql(writer *SQLWriter) error
type FrameClauseData ¶
type FrameClauseData struct {
    Unit  string // ROWS, RANGE, GROUPS
    Start *FrameBoundData
    End   *FrameBoundData
}
FrameClauseData represents window frame specifications
type FromItem ¶
type FromItem struct { Type FromItemType TableName string Alias string Subquery *SelectStatement Join *JoinClause WithRef string TableFunction *TableFunction UnnestExpr *SQLExpression Hints []string }
FromItem represents items in the FROM clause
func NewInnerJoin ¶
func NewInnerJoin(left, right *FromItem, condition *SQLExpression) *FromItem
NewInnerJoin creates an INNER JOIN
func NewSubqueryFromItem ¶
func NewSubqueryFromItem(subquery *SelectStatement, alias string) *FromItem
NewSubqueryFromItem creates a subquery FROM item
func NewTableFromItem ¶
NewTableFromItem creates a table FROM item
type FromItemType ¶
type FromItemType int
FromItemType represents different types of FROM clause items
const ( FromItemTypeTable FromItemType = iota FromItemTypeSubquery FromItemTypeJoin FromItemTypeWithRef FromItemTypeTableFunction FromItemTypeUnnest FromItemTypeSingleRow )
type FuncInfo ¶
type FuncInfo struct { Name string BindFunc BindFunction SafeFunc BindFunction }
type FunctionCall ¶
type FunctionCall struct { Name string Arguments []*SQLExpression IsDistinct bool WindowSpec *WindowSpecification }
FunctionCall represents SQL function calls
func (*FunctionCall) String ¶
func (f *FunctionCall) String() string
func (*FunctionCall) WriteSql ¶
func (f *FunctionCall) WriteSql(writer *SQLWriter) error
type FunctionCallData ¶
type FunctionCallData struct { Name string `json:"name,omitempty"` Arguments []ExpressionData `json:"arguments,omitempty"` WindowSpec *WindowSpecificationData `json:"window_spec,omitempty"` Signature *FunctionSignature `json:"signature,omitempty"` }
FunctionCallData represents function call data
type FunctionCallTransformer ¶
type FunctionCallTransformer struct {
// contains filtered or unexported fields
}
FunctionCallTransformer handles transformation of function calls from ZetaSQL to SQLite.
BigQuery/ZetaSQL supports a rich set of built-in functions with different semantics than SQLite. This transformer bridges the gap by: - Converting ZetaSQL function calls to SQLite equivalents - Handling special ZetaSQL functions (IFNULL, IF, CASE) via custom zetasqlite_* functions - Managing window functions with proper OVER clause transformation - Processing function arguments recursively through the coordinator - Injecting current time for time-dependent functions when needed
Key ZetaSQL -> SQLite transformations handled: - zetasqlite_ifnull -> CASE WHEN...IS NULL pattern - zetasqlite_if -> CASE WHEN...THEN...ELSE pattern - zetasqlite_case_* -> CASE expressions with proper value/condition handling - Window functions with PARTITION BY, ORDER BY, and frame specifications - Built-in function mapping through the function registry
The transformer ensures function semantics are preserved across the SQL dialect boundary.
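As a rough sketch of the CASE rewrites listed above (the operand names here are placeholders; real output contains the transformed argument expressions):

    // zetasqlite_ifnull(expr, null_result) is rewritten as a CASE expression,
    // and zetasqlite_if(cond, then_value, else_value) follows the same pattern.
    const ifnullShape = `CASE WHEN expr IS NULL THEN null_result ELSE expr END`
    const ifShape = `CASE WHEN cond THEN then_value ELSE else_value END`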
func NewFunctionCallTransformer ¶
func NewFunctionCallTransformer(coordinator Coordinator) *FunctionCallTransformer
NewFunctionCallTransformer creates a new function call transformer
func (*FunctionCallTransformer) Transform ¶
func (t *FunctionCallTransformer) Transform(data ExpressionData, ctx TransformContext) (*SQLExpression, error)
Transform converts FunctionCallData to SQLExpression
type FunctionSignature ¶
type FunctionSignature struct {
Arguments []*ArgumentInfo `json:"arguments,omitempty"`
}
FunctionSignature represents function signature information
type FunctionSpec ¶
type FunctionSpec struct { IsTemp bool `json:"isTemp"` NamePath []string `json:"name"` Language string `json:"language"` Args []*NameWithType `json:"args"` Return *Type `json:"return"` Body *SQLExpression `json:"body"` Code string `json:"code"` UpdatedAt time.Time `json:"updatedAt"` CreatedAt time.Time `json:"createdAt"` }
func (*FunctionSpec) CallSQL ¶
func (s *FunctionSpec) CallSQL(ctx context.Context, callNode *ast.BaseFunctionCallNode, argValues []*SQLExpression) (*SQLExpression, error)
func (*FunctionSpec) CallSQLData ¶
func (s *FunctionSpec) CallSQLData(ctx context.Context, functionData *FunctionCallData, argValues []*SQLExpression) (*SQLExpression, error)
func (*FunctionSpec) FuncName ¶
func (s *FunctionSpec) FuncName() string
func (*FunctionSpec) SQL ¶
func (s *FunctionSpec) SQL() string
type GroupingSetData ¶
type GroupingSetData struct {
GroupByColumns []*ComputedColumnData `json:"group_by_columns,omitempty"`
}
GroupingSetData represents a grouping set
type HLL_COUNT_INIT ¶
type HLL_COUNT_INIT struct {
// contains filtered or unexported fields
}
func (*HLL_COUNT_INIT) Done ¶
func (f *HLL_COUNT_INIT) Done() (Value, error)
func (*HLL_COUNT_INIT) Step ¶
func (f *HLL_COUNT_INIT) Step(input Value, precision int64, opt *AggregatorOption) (e error)
type HLL_COUNT_MERGE ¶
type HLL_COUNT_MERGE struct {
// contains filtered or unexported fields
}
func (*HLL_COUNT_MERGE) Done ¶
func (f *HLL_COUNT_MERGE) Done() (Value, error)
func (*HLL_COUNT_MERGE) Step ¶
func (f *HLL_COUNT_MERGE) Step(sketch []byte, opt *AggregatorOption) error
type HLL_COUNT_MERGE_PARTIAL ¶
type HLL_COUNT_MERGE_PARTIAL struct {
// contains filtered or unexported fields
}
func (*HLL_COUNT_MERGE_PARTIAL) Done ¶
func (f *HLL_COUNT_MERGE_PARTIAL) Done() (Value, error)
func (*HLL_COUNT_MERGE_PARTIAL) Step ¶
func (f *HLL_COUNT_MERGE_PARTIAL) Step(sketch []byte, opt *AggregatorOption) error
type InsertData ¶
type InsertData struct { TableName string `json:"table_name,omitempty"` Columns []string `json:"columns,omitempty"` Values [][]ExpressionData `json:"values,omitempty"` Query *SelectData `json:"query,omitempty"` }
InsertData represents INSERT statement data
type InsertStatement ¶
type InsertStatement struct { TableName string Columns []string Query *SelectStatement Rows []SQLFragment }
func (*InsertStatement) String ¶
func (d *InsertStatement) String() string
func (*InsertStatement) WriteSql ¶
func (d *InsertStatement) WriteSql(writer *SQLWriter) error
type IntValue ¶
type IntValue int64
func (IntValue) ToArray ¶
func (iv IntValue) ToArray() (*ArrayValue, error)
func (IntValue) ToStruct ¶
func (iv IntValue) ToStruct() (*StructValue, error)
type IntervalValue ¶
type IntervalValue struct {
*bigquery.IntervalValue
}
func (*IntervalValue) Format ¶
func (iv *IntervalValue) Format(verb rune) string
func (*IntervalValue) Interface ¶
func (iv *IntervalValue) Interface() interface{}
func (*IntervalValue) ToArray ¶
func (iv *IntervalValue) ToArray() (*ArrayValue, error)
func (*IntervalValue) ToBool ¶
func (iv *IntervalValue) ToBool() (bool, error)
func (*IntervalValue) ToBytes ¶
func (iv *IntervalValue) ToBytes() ([]byte, error)
func (*IntervalValue) ToFloat64 ¶
func (iv *IntervalValue) ToFloat64() (float64, error)
func (*IntervalValue) ToInt64 ¶
func (iv *IntervalValue) ToInt64() (int64, error)
func (*IntervalValue) ToJSON ¶
func (iv *IntervalValue) ToJSON() (string, error)
func (*IntervalValue) ToString ¶
func (iv *IntervalValue) ToString() (string, error)
func (*IntervalValue) ToStruct ¶
func (iv *IntervalValue) ToStruct() (*StructValue, error)
type JoinClause ¶
type JoinClause struct { Type JoinType Left *FromItem Right *FromItem Condition *SQLExpression Using []string }
JoinClause represents JOIN operations
func (*JoinClause) WriteSql ¶
func (j *JoinClause) WriteSql(writer *SQLWriter) error
type JoinScanData ¶
type JoinScanData struct { JoinType ast.JoinType `json:"join_type,omitempty"` LeftScan ScanData `json:"left_scan,omitempty"` RightScan ScanData `json:"right_scan,omitempty"` JoinCondition *ExpressionData `json:"join_condition,omitempty"` UsingColumns []string `json:"using_columns,omitempty"` }
JoinScanData represents join operation data
type JoinScanTransformer ¶
type JoinScanTransformer struct {
// contains filtered or unexported fields
}
JoinScanTransformer handles JOIN scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a JoinScan represents SQL JOIN operations that combine rows from two input scans based on join conditions and join types. This includes INNER JOIN, LEFT JOIN, RIGHT JOIN, FULL OUTER JOIN, and CROSS JOIN operations.
The transformer converts ZetaSQL JoinScan nodes into SQLite JOIN clauses by: - Recursively transforming left and right input scans - Converting ZetaSQL join types to SQLite equivalents - Transforming join conditions through the coordinator - Wrapping the result in a SELECT * subquery for consistent output structure
Join conditions are expressions that determine which rows from the left and right scans should be combined. The transformer ensures proper column qualification across the join boundary through the fragment context.
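A compact sketch of the resulting FROM item; the table names, aliases, and condition are invented, and real output qualifies columns through the fragment context rather than with the L/R aliases shown here.

    // joinScanShape: each input scan becomes a derived table, the join type and
    // condition are converted, and the whole JOIN is wrapped in a SELECT *.
    const joinScanShape = `
    SELECT *
    FROM (SELECT id, name FROM users) AS L
    JOIN (SELECT user_id, title FROM profiles) AS R
      ON L.id = R.user_id`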
func NewJoinScanTransformer ¶
func NewJoinScanTransformer(coordinator Coordinator) *JoinScanTransformer
NewJoinScanTransformer creates a new join scan transformer
func (*JoinScanTransformer) Transform ¶
func (t *JoinScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts JoinScanData to FromItem with JOIN clause
type JsonValue ¶
type JsonValue string
func (JsonValue) ToArray ¶
func (jv JsonValue) ToArray() (*ArrayValue, error)
func (JsonValue) ToStruct ¶
func (jv JsonValue) ToStruct() (*StructValue, error)
type LOGICAL_AND ¶
type LOGICAL_AND struct {
// contains filtered or unexported fields
}
func (*LOGICAL_AND) Done ¶
func (f *LOGICAL_AND) Done() (Value, error)
func (*LOGICAL_AND) Step ¶
func (f *LOGICAL_AND) Step(cond Value, opt *AggregatorOption) error
type LOGICAL_OR ¶
type LOGICAL_OR struct {
// contains filtered or unexported fields
}
func (*LOGICAL_OR) Done ¶
func (f *LOGICAL_OR) Done() (Value, error)
func (*LOGICAL_OR) Step ¶
func (f *LOGICAL_OR) Step(cond Value, opt *AggregatorOption) error
type LimitClause ¶
type LimitClause struct { Count *SQLExpression Offset *SQLExpression }
type LimitData ¶
type LimitData struct { Count ExpressionData `json:"count,omitempty"` Offset *ExpressionData `json:"offset,omitempty"` }
LimitData represents LIMIT clause data
type LimitScanData ¶
type LimitScanData struct {
    InputScan ScanData       `json:"input_scan,omitempty"` // The nested scan being limited
    Count     ExpressionData `json:"count,omitempty"`      // LIMIT expression
    Offset    ExpressionData `json:"offset,omitempty"`     // OFFSET expression (optional)
}
LimitScanData represents LIMIT/OFFSET scan operation data
type LimitScanTransformer ¶
type LimitScanTransformer struct {
// contains filtered or unexported fields
}
LimitScanTransformer handles LIMIT/OFFSET scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, LIMIT scans control the number of rows returned from a query, optionally with an OFFSET to skip rows. This corresponds to SQL's LIMIT and OFFSET clauses that restrict result set size for pagination and performance.
The transformer converts ZetaSQL LimitScan nodes into SQLite LIMIT clauses by: - Recursively transforming the input scan to get the data source - Transforming count and offset expressions through the coordinator - Wrapping the result in SELECT * FROM (...) LIMIT count OFFSET offset - Preserving the original column structure and availability
Both count and offset can be dynamic expressions (parameters, column references, etc.) rather than just literal numbers, requiring full expression transformation.
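A short sketch of the emitted shape, with an invented parameter name standing in for a non-literal count:

    // limitScanShape: the input scan is wrapped and the transformed count and
    // offset expressions are appended; here the count is a bound parameter.
    const limitScanShape = `
    SELECT *
    FROM (SELECT id, name FROM users)
    LIMIT @row_count OFFSET 5`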
func NewLimitScanTransformer ¶
func NewLimitScanTransformer(coordinator Coordinator) *LimitScanTransformer
NewLimitScanTransformer creates a new limit scan transformer
func (*LimitScanTransformer) Transform ¶
func (t *LimitScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts LimitScanData to FromItem with LIMIT clause
type LiteralData ¶
type LiteralData struct {
    Value    Value          `json:"value,omitempty"`     // Use zetasqlite Value which handles both Go literals and ZetaSQL values
    TypeName string         `json:"type_name,omitempty"` // String representation of type for reference
    Location *ParseLocation `json:"location,omitempty"`
}
LiteralData represents literal value data
type LiteralTransformer ¶
type LiteralTransformer struct { }
LiteralTransformer handles transformation of literal values from ZetaSQL to SQLite.
BigQuery/ZetaSQL supports rich literal types including complex values like STRUCT literals, ARRAY literals, and typed NULL values that don't have direct SQLite equivalents. Literals represent constant values in SQL expressions (numbers, strings, booleans, etc.).
The transformer converts ZetaSQL literal values by: - Encoding complex ZetaSQL literals into SQLite-compatible string representations - Preserving type information through the encoding process - Handling special values like typed NULL, NaN, and infinity - Using the LiteralFromValue function for consistent encoding
This ensures that complex BigQuery literal values can be properly represented and processed in the SQLite runtime environment while maintaining their semantic meaning.
func NewLiteralTransformer ¶
func NewLiteralTransformer() *LiteralTransformer
NewLiteralTransformer creates a new literal transformer
func (*LiteralTransformer) Transform ¶
func (t *LiteralTransformer) Transform(data ExpressionData, ctx TransformContext) (*SQLExpression, error)
Transform converts LiteralData to SQLExpression
type MergeData ¶
type MergeData struct { TargetTable string `json:"target_table,omitempty"` TargetScan *ScanData `json:"target_scan,omitempty"` SourceScan *ScanData `json:"source_scan,omitempty"` MergeExpr ExpressionData `json:"merge_expr,omitempty"` WhenClauses []*MergeWhenClauseData `json:"when_clauses,omitempty"` }
MergeData represents MERGE statement data
type MergeStatement ¶
type MergeStatement struct { TargetTable string SourceTable *FromItem MergeClause *SQLExpression WhenClauses []*MergeWhenClause }
func (*MergeStatement) String ¶
func (s *MergeStatement) String() string
func (*MergeStatement) WriteSql ¶
func (s *MergeStatement) WriteSql(writer *SQLWriter) error
MergeStatement WriteSql implementation
type MergeStmtAction ¶
type MergeStmtAction struct {
// contains filtered or unexported fields
}
func (*MergeStmtAction) Args ¶
func (a *MergeStmtAction) Args() []interface{}
func (*MergeStmtAction) Cleanup ¶
func (a *MergeStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*MergeStmtAction) ExecContext ¶
func (*MergeStmtAction) QueryContext ¶
type MergeStmtTransformer ¶
type MergeStmtTransformer struct {
// contains filtered or unexported fields
}
MergeStmtTransformer handles transformation of MERGE statement nodes from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, MERGE statements provide a way to conditionally INSERT, UPDATE, or DELETE rows based on whether they match between a target table and a source table/query. Since SQLite doesn't have native MERGE support, this transformer converts MERGE statements into a series of SQLite statements that achieve equivalent behavior.
The transformation strategy is: 1. Create a temporary table with a FULL OUTER JOIN of target and source tables 2. Generate conditional INSERT/UPDATE/DELETE statements based on WHEN clauses 3. Clean up the temporary table
This maintains the same semantics as the original visitor pattern implementation while integrating with the new transformer architecture.
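A sketch of the kind of statement sequence this strategy produces for a hypothetical MERGE of source into target on id; every identifier here, including the temporary table name, is illustrative rather than the transformer's literal output.

    // Rough statement sequence for the MERGE emulation strategy described above.
    var mergeEmulationShape = []string{
        // 1. Materialize matched/unmatched row pairs once.
        `CREATE TABLE tmp_merged AS
         SELECT t.id AS t_id, s.id AS s_id, s.value AS s_value
         FROM target t FULL OUTER JOIN source s ON t.id = s.id`,
        // 2. One conditional DML statement per WHEN clause.
        `UPDATE target
         SET value = (SELECT s_value FROM tmp_merged WHERE t_id = target.id)
         WHERE id IN (SELECT t_id FROM tmp_merged WHERE s_id IS NOT NULL)`,
        `INSERT INTO target (id, value)
         SELECT s_id, s_value FROM tmp_merged WHERE t_id IS NULL`,
        // 3. Clean up the temporary table.
        `DROP TABLE tmp_merged`,
    }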
func NewMergeStmtTransformer ¶
func NewMergeStmtTransformer(coordinator Coordinator) *MergeStmtTransformer
NewMergeStmtTransformer creates a new MERGE statement transformer
func (*MergeStmtTransformer) Transform ¶
func (t *MergeStmtTransformer) Transform(data StatementData, ctx TransformContext) (SQLFragment, error)
Transform converts MERGE statement data to a collection of SQL statements that simulate MERGE behavior
type MergeWhenClause ¶
type MergeWhenClause struct {
    Type      string // "MATCHED", "NOT MATCHED"
    Condition *SQLExpression
    Action    string // "UPDATE", "DELETE", "INSERT"
    SetList   []*SetItem
}
func (*MergeWhenClause) String ¶
func (c *MergeWhenClause) String() string
func (*MergeWhenClause) WriteSql ¶
func (c *MergeWhenClause) WriteSql(writer *SQLWriter) error
MergeWhenClause WriteSql implementation
type MergeWhenClauseData ¶
type MergeWhenClauseData struct {
    MatchType     ast.MatchType    `json:"match_type,omitempty"`     // MATCHED, NOT_MATCHED_BY_SOURCE, NOT_MATCHED_BY_TARGET
    Condition     *ExpressionData  `json:"condition,omitempty"`      // Optional condition
    ActionType    ast.ActionType   `json:"action_type,omitempty"`    // INSERT, UPDATE, DELETE
    InsertColumns []*ColumnData    `json:"insert_columns,omitempty"` // For INSERT actions
    InsertValues  []ExpressionData `json:"insert_values,omitempty"`  // For INSERT actions
    SetItems      []*SetItemData   `json:"set_items,omitempty"`      // For UPDATE actions
}
MergeWhenClauseData represents a WHEN clause in MERGE statements
type Month ¶
type Month string
const ( January Month = "January" February Month = "February" March Month = "March" April Month = "April" May Month = "May" June Month = "June" July Month = "July" August Month = "August" September Month = "September" October Month = "October" November Month = "November" December Month = "December" )
type NameAndFunc ¶
type NameWithType ¶
func (*NameWithType) FunctionArgumentType ¶
func (t *NameWithType) FunctionArgumentType() (*types.FunctionArgumentType, error)
type NodeExtractor ¶
type NodeExtractor struct {
// contains filtered or unexported fields
}
NodeExtractor is responsible for extracting pure data from AST nodes. This separates the concerns of AST traversal from data extraction.
func NewNodeExtractor ¶
func NewNodeExtractor() *NodeExtractor
NewNodeExtractor creates a new node extractor
func (*NodeExtractor) ExtractExpressionData ¶
func (e *NodeExtractor) ExtractExpressionData(node ast.Node, ctx TransformContext) (ExpressionData, error)
ExtractExpressionData extracts pure data from expression AST nodes
func (*NodeExtractor) ExtractScanData ¶
func (e *NodeExtractor) ExtractScanData(node ast.Node, ctx TransformContext) (ScanData, error)
ExtractScanData extracts pure data from scan AST nodes
func (*NodeExtractor) ExtractStatementData ¶
func (e *NodeExtractor) ExtractStatementData(node ast.Node, ctx TransformContext) (StatementData, error)
ExtractStatementData extracts pure data from statement AST nodes
func (*NodeExtractor) SetCoordinator ¶
func (e *NodeExtractor) SetCoordinator(coordinator Coordinator)
SetCoordinator sets the coordinator reference for recursive operations
type NodeID ¶
type NodeID string
NodeID represents a unique identifier for an AST node, based on its path in the AST
type NullStmtAction ¶
type NullStmtAction struct{}
func (*NullStmtAction) Args ¶
func (a *NullStmtAction) Args() []interface{}
func (*NullStmtAction) Cleanup ¶
func (a *NullStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*NullStmtAction) ExecContext ¶
func (*NullStmtAction) QueryContext ¶
type NumericValue ¶
func (*NumericValue) Format ¶
func (nv *NumericValue) Format(verb rune) string
func (*NumericValue) Interface ¶
func (nv *NumericValue) Interface() interface{}
func (*NumericValue) ToArray ¶
func (nv *NumericValue) ToArray() (*ArrayValue, error)
func (*NumericValue) ToBool ¶
func (nv *NumericValue) ToBool() (bool, error)
func (*NumericValue) ToBytes ¶
func (nv *NumericValue) ToBytes() ([]byte, error)
func (*NumericValue) ToFloat64 ¶
func (nv *NumericValue) ToFloat64() (float64, error)
func (*NumericValue) ToInt64 ¶
func (nv *NumericValue) ToInt64() (int64, error)
func (*NumericValue) ToJSON ¶
func (nv *NumericValue) ToJSON() (string, error)
func (*NumericValue) ToString ¶
func (nv *NumericValue) ToString() (string, error)
func (*NumericValue) ToStruct ¶
func (nv *NumericValue) ToStruct() (*StructValue, error)
type OrderByItem ¶
type OrderByItem struct {
    Expression *SQLExpression
    Direction  string // ASC, DESC
    NullsOrder string // NULLS FIRST, NULLS LAST
}
OrderByItem represents items in ORDER BY clause
func (*OrderByItem) String ¶
func (o *OrderByItem) String() string
func (*OrderByItem) WriteSql ¶
func (o *OrderByItem) WriteSql(writer *SQLWriter) error
type OrderByItemData ¶
type OrderByItemData struct { Expression ExpressionData `json:"expression,omitempty"` IsDescending bool `json:"is_descending,omitempty"` NullOrder ast.NullOrderMode `json:"null_order,omitempty"` }
OrderByItemData represents ORDER BY item data
type OrderByScanData ¶
type OrderByScanData struct { InputScan ScanData `json:"input_scan,omitempty"` OrderByColumns []*OrderByItemData `json:"order_by_columns,omitempty"` }
OrderByScanData represents ORDER BY operation data
type OrderByScanTransformer ¶
type OrderByScanTransformer struct {
// contains filtered or unexported fields
}
OrderByScanTransformer handles ORDER BY scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, ORDER BY scans sort result rows based on one or more expressions. This includes complex sorting semantics like NULLS FIRST/LAST, collation handling, and expressions that can reference columns, functions, or computed values.
The transformer converts ZetaSQL OrderByScan nodes into SQLite ORDER BY clauses by: - Recursively transforming the input scan to get the data source - Transforming each ORDER BY expression through the coordinator - Handling ZetaSQL's NULL ordering semantics (NULLS FIRST/LAST) via additional sort keys - Applying zetasqlite_collate for consistent string ordering behavior - Creating SELECT * FROM (...) ORDER BY structure for complex queries
ZetaSQL's NULL ordering is more sophisticated than SQLite's default behavior, requiring additional ORDER BY items to ensure consistent results.
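For example, an ORDER BY name DESC NULLS LAST over a string column could surface in SQLite roughly as below; this is a sketch, and the exact arguments passed to zetasqlite_collate are not shown.

    // orderByShape: an extra sort key implements the requested NULL ordering,
    // and string keys are routed through zetasqlite_collate.
    const orderByShape = `
    SELECT *
    FROM (SELECT name FROM users)
    ORDER BY (name IS NULL) ASC, zetasqlite_collate(name) DESC`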
func NewOrderByScanTransformer ¶
func NewOrderByScanTransformer(coordinator Coordinator) *OrderByScanTransformer
NewOrderByScanTransformer creates a new order by scan transformer
func (*OrderByScanTransformer) Transform ¶
func (t *OrderByScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts OrderByScanData to FromItem with ORDER BY clause
type OrderedValue ¶
type OrderedValue struct { OrderBy []*AggregateOrderBy Value Value }
type OutputColumnListProvider ¶
type OutputColumnListProvider interface {
OutputColumnList() []ast.OutputColumnNode
}
type ParameterData ¶
type ParameterData struct {
Identifier string `json:"identifier,omitempty"`
}
ParameterData represents a parameter binding value
type ParameterDefinition ¶
func (*ParameterDefinition) String ¶
func (p *ParameterDefinition) String() string
func (*ParameterDefinition) WriteSql ¶
func (p *ParameterDefinition) WriteSql(writer *SQLWriter) error
ParameterDefinition WriteSql implementation
type ParameterDefinitionData ¶
type ParameterDefinitionData struct { Name string `json:"name,omitempty"` Type string `json:"type,omitempty"` }
ParameterDefinitionData represents function parameter data
type ParameterTransformer ¶
type ParameterTransformer struct{}
ParameterTransformer handles transformation of parameters/argument identifiers from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, parameters represent named or positional placeholders in SQL queries that are substituted with actual values at execution time. These can be query parameters like @param_name (named) or ? (positional) that allow dynamic query execution.
The transformer converts ZetaSQL Parameter nodes by: - Extracting the parameter identifier (name or position) - Creating a literal SQLite expression with the identifier - Preserving the parameter reference for runtime substitution
This is the simplest transformer as it performs direct identifier mapping without complex transformation logic, but it's essential for parameterized query support.
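At the driver level, parameterized BigQuery SQL looks like the following; this sketch assumes the parent go-zetasqlite package has registered the "zetasqlite" driver name and that named parameters are bound with database/sql's sql.Named.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/goccy/go-zetasqlite"
    )

    func main() {
        db, err := sql.Open("zetasqlite", ":memory:")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // @min_len is a named BigQuery query parameter; the ParameterTransformer
        // keeps the identifier in the generated SQL so the driver can bind it.
        rows, err := db.Query(
            `SELECT word FROM UNNEST(['a', 'bb', 'ccc']) AS word WHERE LENGTH(word) >= @min_len`,
            sql.Named("min_len", int64(2)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var word string
            if err := rows.Scan(&word); err != nil {
                log.Fatal(err)
            }
            fmt.Println(word)
        }
    }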
func NewParameterTransformer ¶
func NewParameterTransformer() *ParameterTransformer
func (*ParameterTransformer) Transform ¶
func (t *ParameterTransformer) Transform(data ExpressionData, ctx TransformContext) (*SQLExpression, error)
Transform converts ParameterData to SQLExpression
type ParseLocation ¶
type ParseLocation struct { StartLine int `json:"start_line,omitempty"` StartColumn int `json:"start_column,omitempty"` EndLine int `json:"end_line,omitempty"` EndColumn int `json:"end_column,omitempty"` Filename string `json:"filename,omitempty"` }
ParseLocation represents source location information
type ProjectScanData ¶
type ProjectScanData struct { InputScan ScanData `json:"input_scan,omitempty"` ExprList []*ComputedColumnData `json:"expr_list,omitempty"` }
ProjectScanData represents projection operation data
type ProjectScanTransformer ¶
type ProjectScanTransformer struct {
// contains filtered or unexported fields
}
ProjectScanTransformer handles projection scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a ProjectScan represents the SQL SELECT list operation that applies projections (computed expressions) to columns from an input scan. This corresponds to the "SELECT <expr_list>" part of a SQL query where expressions can be: - Simple column references (pass-through columns) - Computed expressions (functions, arithmetic, etc.) - Mix of both
The transformer converts ZetaSQL ProjectScan nodes into SQLite SELECT statements with proper: - Column aliasing using ID-based naming for disambiguation - Expression transformation through the coordinator pattern - Fragment context management for column resolution - Recursive transformation of the input scan
Key challenges addressed: - Ensuring SELECT list is never empty (which causes SQLite syntax errors) - Expression dependency resolution through fragment context
func NewProjectScanTransformer ¶
func NewProjectScanTransformer(coordinator Coordinator) *ProjectScanTransformer
NewProjectScanTransformer creates a new project scan transformer
func (*ProjectScanTransformer) Transform ¶
func (t *ProjectScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts ProjectScanData to FromItem with SELECT statement
type QueryCoordinator ¶
type QueryCoordinator struct {
// contains filtered or unexported fields
}
QueryCoordinator orchestrates the transformation process by delegating to appropriate transformers
func NewQueryCoordinator ¶
func NewQueryCoordinator(extractor *NodeExtractor) *QueryCoordinator
NewQueryCoordinator creates a new coordinator with default transformers
func (*QueryCoordinator) GetRegisteredExpressionTypes ¶
func (c *QueryCoordinator) GetRegisteredExpressionTypes() []string
GetRegisteredExpressionTypes returns the types of registered expression transformers
func (*QueryCoordinator) GetRegisteredScanTypes ¶
func (c *QueryCoordinator) GetRegisteredScanTypes() []string
GetRegisteredScanTypes returns the types of registered scan transformers
func (*QueryCoordinator) GetRegisteredStatementTypes ¶
func (c *QueryCoordinator) GetRegisteredStatementTypes() []string
GetRegisteredStatementTypes returns the types of registered statement transformers
func (*QueryCoordinator) RegisterExpressionTransformer ¶
func (c *QueryCoordinator) RegisterExpressionTransformer(nodeType reflect.Type, transformer ExpressionTransformer)
RegisterExpressionTransformer registers a transformer for a specific expression node type
func (*QueryCoordinator) RegisterScanTransformer ¶
func (c *QueryCoordinator) RegisterScanTransformer(nodeType reflect.Type, transformer ScanTransformer)
RegisterScanTransformer registers a transformer for a specific scan node type
func (*QueryCoordinator) RegisterStatementTransformer ¶
func (c *QueryCoordinator) RegisterStatementTransformer(nodeType reflect.Type, transformer StatementTransformer)
RegisterStatementTransformer registers a transformer for a specific statement node type
func (*QueryCoordinator) TransformExpression ¶
func (c *QueryCoordinator) TransformExpression(exprData ExpressionData, ctx TransformContext) (*SQLExpression, error)
TransformExpression transforms expression data to SQLExpression
func (*QueryCoordinator) TransformExpressionDataList ¶
func (c *QueryCoordinator) TransformExpressionDataList(exprDataList []ExpressionData, ctx TransformContext) ([]*SQLExpression, error)
TransformExpressionDataList transforms a list of expression data
func (*QueryCoordinator) TransformOptionalExpressionData ¶
func (c *QueryCoordinator) TransformOptionalExpressionData(exprData *ExpressionData, ctx TransformContext) (*SQLExpression, error)
TransformOptionalExpressionData transforms optional expression data
func (*QueryCoordinator) TransformScan ¶
func (c *QueryCoordinator) TransformScan(scanData ScanData, ctx TransformContext) (*FromItem, error)
TransformScan transforms scan data to FromItem
func (*QueryCoordinator) TransformStatement ¶
func (c *QueryCoordinator) TransformStatement(stmtData StatementData, ctx TransformContext) (SQLFragment, error)
TransformStatement transforms statement data to SQLFragment
func (*QueryCoordinator) TransformStatementNode ¶
func (c *QueryCoordinator) TransformStatementNode(node ast.Node, ctx TransformContext) (SQLFragment, error)
TransformStatementNode transforms a statement AST node to SQLFragment
func (*QueryCoordinator) TransformWithEntry ¶
func (c *QueryCoordinator) TransformWithEntry(scanData ScanData, ctx TransformContext) (*WithClause, error)
TransformWithEntry transforms WITH entry data to WithClause
type QueryStmt ¶
type QueryStmt struct {
// contains filtered or unexported fields
}
func (*QueryStmt) CheckNamedValue ¶
func (s *QueryStmt) CheckNamedValue(value *driver.NamedValue) error
func (*QueryStmt) ExecContext ¶
func (*QueryStmt) OutputColumns ¶
func (s *QueryStmt) OutputColumns() []*ColumnSpec
func (*QueryStmt) QueryContext ¶
type QueryStmtAction ¶
type QueryStmtAction struct {
// contains filtered or unexported fields
}
func (*QueryStmtAction) Args ¶
func (a *QueryStmtAction) Args() []interface{}
func (*QueryStmtAction) Cleanup ¶
func (a *QueryStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*QueryStmtAction) ExecContext ¶
func (*QueryStmtAction) ExplainQueryPlan ¶
func (a *QueryStmtAction) ExplainQueryPlan(ctx context.Context, conn *Conn) error
func (*QueryStmtAction) QueryContext ¶
type QueryStmtTransformer ¶
type QueryStmtTransformer struct {
// contains filtered or unexported fields
}
QueryStmtTransformer handles transformation of QueryStmt nodes from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a QueryStmt represents the outermost SELECT statement in a query, containing the final SELECT list that defines the output columns and their aliases. This is the top-level entry point for transforming complete SQL queries.
The transformer converts ZetaSQL QueryStmt nodes by: - Recursively transforming the main query scan (FROM clause) through the coordinator - Transforming each output column expression in the SELECT list - Preserving column aliases as specified in the original query - Creating the final SelectStatement structure for SQL generation
This transformer bridges the gap between ZetaSQL's resolved AST structure and the SQLite SELECT statement representation, ensuring all query components are properly transformed and integrated.
func NewQueryStmtTransformer ¶
func NewQueryStmtTransformer(coordinator Coordinator) *QueryStmtTransformer
NewQueryStmtTransformer creates a new query statement transformer
func (*QueryStmtTransformer) Transform ¶
func (t *QueryStmtTransformer) Transform(data StatementData, ctx TransformContext) (SQLFragment, error)
Transform converts QueryStmt data to a SelectStatement. This mirrors the logic from the existing VisitQuery method.
type QueryTransformFactory ¶
type QueryTransformFactory struct {
// contains filtered or unexported fields
}
QueryTransformFactory creates and configures the complete transformation pipeline
func NewQueryTransformFactory ¶
func NewQueryTransformFactory(config *TransformConfig) *QueryTransformFactory
NewQueryTransformFactory creates a new factory with the given configuration
func (*QueryTransformFactory) CreateCoordinator ¶
func (f *QueryTransformFactory) CreateCoordinator() Coordinator
CreateCoordinator creates a fully configured coordinator with all transformers registered
func (*QueryTransformFactory) CreateTransformContext ¶
func (f *QueryTransformFactory) CreateTransformContext(ctx context.Context) TransformContext
CreateTransformContext creates a transform context with the factory's configuration
func (*QueryTransformFactory) GetRegisteredTransformers ¶
func (f *QueryTransformFactory) GetRegisteredTransformers() map[string][]string
GetRegisteredTransformers returns information about registered transformers
func (*QueryTransformFactory) TransformQuery ¶
func (f *QueryTransformFactory) TransformQuery(ctx context.Context, queryNode ast.Node) (*TransformResult, error)
TransformQuery is a convenience method that transforms a complete query
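Putting the pieces together from inside the package, a minimal sketch of the factory-driven pipeline; the helper name, the zero-value TransformConfig, and the already-resolved queryNode are assumptions for illustration.

    // Sketch from inside package internal; it relies only on the documented
    // factory API. queryNode is a resolved go-zetasql ast.Node obtained elsewhere.
    func transformQuerySketch(ctx context.Context, queryNode ast.Node) (*TransformResult, error) {
        factory := NewQueryTransformFactory(&TransformConfig{}) // placeholder config
        return factory.TransformQuery(ctx, queryNode)
    }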
type ResolvedAggregateScan ¶
type ResolvedAggregateScan struct{}
ResolvedAggregateScan represents GROUP BY aggregation.
Column Behavior: RESTRICTS to GROUP BY + aggregate columns - column_list contains grouping columns + aggregate result columns - Input columns not in GROUP BY become unavailable - Aggregate functions create new columns - Implements HAVING clause filtering after grouping
Example:
AggregateScan(
    column_list=[dept#5, avg_sal#6],
    groupby_list=[dept#2],
    aggregate_list=[Avg(salary#3) AS avg_sal#6]
)
-> Groups by: dept#2
-> Produces: dept#5 (grouped), avg_sal#6 (aggregate)
type ResolvedAnalyticScan ¶
type ResolvedAnalyticScan struct{}
ResolvedAnalyticScan represents window functions.
Column Behavior: RESTRICTS to input + window function columns - Adds window function result columns to input column set - Window functions computed over partitions/ordering - OVER clause defines computation window - Input columns preserved plus analytic results
Example:
AnalyticScan(
    column_list=[name#1, salary#2, rank#5],
    input_scan=TableScan(column_list=[name#1, salary#2]),
    function_group_list=[
        RowNumber() OVER (ORDER BY salary#2 DESC) AS rank#5
    ]
)
-> Input: name#1, salary#2
-> Adds: rank#5 (window function result)
type ResolvedArrayScan ¶
type ResolvedArrayScan struct{}
ResolvedArrayScan represents UNNEST array operations.
Column Behavior: ADDS element + offset columns to input scope - element_column_list are new columns storing array element values - array_offset_column stores 0-based array position (optional) - column_list includes input_scan columns + element + offset columns - Creates CROSS JOIN UNNEST pattern in SQL generation
Example:
ArrayScan(
    column_list=[users.name#1, tag#2, pos#3],
    input_scan=TableScan(users),
    element_column_list=[tag#2],
    array_offset_column=pos#3
)
-> Input: users.name#1
-> Adds: tag#2 (array element), pos#3 (array position)
type ResolvedBarrierScan ¶
type ResolvedBarrierScan struct{}
ResolvedBarrierScan represents optimization barriers.
Column Behavior: PRESERVES with optimization boundary - column_list identical to input_scan column_list - Prevents certain query optimizations across the boundary - Used by the query optimizer to control transformation scope - Transparent to column scope but affects query planning
Example:
BarrierScan(
    column_list=[id#1, name#2],  // Same as input
    input_scan=FilterScan(column_list=[id#1, name#2])
)
-> Preserves: id#1, name#2 (blocks optimization passes)
type ResolvedCloneScan ¶
type ResolvedCloneScan struct{}
ResolvedCloneScan represents table cloning operations.
Column Behavior: PRESERVES source table columns - Used in CREATE TABLE ... CLONE operations - column_list matches source table exactly - Preserves column names, types, and ordering - Creates new table with identical structure
Example:
CloneScan(
    column_list=[id#1, name#2, created_at#3],  // Same as source
    source_table=Table("users")
)
-> Clones: id#1, name#2, created_at#3 (identical to source)
type ResolvedExecuteAsRoleScan ¶
type ResolvedExecuteAsRoleScan struct{}
ResolvedExecuteAsRoleScan represents role context wrapper.
Column Behavior: PRESERVES input columns (with new IDs) - Creates new output columns that map 1:1 with input columns - Column types and names preserved but get new unique IDs - Establishes security/role context boundary - Makes this node a tracing boundary for rewriters
Example:
ExecuteAsRoleScan(
    column_list=[id#5, name#6],  // New IDs, same structure
    input_scan=TableScan(column_list=[id#1, name#2]),
    delegated_user_catalog_object=Role("analyst")
)
-> Input: id#1, name#2
-> Output: id#5, name#6 (new IDs, same data/types)
type ResolvedFilterScan ¶
type ResolvedFilterScan struct{}
ResolvedFilterScan represents WHERE clause filtering.
Column Behavior: PRESERVES input columns exactly - column_list identical to input_scan column_list - WHERE condition can reference any input column - Filters rows but doesn't change column structure - Most common passthrough scan type
Example:
FilterScan(
    column_list=[id#1, name#2, salary#3],  // Same as input
    input_scan=TableScan(column_list=[id#1, name#2, salary#3]),
    filter_expr=Greater(salary#3, Literal(50000))
)
-> Input: id#1, name#2, salary#3
-> Output: id#1, name#2, salary#3 (same columns, fewer rows)
type ResolvedGroupRowsScan ¶
type ResolvedGroupRowsScan struct{}
ResolvedGroupRowsScan represents GROUP_ROWS() aggregation.
Column Behavior: AGGREGATES into array columns - Special aggregation that collects entire rows into arrays - Similar to GROUP BY but preserves row structure - Creates array-valued columns containing grouped rows - Used in advanced analytics and data processing
Example:
GroupRowsScan(
    column_list=[dept#5, employee_rows#6],
    input_scan=TableScan(column_list=[dept#1, name#2, salary#3]),
    groupby_list=[dept#1]
)
-> Groups by: dept#1
-> Produces: dept#5, employee_rows#6 (ARRAY<STRUCT<name, salary>>)
type ResolvedJoinScan ¶
type ResolvedJoinScan struct{}
ResolvedJoinScan represents JOIN operations.
Column Behavior: COMBINES left + right columns - column_list contains columns from both input scans - Left input columns appear first, then right input columns - JOIN condition can reference columns from both sides - USING clause may affect column deduplication - Different join types (INNER, LEFT, RIGHT, FULL, CROSS) affect row filtering
Example:
JoinScan(
    join_type=INNER,
    column_list=[u.id#1, u.name#2, p.title#3, p.user_id#4],
    left_scan=TableScan(users, column_list=[u.id#1, u.name#2]),
    right_scan=TableScan(profiles, column_list=[p.title#3, p.user_id#4]),
    join_condition=Equal(u.id#1, p.user_id#4)
)
-> Left: u.id#1, u.name#2
-> Right: p.title#3, p.user_id#4
-> Combined: u.id#1, u.name#2, p.title#3, p.user_id#4
type ResolvedLimitOffsetScan ¶
type ResolvedLimitOffsetScan struct{}
ResolvedLimitOffsetScan represents LIMIT/OFFSET clause.
Column Behavior: PRESERVES input columns exactly - column_list identical to input_scan column_list - LIMIT/OFFSET values must be non-negative integer literals or parameters - Restricts row count but doesn't change column structure - Preserves ordering from input scan
Example:
LimitOffsetScan(
    column_list=[name#1, salary#2],  // Same as input
    input_scan=OrderedScan(column_list=[name#1, salary#2]),
    limit=Literal(10),
    offset=Literal(5)
)
-> Preserves: name#1, salary#2 (same columns, limited rows)
type ResolvedMatchRecognizeScan ¶
type ResolvedMatchRecognizeScan struct{}
ResolvedMatchRecognizeScan represents MATCH_RECOGNIZE clause.
Column Behavior: COMPLEX pattern matching columns - Implements SQL MATCH_RECOGNIZE for pattern detection in ordered data - Input columns available for pattern matching expressions - Adds pattern matching result columns - PARTITION BY and ORDER BY clauses define matching scope - Pattern variables create complex column dependencies
Example:
MatchRecognizeScan(
    column_list=[symbol#1, price#2, match_id#5, start_row#6],
    input_scan=TableScan(column_list=[symbol#1, price#2, date#3]),
    partition_by=[symbol#1],
    pattern="STRT DOWN+ UP+",
    measures=[match_id#5, start_row#6]
)
-> Input: symbol#1, price#2, date#3
-> Adds: match_id#5, start_row#6 (pattern results)
type ResolvedOrderByScan ¶
type ResolvedOrderByScan struct{}
ResolvedOrderByScan represents ORDER BY clause.
Column Behavior: PRESERVES input columns exactly - column_list identical to input_scan column_list - ORDER BY expressions can reference any input column - Changes row ordering but not column structure - Sets is_ordered=true for parent scans
Example:
OrderByScan(
    column_list=[name#1, salary#2],  // Same as input
    input_scan=TableScan(column_list=[name#1, salary#2]),
    order_by_list=[OrderByItem(salary#2, DESC)]
)
-> Preserves: name#1, salary#2 (same columns, sorted rows)
type ResolvedPivotScan ¶
type ResolvedPivotScan struct{}
ResolvedPivotScan represents PIVOT operations.
Column Behavior: TRANSFORMS rows to columns - Takes input rows and converts to columns based on pivot values - Grouping columns preserved in output - Pivot values become new column names - Aggregate values populate the pivoted columns - Complex column transformation from vertical to horizontal layout
Example:
PivotScan(
    column_list=[product#5, Q1_sales#6, Q2_sales#7],
    input_scan=TableScan(column_list=[product#1, quarter#2, sales#3]),
    pivot_expr_list=[quarter#2],
    pivot_value_list=["Q1", "Q2"],
    aggregate_list=[Sum(sales#3)]
)
-> Input rows: (product, quarter, sales)
-> Output columns: (product, Q1_sales, Q2_sales)
type ResolvedProjectScan ¶
type ResolvedProjectScan struct{}
ResolvedProjectScan represents SELECT list projection.
Column Behavior: RESTRICTS to only projected columns - Most important scope filter in SQL queries - column_list contains ONLY projected/computed columns - Input columns available for expressions but not passed through - Each expr in expr_list creates new output column - Implements column aliasing and computed expressions
Example:
ProjectScan(
    column_list=[name#5, total#6],
    input_scan=TableScan(column_list=[id#1, name#2, salary#3, bonus#4]),
    expr_list=[
        ComputedColumn(column=name#5, expr=ColumnRef(name#2)),
        ComputedColumn(column=total#6, expr=Add(salary#3, bonus#4))
    ]
)
-> Input available: id#1, name#2, salary#3, bonus#4
-> Output restricted to: name#5, total#6
type ResolvedRecursiveScan ¶
type ResolvedRecursiveScan struct{}
ResolvedRecursiveScan represents recursive CTEs.
Column Behavior: COMBINES non-recursive + recursive parts - Implements WITH RECURSIVE clause functionality - non_recursive_term establishes initial result set - recursive_term references the CTE being defined - column_list combines both parts with consistent schema - Requires union-compatible column types
Example:
RecursiveScan(
    column_list=[id#5, parent_id#6, level#7],
    non_recursive_term=BaseQuery(column_list=[id#1, parent_id#2, level#3]),
    recursive_term=RecursiveQuery(column_list=[id#1, parent_id#2, level#4])
)
-> Combines recursive and non-recursive parts iteratively
type ResolvedRelationArgumentScan ¶
type ResolvedRelationArgumentScan struct{}
ResolvedRelationArgumentScan represents function relation arguments.
Column Behavior: PRODUCES argument table columns - Used in TVF calls that accept table arguments - Passes through columns from argument table - Maintains column identity from source relation
type ResolvedSampleScan ¶
type ResolvedSampleScan struct{}
ResolvedSampleScan represents TABLESAMPLE clause.
Column Behavior: PRESERVES input columns exactly - column_list identical to input_scan column_list - Adds optional weight_column for sampling weights - Supports BERNOULLI and RESERVOIR sampling methods - May include REPEATABLE clause for deterministic sampling
Example:
SampleScan(
    column_list=[id#1, name#2],  // Same as input
    input_scan=TableScan(column_list=[id#1, name#2]),
    method="BERNOULLI",
    size=Literal(10.5),
    unit=PERCENT
)
-> Preserves: id#1, name#2 (same columns, sampled rows)
type ResolvedSetOperationScan ¶
type ResolvedSetOperationScan struct{}
ResolvedSetOperationScan represents UNION/INTERSECT/EXCEPT operations.
Column Behavior: RESTRICTS to common column structure - Combines multiple input queries with compatible schemas - Output columns aligned positionally across inputs - Column types must be compatible/coercible - Column names taken from first (left) input
Example:
SetOperationScan(
    op_type=UNION_ALL,
    column_list=[name#7, count#8],
    input_list=[
        Query1(column_list=[emp_name#1, emp_count#2]),
        Query2(column_list=[cust_name#3, cust_count#4])
    ]
)
-> Aligns: emp_name#1 ↔ cust_name#3 → name#7
-> Aligns: emp_count#2 ↔ cust_count#4 → count#8
type ResolvedSingleRowScan ¶
type ResolvedSingleRowScan struct{}
ResolvedSingleRowScan represents single row generators.
Column Behavior: PRODUCES empty row (no columns) - Used for queries without FROM clause (SELECT 1) - Creates single row with no columns for expression evaluation - Provides execution context for scalar expressions
Example:
SingleRowScan(column_list=[]) -> Produces: (empty - single row, no columns)
type ResolvedSubqueryScan ¶
type ResolvedSubqueryScan struct{}
ResolvedSubqueryScan represents subquery wrappers.
Column Behavior: ISOLATES subquery scope - Creates scope boundary between inner and outer queries - Subquery has completely independent column scope - Only subquery output columns visible to parent - Used in FROM clause subqueries and table expressions
Example:
SubqueryScan(
    column_list=[avg_sal#5],
    subquery=ProjectScan(
        column_list=[avg_sal#3],
        input_scan=AggregateScan(...)
    )
)
-> Subquery scope isolated during evaluation
-> Only subquery output (avg_sal#5) available to parent
type ResolvedTVFScan ¶
type ResolvedTVFScan struct{}
ResolvedTVFScan represents table-valued function calls.
Column Behavior: PRODUCES function output columns - Output columns defined by TVF signature - May have parameters from outer scope (correlated) - Creates new column scope independent of input tables
Example:
TVFScan(column_list=[result#1, count#2], function=GenerateArray(1, 10)) -> Produces: result#1, count#2
type ResolvedTableScan ¶
type ResolvedTableScan struct{}
ResolvedTableScan represents base table access operations.
Column Behavior: PRODUCES table columns from schema - Reads columns from table definition in catalog - column_index_list matches 1:1 with the column_list - Identifies ordinal of corresponding column in table's column list - Creates foundation columns that flow upward through AST
Example:
TableScan(column_list=[users.id#1, users.name#2], table=users) -> Produces: users.id#1, users.name#2
type ResolvedUnpivotScan ¶
type ResolvedUnpivotScan struct{}
ResolvedUnpivotScan represents UNPIVOT operations.
Column Behavior: TRANSFORMS columns to rows - Takes input columns and converts to rows - Creates value column containing unpivoted data - Creates name column containing original column names - Preserves non-unpivoted columns - Inverse operation of PIVOT
Example:
UnpivotScan(
    column_list=[product#5, quarter#6, sales#7],
    input_scan=TableScan(column_list=[product#1, Q1_sales#2, Q2_sales#3]),
    unpivot_value_columns=[Q1_sales#2, Q2_sales#3],
    unpivot_name_column=quarter#6,
    unpivot_value_column=sales#7
)
-> Input columns: (product, Q1_sales, Q2_sales)
-> Output rows: (product, quarter, sales)
type ResolvedValueTableScan ¶
type ResolvedValueTableScan struct{}
ResolvedValueTableScan represents value table access.
Column Behavior: PRODUCES single anonymous column - Value tables have single unnamed column containing structured data - Column represents entire row value (STRUCT, PROTO, etc.) - Used with AS STRUCT/AS VALUE table patterns
type ResolvedWithRefScan ¶
type ResolvedWithRefScan struct{}
ResolvedWithRefScan represents CTE references.
Column Behavior: MAPS CTE columns to new IDs (1:1) - References previously defined CTE by name - column_list matches 1:1 with referenced CTE output - Each column gets new unique ID but preserves type/name - Enables CTE reuse in multiple locations
Example:
WithRefScan(
    with_query_name="customer_stats",
    column_list=[name#5, total#6]  // New IDs
)
-> References CTE with column_list=[name#3, total#4]
-> Maps: name#3 → name#5, total#4 → total#6
type ResolvedWithScan ¶
type ResolvedWithScan struct{}
ResolvedWithScan represents CTE definitions.
Column Behavior: ISOLATES CTE scopes, EXPOSES CTE outputs - with_entry_list defines multiple CTEs with isolated scopes - Each CTE has independent column scope during definition - CTE output columns become available to the main query - CTE aliases are unique within the query scope - Supports both recursive and non-recursive CTEs
Example:
WithScan(
    column_list=[name#7, total_orders#8],  // Main query output
    with_entry_list=[
        WithEntry(
            name="customer_stats",
            query=AggregateScan(column_list=[name#3, total#4])
        )
    ],
    query=ProjectScan(
        input_scan=WithRefScan("customer_stats", column_list=[name#5, total#6])
    )
)
-> CTE "customer_stats" isolated during definition
-> CTE output becomes available as WithRefScan input
type Result ¶
type Result struct {
// contains filtered or unexported fields
}
func (*Result) ChangedCatalog ¶
func (r *Result) ChangedCatalog() *ChangedCatalog
func (*Result) LastInsertId ¶
func (*Result) RowsAffected ¶
type Rows ¶
type Rows struct {
// contains filtered or unexported fields
}
func (*Rows) ChangedCatalog ¶
func (r *Rows) ChangedCatalog() *ChangedCatalog
func (*Rows) ColumnTypeDatabaseTypeName ¶
func (*Rows) SetActions ¶
func (r *Rows) SetActions(actions []StmtAction)
type SQLBuilderVisitor ¶
type SQLBuilderVisitor struct {
// contains filtered or unexported fields
}
func NewSQLBuilderVisitor ¶
func NewSQLBuilderVisitor(ctx context.Context) *SQLBuilderVisitor
func (*SQLBuilderVisitor) VisitAggregateFunctionCallNode ¶
func (v *SQLBuilderVisitor) VisitAggregateFunctionCallNode(node *ast.AggregateFunctionCallNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitAggregateScanNode ¶
func (v *SQLBuilderVisitor) VisitAggregateScanNode(node *ast.AggregateScanNode) (SQLFragment, error)
VisitAggregateScanNode handles ZetaSQL GROUP BY operations and aggregate functions. It processes both simple GROUP BY queries and complex GROUPING SETS operations.
The function: 1. Visits the input scan to get the base data source 2. Processes all aggregate expressions and makes them available in context 3. Processes GROUP BY columns and wraps them with zetasqlite_group_by function 4. Builds the output column list matching ZetaSQL semantics 5. Delegates to buildGroupingSetsQuery for GROUPING SETS or creates simple GROUP BY
Returns a SelectStatement with the aggregate operation and proper grouping.
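As a rough rendering of steps 2 and 3 above, with invented tables and columns; AVG stands in for whatever aggregate implementation is registered, and the grouping helper's argument list is abbreviated.

    // aggregateScanShape: the input scan becomes a derived table, aggregate
    // expressions land in the SELECT list, and each grouping column is wrapped
    // with the zetasqlite_group_by helper.
    const aggregateScanShape = `
    SELECT dept, AVG(salary)
    FROM (SELECT dept, salary FROM employees)
    GROUP BY zetasqlite_group_by(dept)`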
func (*SQLBuilderVisitor) VisitAnalyticFunctionCallNode ¶
func (v *SQLBuilderVisitor) VisitAnalyticFunctionCallNode(node *ast.AnalyticFunctionCallNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitAnalyticFunctionGroupNode ¶
func (v *SQLBuilderVisitor) VisitAnalyticFunctionGroupNode(node *ast.AnalyticFunctionGroupNode) ([]*SelectListItem, error)
func (*SQLBuilderVisitor) VisitAnalyticScanNode ¶
func (v *SQLBuilderVisitor) VisitAnalyticScanNode(node *ast.AnalyticScanNode) (SQLFragment, error)
VisitAnalyticScanNode handles ZetaSQL window function operations. It processes analytic functions (window functions) and combines them with input columns.
The function: 1. Visits the input scan to get the base data source 2. Processes all analytic function groups and stores them in context 3. Builds a SELECT list combining input columns and analytic results 4. Uses the fragment context to resolve column expressions properly
Analytic functions add computed columns based on window specifications while preserving all input columns.
Returns a SelectStatement with window functions in the SELECT list.
func (*SQLBuilderVisitor) VisitArgumentRefNode ¶
func (v *SQLBuilderVisitor) VisitArgumentRefNode(node *ast.ArgumentRefNode) (SQLFragment, error)
VisitArgumentRefNode converts ZetaSQL function argument references into SQLite parameter syntax. This is used in the context of user-defined functions where arguments are referenced by name.
Returns a LiteralExpression containing the named parameter reference ("@argument_name").
func (*SQLBuilderVisitor) VisitArrayScan ¶
func (v *SQLBuilderVisitor) VisitArrayScan(node *ast.ArrayScanNode) (SQLFragment, error)
VisitArrayScan implements ZetaSQL UNNEST functionality using SQLite's json_each table function. This converts array operations into table-valued functions that can be joined with other tables.
The function: 1. Processes the input scan if present (for correlated arrays) 2. Converts the array expression using zetasqlite_decode_array 3. Uses json_each to unnest the array into rows 4. Maps 'value' column to array elements and 'key' to array indices 5. Handles OUTER joins for optional array elements
Returns a SelectStatement with the unnest operation, potentially joined with input.
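A sketch of the core pattern, leaving out the join back to the input scan and the final column aliasing; tags is an invented array-typed expression.

    // arrayScanShape: the array expression is decoded with
    // zetasqlite_decode_array and unnested via SQLite's json_each;
    // json_each.value carries the element and json_each.key the offset.
    const arrayScanShape = `
    SELECT json_each.value, json_each.key
    FROM json_each(zetasqlite_decode_array(tags))`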
func (*SQLBuilderVisitor) VisitCastNode ¶
func (v *SQLBuilderVisitor) VisitCastNode(node *ast.CastNode) (SQLFragment, error)
VisitCastNode handles ZetaSQL type casting operations using the zetasqlite_cast function. It converts type casts by encoding both source and target type information as JSON.
The function creates a call to zetasqlite_cast with the following arguments: 1. Expression to cast 2. JSON-encoded source type information 3. JSON-encoded target type information 4. Boolean flag indicating whether to return NULL on cast errors
Returns a FunctionCall expression for the type cast operation.
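A compact sketch of the generated call shape:

    // castShape: the four arguments described above, in order; the JSON type
    // payloads are abbreviated placeholders rather than the real encoding.
    const castShape = `zetasqlite_cast(expr, '<from-type JSON>', '<to-type JSON>', false)`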
func (*SQLBuilderVisitor) VisitColumnRefNode ¶
func (v *SQLBuilderVisitor) VisitColumnRefNode(node *ast.ColumnRefNode) (SQLFragment, error)
VisitColumnRefNode handles column references by looking up the column expression from the fragment context. This allows columns to be properly qualified with table aliases and resolved to their source expressions.
Returns the column expression from the fragment context, which may be a simple column reference or a more complex expression depending on the column's origin.
func (*SQLBuilderVisitor) VisitComputedColumnNode ¶
func (v *SQLBuilderVisitor) VisitComputedColumnNode(node *ast.ComputedColumnNode) (SQLFragment, error)
VisitComputedColumnNode handles computed columns by visiting their underlying expressions and assigning the column's name as an alias.
Computed columns represent expressions that are calculated and given a column name.
Returns the computed expression with the column name set as its alias.
func (*SQLBuilderVisitor) VisitCreateFunctionStmt ¶
func (v *SQLBuilderVisitor) VisitCreateFunctionStmt(node *ast.CreateFunctionStmtNode) (SQLFragment, error)
VisitCreateFunctionStmt for CreateFunctionStmtNode processes CREATE FUNCTION statements
func (*SQLBuilderVisitor) VisitCreateTableAsSelectStmt ¶
func (v *SQLBuilderVisitor) VisitCreateTableAsSelectStmt(node *ast.CreateTableAsSelectStmtNode) (SQLFragment, error)
VisitCreateTableAsSelectStmt for CreateTableAsSelectStmtNode processes CREATE TABLE AS SELECT statements
func (*SQLBuilderVisitor) VisitCreateViewStatement ¶
func (v *SQLBuilderVisitor) VisitCreateViewStatement(node *ast.CreateViewStmtNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitDMLDefaultNode ¶
func (v *SQLBuilderVisitor) VisitDMLDefaultNode(node *ast.DMLDefaultNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitDMLStatement ¶
func (v *SQLBuilderVisitor) VisitDMLStatement(node ast.Node) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitDMLValueNode ¶
func (v *SQLBuilderVisitor) VisitDMLValueNode(node *ast.DMLValueNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitDeleteStatement ¶
func (v *SQLBuilderVisitor) VisitDeleteStatement(node *ast.DeleteStmtNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitDropFunctionStmt ¶
func (v *SQLBuilderVisitor) VisitDropFunctionStmt(node *ast.DropFunctionStmtNode) (SQLFragment, error)
VisitDropFunctionStmt processes DROP FUNCTION statements for DropFunctionStmtNode.
func (*SQLBuilderVisitor) VisitDropStmt ¶
func (v *SQLBuilderVisitor) VisitDropStmt(node *ast.DropStmtNode) (SQLFragment, error)
VisitDropStmt processes DROP statements for DropStmtNode.
func (*SQLBuilderVisitor) VisitExpression ¶
func (v *SQLBuilderVisitor) VisitExpression(expr ast.Node) (SQLFragment, error)
VisitExpression is the central dispatcher that routes different ZetaSQL expression types to their specific handlers. It converts ZetaSQL AST expressions into SQLite-compatible SQL fragments by using the visitor pattern.
Supported expression types include literals, structs, function calls, casts, column references, subqueries, aggregate functions, and parameters.
Returns a SQLFragment representing the converted expression, or an error if the expression type is unsupported or conversion fails.
func (*SQLBuilderVisitor) VisitFilterScanNode ¶
func (v *SQLBuilderVisitor) VisitFilterScanNode(node *ast.FilterScanNode) (SQLFragment, error)
VisitFilterScanNode handles ZetaSQL filter operations by adding WHERE clauses. It wraps the input scan with a SELECT statement that includes the filter condition.
The function: 1. Visits the input scan to get the base data source 2. Visits the filter expression to get the WHERE condition 3. Creates a SELECT * statement with the WHERE clause applied
This preserves all columns from the input while applying the filter.
Returns a SelectStatement with the WHERE clause.
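A minimal sketch of the resulting shape, using the constructors documented below; the column names are hypothetical and the FROM item that comes from the input scan is omitted.

    // filterSketch approximates the SELECT * ... WHERE form described above.
    func filterSketch() string {
        stmt := NewSelectStatement()
        stmt.SelectList = []*SelectListItem{{Expression: NewStarExpression()}}
        stmt.WhereClause = NewBinaryExpression(
            NewColumnExpression("age"), ">=", NewLiteralExpression("18"))
        return stmt.String() // roughly SELECT * ... WHERE age >= 18, with the FROM item omitted here
    }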
func (*SQLBuilderVisitor) VisitFunctionCallNode ¶
func (v *SQLBuilderVisitor) VisitFunctionCallNode(node *ast.FunctionCallNode) (SQLFragment, error)
VisitFunctionCallNode handles ZetaSQL function calls, with special handling for control flow functions. It converts function calls to SQLite-compatible syntax, transforming certain functions into CASE expressions.
Special transformations: - zetasqlite_ifnull → CASE expression with NULL check - zetasqlite_if → CASE expression with condition - zetasqlite_case_no_value → CASE expression without value comparison - zetasqlite_case_with_value → CASE expression with value comparison
For other functions, it checks the context function map for custom implementations, falling back to standard function call syntax.
Returns a SQLFragment representing the function call or converted CASE expression.
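As a sketch of the zetasqlite_if rewrite, the searched-CASE form can be built directly with the constructors documented below; the argument expressions are assumed to come from the visited function arguments.

    // ifToCaseSketch mirrors IF(cond, a, b) → CASE WHEN cond THEN a ELSE b END.
    func ifToCaseSketch(cond, thenExpr, elseExpr *SQLExpression) *SQLExpression {
        when := NewWhenClause(cond, thenExpr)
        return NewCaseExpression([]*WhenClause{when}, elseExpr)
    }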
func (*SQLBuilderVisitor) VisitGetJsonFieldNode ¶
func (v *SQLBuilderVisitor) VisitGetJsonFieldNode(node *ast.GetJsonFieldNode) (SQLFragment, error)
VisitGetJsonFieldNode extracts fields from JSON objects using the zetasqlite_get_json_field function. It converts ZetaSQL JSON field access operations into SQLite-compatible function calls.
The function creates a call to zetasqlite_get_json_field(json_expr, field_name).
Returns a FunctionCall expression for the JSON field access.
func (*SQLBuilderVisitor) VisitGetStructFieldNode ¶
func (v *SQLBuilderVisitor) VisitGetStructFieldNode(node *ast.GetStructFieldNode) (SQLFragment, error)
VisitGetStructFieldNode extracts fields from STRUCT objects using the zetasqlite_get_struct_field function. It converts ZetaSQL STRUCT field access operations into SQLite-compatible function calls.
The function creates a call to zetasqlite_get_struct_field(struct_expr, field_index). The field index is used instead of field name for efficient access.
Returns a FunctionCall expression for the STRUCT field access.
func (*SQLBuilderVisitor) VisitInsertRowNode ¶
func (v *SQLBuilderVisitor) VisitInsertRowNode(node *ast.InsertRowNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitInsertStatement ¶
func (v *SQLBuilderVisitor) VisitInsertStatement(node *ast.InsertStmtNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitJoinScan ¶
func (v *SQLBuilderVisitor) VisitJoinScan(node *ast.JoinScanNode) (SQLFragment, error)
VisitJoinScan converts ZetaSQL JOIN operations into SQLite JOIN syntax. It handles all join types (INNER, LEFT, RIGHT, FULL, CROSS) and properly manages column scoping and qualification.
The function: 1. Visits left and right input scans 2. Processes the join condition after both sides are available 3. Creates a SelectStatement with a JOIN clause 4. Builds the output column list with proper column references 5. Makes joined columns available for parent scopes
Returns a SelectStatement fragment with the JOIN operation.
func (*SQLBuilderVisitor) VisitLimitOffsetScanNode ¶
func (v *SQLBuilderVisitor) VisitLimitOffsetScanNode(node *ast.LimitOffsetScanNode) (SQLFragment, error)
VisitLimitOffsetScanNode handles ZetaSQL LIMIT and OFFSET operations for pagination. It creates a SELECT statement with LIMIT and/or OFFSET clauses.
The function: 1. Visits the input scan to get the base data source 2. Builds the SELECT list with proper column references 3. Adds LIMIT clause if specified 4. Adds OFFSET clause if specified
Both LIMIT and OFFSET are optional and converted from expressions.
Returns a SelectStatement with LIMIT/OFFSET clauses.
func (*SQLBuilderVisitor) VisitLiteralNode ¶
func (v *SQLBuilderVisitor) VisitLiteralNode(node *ast.LiteralNode) (SQLFragment, error)
VisitLiteralNode converts ZetaSQL literal values into SQLite-compatible literal expressions. It handles type conversion and proper escaping of literal values.
Returns a LiteralExpression fragment containing the converted value.
func (*SQLBuilderVisitor) VisitMakeStructNode ¶
func (v *SQLBuilderVisitor) VisitMakeStructNode(node *ast.MakeStructNode) (SQLFragment, error)
VisitMakeStructNode creates STRUCT expressions using the zetasqlite_make_struct function. It converts ZetaSQL STRUCT constructors into function calls with alternating field names and values.
The function: 1. Extracts field names and types from the struct definition 2. Visits each field expression to get its SQL representation 3. Creates alternating name/value argument pairs 4. Calls zetasqlite_make_struct(name1, value1, name2, value2, ...)
Returns a FunctionCall expression for the struct constructor.
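A minimal sketch of the alternating name/value argument layout, using literal expressions; the field names and values are hypothetical.

    // makeStructSketch builds roughly STRUCT(1 AS id, 'bob' AS name).
    func makeStructSketch() *SQLExpression {
        return NewFunctionExpression("zetasqlite_make_struct",
            NewLiteralExpression("'id'"), NewLiteralExpression("1"),
            NewLiteralExpression("'name'"), NewLiteralExpression("'bob'"),
        )
    }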
func (*SQLBuilderVisitor) VisitMergeStatement ¶
func (v *SQLBuilderVisitor) VisitMergeStatement(node *ast.MergeStmtNode) ([]*SQLExpression, error)
func (*SQLBuilderVisitor) VisitOrderByItemNode ¶
func (v *SQLBuilderVisitor) VisitOrderByItemNode(node *ast.OrderByItemNode) ([]*OrderByItem, error)
VisitOrderByItemNode converts ZetaSQL ORDER BY items into SQLite ORDER BY clauses. It handles null ordering behavior by generating additional ORDER BY items when needed.
The function: 1. Gets the column expression and applies zetasqlite_collate collation 2. Handles NULL ordering (NULLS FIRST/LAST) by creating additional ORDER BY items 3. Sets the sort direction (ASC/DESC) based on the node's IsDescending flag
Returns a slice of OrderByItem objects representing the complete ordering specification.
func (*SQLBuilderVisitor) VisitOrderByScanNode ¶
func (v *SQLBuilderVisitor) VisitOrderByScanNode(node *ast.OrderByScanNode) (SQLFragment, error)
VisitOrderByScanNode handles ZetaSQL ordering operations by adding ORDER BY clauses. It processes ORDER BY items which may include NULL ordering specifications.
The function: 1. Visits the input scan to get the base data source 2. Processes each ORDER BY item, handling NULL ordering requirements 3. Creates a SELECT * statement with the ORDER BY clause applied
Note that each OrderByItemNode may generate multiple OrderByItems to handle ZetaSQL's NULLS FIRST/LAST semantics in SQLite.
Returns a SelectStatement with the ORDER BY clause.
func (*SQLBuilderVisitor) VisitOutputColumnNode ¶
func (v *SQLBuilderVisitor) VisitOutputColumnNode(node *ast.OutputColumnNode) (SQLFragment, error)
VisitOutputColumnNode handles output column references by delegating to the fragment context. Output columns represent the final columns in a query's result set.
Returns the column expression from the fragment context.
func (*SQLBuilderVisitor) VisitParameterNode ¶
func (v *SQLBuilderVisitor) VisitParameterNode(node *ast.ParameterNode) (SQLFragment, error)
VisitParameterNode converts ZetaSQL parameter references into SQLite parameter syntax. It handles both named and positional parameters used in prepared statements.
Parameter formats: - Named parameters: "@parameter_name" - Positional parameters: "?"
Returns a LiteralExpression containing the parameter placeholder.
func (*SQLBuilderVisitor) VisitProjectScan ¶
func (v *SQLBuilderVisitor) VisitProjectScan(node *ast.ProjectScanNode) (SQLFragment, error)
VisitProjectScan handles ZetaSQL projection operations by converting them into SQLite SELECT statements. A ProjectScan represents a SELECT with computed columns.
The function: 1. Visits the input scan to get the base FROM clause 2. Processes computed expressions and makes them available in context 3. Builds a SELECT statement with the projected column list 4. Assigns column aliases using the format "col{ColumnID}"
Returns a SelectStatement fragment representing the projection.
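A minimal sketch of a projection using the "col{ColumnID}" alias convention; the column IDs, table alias, and computed expression are hypothetical.

    // projectSketch builds a SELECT list with one passthrough and one computed column.
    func projectSketch() *SelectStatement {
        stmt := NewSelectStatement()
        stmt.SelectList = []*SelectListItem{
            {Expression: NewColumnExpression("name", "t1"), Alias: "col1"},
            {Expression: NewBinaryExpression(
                NewColumnExpression("price", "t1"), "+", NewLiteralExpression("1")), Alias: "col2"},
        }
        return stmt
    }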
func (*SQLBuilderVisitor) VisitQuery ¶
func (v *SQLBuilderVisitor) VisitQuery(node *ast.QueryStmtNode) (SQLFragment, error)
VisitQuery formats the outermost query statement that runs and produces rows of output, such as a SELECT. The node's `OutputColumnList()` gives the user-visible column names that should be returned. There may be duplicate names, and multiple output columns may reference the same column from `Query()`. https://github.com/google/zetasql/blob/master/docs/resolved_ast.md#ResolvedQueryStmt
func (*SQLBuilderVisitor) VisitSQL ¶
func (v *SQLBuilderVisitor) VisitSQL(node *ast.CreateTableStmtNode) (SQLFragment, error)
VisitSQL processes CREATE TABLE statements for CreateTableStmtNode.
func (*SQLBuilderVisitor) VisitScan ¶
func (v *SQLBuilderVisitor) VisitScan(scan ast.Node) (*FromItem, error)
VisitScan is the central dispatcher for all scan node types in the ZetaSQL AST. It implements the bottom-up traversal pattern where child scans are processed first, then parent scans build upon their results.
The function: 1. Pushes a new scope for this scan operation 2. Dispatches to the appropriate scan-specific visitor 3. Handles column list exposure for scans that implement ColumnListProvider 4. Finalizes the scope and converts the result to a FromItem
This ensures proper column scoping and availability management throughout the scan tree traversal, following the go-zetasql architectural pattern.
Returns a FromItem suitable for use in FROM clauses of parent scans.
func (*SQLBuilderVisitor) VisitSetOperationItemNode ¶
func (v *SQLBuilderVisitor) VisitSetOperationItemNode(node *ast.SetOperationItemNode) (*FromItem, error)
VisitSetOperationItemNode processes individual items in set operations (UNION, INTERSECT, EXCEPT). Each item represents a subquery that contributes to the set operation.
Returns the FromItem representing the subquery scan.
func (*SQLBuilderVisitor) VisitSetOperationScanNode ¶
func (v *SQLBuilderVisitor) VisitSetOperationScanNode(node *ast.SetOperationScanNode) (SQLFragment, error)
VisitSetOperationScanNode handles ZetaSQL set operations (UNION, INTERSECT, EXCEPT) and converts them to SQLite-compatible syntax.
The function: 1. Maps ZetaSQL set operation types to SQLite equivalents 2. Processes all input items (subqueries) in the set operation 3. Creates a SetOperation fragment with proper type and modifier 4. Handles WITH clause propagation from subqueries to top level 5. Creates a wrapper SELECT that exposes the unified column set
Set operations combine multiple queries with the same column structure.
Returns a SelectStatement containing the set operation.
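One plausible assembly of the result, using the SetOperation and SelectStatement fields documented below; the exact way the visitor attaches the wrapper SELECT is simplified here.

    // unionSketch combines two already-built SELECT statements as UNION ALL.
    func unionSketch(left, right *SelectStatement) *SelectStatement {
        wrapper := NewSelectStatement()
        wrapper.SelectList = []*SelectListItem{{Expression: NewStarExpression()}}
        wrapper.SetOperation = &SetOperation{
            Type:     "UNION",
            Modifier: "ALL",
            Items:    []*SelectStatement{left, right},
        }
        return wrapper
    }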
func (*SQLBuilderVisitor) VisitSingleRowScanNode ¶
func (v *SQLBuilderVisitor) VisitSingleRowScanNode(node *ast.SingleRowScanNode) (SQLFragment, error)
VisitSingleRowScanNode handles single-row table scans. They are effectively no-ops but are needed as inputs for other scans.
func (*SQLBuilderVisitor) VisitSubqueryExpressionNode ¶
func (v *SQLBuilderVisitor) VisitSubqueryExpressionNode(node *ast.SubqueryExprNode) (SQLFragment, error)
VisitSubqueryExpressionNode handles different types of subquery expressions in ZetaSQL. It converts subqueries based on their type and context within the larger query.
Supported subquery types: - Scalar: Returns a single value from the subquery - Array: Wraps the result using zetasqlite_array function to create an array - Exists: Creates an EXISTS(subquery) expression - In: Creates an "expr IN (subquery)" expression - LikeAny/LikeAll: Not fully implemented
Returns a SQLFragment representing the subquery expression based on its type.
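For the Exists case, the wrapping is a direct use of the constructor documented below; the subquery statement is assumed to have been produced by visiting the subquery's scan.

    // subqueryExistsSketch wraps a subquery SELECT in an EXISTS(...) expression.
    func subqueryExistsSketch(sub *SelectStatement) *SQLExpression {
        return NewExistsExpression(sub)
    }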
func (*SQLBuilderVisitor) VisitTableScan ¶
func (v *SQLBuilderVisitor) VisitTableScan(node *ast.TableScanNode, fromOnly bool) (SQLFragment, error)
VisitTableScan converts a ZetaSQL table scan node into a SQLite FROM clause. It generates unique table aliases to prevent column name conflicts and creates column metadata for tracking output columns.
The function: - Generates a unique table alias using the alias generator - Creates ColumnInfo entries for all columns in the table - Stores fragment metadata for later reference - Makes columns available in the current scope
Returns a TableFromItem fragment representing the table reference.
func (*SQLBuilderVisitor) VisitTruncateStmt ¶
func (v *SQLBuilderVisitor) VisitTruncateStmt(node *ast.TruncateStmtNode) (SQLFragment, error)
VisitTruncateStmt processes TRUNCATE statements for TruncateStmtNode.
func (*SQLBuilderVisitor) VisitUpdateItem ¶
func (v *SQLBuilderVisitor) VisitUpdateItem(node *ast.UpdateItemNode) (*SetItem, error)
func (*SQLBuilderVisitor) VisitUpdateStatement ¶
func (v *SQLBuilderVisitor) VisitUpdateStatement(node *ast.UpdateStmtNode) (SQLFragment, error)
func (*SQLBuilderVisitor) VisitWithEntryNode ¶
func (v *SQLBuilderVisitor) VisitWithEntryNode(node *ast.WithEntryNode) (SQLFragment, error)
VisitWithEntryNode processes individual entries in WITH clauses (Common Table Expressions). It creates a named subquery that can be referenced by other parts of the query.
The function: 1. Visits the subquery to get its SQL representation 2. Registers the WITH entry's column mappings in the fragment context 3. Creates a WithClause fragment with the query name and a SELECT * wrapper
This enables proper column resolution when the WITH entry is later referenced.
Returns a WithClause fragment representing the CTE definition.
func (*SQLBuilderVisitor) VisitWithRefScanNode ¶
func (v *SQLBuilderVisitor) VisitWithRefScanNode(node *ast.WithRefScanNode) (SQLFragment, error)
VisitWithRefScanNode handles references to previously defined WITH clauses (CTEs). It creates a SELECT statement that references the WITH clause by name and maps its columns to the expected output format.
The function: 1. Creates a SELECT statement with the WITH query name as the table 2. Uses stored column mappings to properly reference CTE columns 3. Assigns output column aliases matching the expected format
This enables queries to reference CTEs defined earlier in the WITH clause.
Returns a SelectStatement that references the WITH clause.
func (*SQLBuilderVisitor) VisitWithScanNode ¶
func (v *SQLBuilderVisitor) VisitWithScanNode(node *ast.WithScanNode) (SQLFragment, error)
VisitWithScanNode handles complete WITH statements that define multiple CTEs and execute a main query that can reference those CTEs.
The function: 1. Processes all WITH entries to create CTE definitions 2. Visits the main query that uses those CTEs 3. Combines them into a SELECT statement with WITH clauses
This implements ZetaSQL's WITH clause semantics in SQLite-compatible syntax.
Returns a SelectStatement with the complete WITH clause structure.
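A minimal sketch of attaching a CTE definition to the main statement via the WithClauses field documented below; the CTE name is hypothetical.

    // withSketch registers one CTE on the statement that references it.
    func withSketch(cteQuery, main *SelectStatement) *SelectStatement {
        main.WithClauses = append(main.WithClauses, &WithClause{
            Name:  "q1", // hypothetical CTE name
            Query: cteQuery,
        })
        return main
    }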
type SQLExpression ¶
type SQLExpression struct {
    Type ExpressionType
    Value string
    BinaryExpression *BinaryExpression
    FunctionCall *FunctionCall
    Subquery *SelectStatement
    CaseExpression *CaseExpression
    ExistsExpr *ExistsExpression
    Alias string
    TableAlias string
    Collation string
}
SQLExpression represents any SQL expression
func NewBinaryExpression ¶
func NewBinaryExpression(left *SQLExpression, operator string, right *SQLExpression) *SQLExpression
NewBinaryExpression creates a new binary expression
func NewCaseExpression ¶
func NewCaseExpression(whenClauses []*WhenClause, elseExpr *SQLExpression) *SQLExpression
NewCaseExpression creates a new CASE expression (searched case)
func NewColumnExpression ¶
func NewColumnExpression(column string, tableAlias ...string) *SQLExpression
NewColumnExpression creates a new column reference expression
func NewExistsExpression ¶
func NewExistsExpression(subquery *SelectStatement) *SQLExpression
NewExistsExpression creates a new EXISTS expression
func NewFunctionExpression ¶
func NewFunctionExpression(name string, args ...*SQLExpression) *SQLExpression
NewFunctionExpression creates a new function call expression
func NewLiteralExpression ¶
func NewLiteralExpression(value string) *SQLExpression
NewLiteralExpression creates a new literal expression
func NewLiteralExpressionFromGoValue ¶
func NewLiteralExpressionFromGoValue(t types.Type, value interface{}) (*SQLExpression, error)
func NewSimpleCaseExpression ¶
func NewSimpleCaseExpression(caseExpr *SQLExpression, whenClauses []*WhenClause, elseExpr *SQLExpression) *SQLExpression
NewSimpleCaseExpression creates a new CASE expression with a case expression (simple case)
func NewStarExpression ¶
func NewStarExpression(tableAlias ...string) *SQLExpression
NewStarExpression creates a new star (*) expression for SELECT *
func NewUniqueColumnExpression ¶
func NewUniqueColumnExpression(column *ast.Column, tableAlias ...string) *SQLExpression
NewUniqueColumnExpression creates a new unique column reference expression
func (*SQLExpression) String ¶
func (e *SQLExpression) String() string
func (*SQLExpression) WriteSql ¶
func (e *SQLExpression) WriteSql(writer *SQLWriter) error
type SQLFragment ¶
SQLFragment represents any component that can generate SQL
type SQLWriter ¶
type SQLWriter struct {
// contains filtered or unexported fields
}
SQLWriter handles SQL string generation with proper formatting
func NewSQLWriter ¶
func NewSQLWriter() *SQLWriter
func (*SQLWriter) WriteDebug ¶
WriteDebug writes debug information with tree structure formatting
func (*SQLWriter) WriteDebugLine ¶
WriteDebugLine writes debug information with tree structure formatting and newline
type SQLiteFunction ¶
type SQLiteFunction func(...interface{}) (interface{}, error)
type STDDEV ¶
type STDDEV = STDDEV_SAMP
type STDDEV_POP ¶
type STDDEV_POP struct {
// contains filtered or unexported fields
}
func (*STDDEV_POP) Done ¶
func (f *STDDEV_POP) Done() (Value, error)
func (*STDDEV_POP) Step ¶
func (f *STDDEV_POP) Step(v Value, opt *AggregatorOption) error
type STDDEV_SAMP ¶
type STDDEV_SAMP struct {
// contains filtered or unexported fields
}
func (*STDDEV_SAMP) Done ¶
func (f *STDDEV_SAMP) Done() (Value, error)
func (*STDDEV_SAMP) Step ¶
func (f *STDDEV_SAMP) Step(v Value, opt *AggregatorOption) error
type STRING_AGG ¶
type STRING_AGG struct {
// contains filtered or unexported fields
}
func (*STRING_AGG) Done ¶
func (f *STRING_AGG) Done() (Value, error)
func (*STRING_AGG) Step ¶
func (f *STRING_AGG) Step(v Value, delim string, opt *AggregatorOption) error
type SafeValue ¶
type SafeValue struct {
// contains filtered or unexported fields
}
func (*SafeValue) ToArray ¶
func (v *SafeValue) ToArray() (*ArrayValue, error)
func (*SafeValue) ToStruct ¶
func (v *SafeValue) ToStruct() (*StructValue, error)
type ScanData ¶
type ScanData struct {
    Type ScanType `json:"type,omitempty"`
    ColumnList []*ColumnData `json:"column_list,omitempty"` // Output columns from this scan
    TableScan *TableScanData `json:"table_scan,omitempty"`
    JoinScan *JoinScanData `json:"join_scan,omitempty"`
    FilterScan *FilterScanData `json:"filter_scan,omitempty"`
    ProjectScan *ProjectScanData `json:"project_scan,omitempty"`
    AggregateScan *AggregateScanData `json:"aggregate_scan,omitempty"`
    OrderByScan *OrderByScanData `json:"order_by_scan,omitempty"`
    LimitScan *LimitScanData `json:"limit_scan,omitempty"`
    SetOperationScan *SetOperationData `json:"set_operation_scan,omitempty"`
    WithScan *WithScanData `json:"with_scan,omitempty"`
    WithRefScan *WithRefScanData `json:"with_ref_scan,omitempty"`
    WithEntryScan *WithEntryData `json:"with_entry_scan,omitempty"`
    ArrayScan *ArrayScanData `json:"array_scan,omitempty"`
    AnalyticScan *AnalyticScanData `json:"analytic_scan,omitempty"`
}
ScanData represents scan operation data
func (*ScanData) FindColumnByID ¶
func (s *ScanData) FindColumnByID(id int) *ColumnData
type ScanTransformer ¶
type ScanTransformer interface { Transformer[ScanData, *FromItem] }
ScanTransformer handles scan node transformations
type Scope ¶
type Scope struct { ID int Variables map[string]*SQLExpression Parent *Scope }
Scope represents a context scope
type ScopeBehavior ¶
type ScopeBehavior int
ScopeBehavior represents the different ways scan nodes handle column scopes
const (
    ScopeOpener ScopeBehavior = iota // Creates/produces new columns
    ScopeFilter // Removes/transforms available columns
    ScopePassthrough // Preserves input columns exactly
    ScopeMerger // Combines columns from multiple sources
    ScopeTransformer // Special column handling (CTEs, subqueries, etc.)
    ScopeOther // Unique/complex behavior
)
func GetScopeBehavior ¶
func GetScopeBehavior(nodeKind ast.Kind) (ScopeBehavior, bool)
GetScopeBehavior returns the scope behavior for a given node kind. It returns ScopeOther and false if the node kind is not found.
func (ScopeBehavior) String ¶
func (sb ScopeBehavior) String() string
String returns the string representation of ScopeBehavior
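A small usage sketch; the node kind is assumed to come from a resolved AST node encountered during scan traversal.

    // describeScope reports how a scan node kind affects column scope.
    func describeScope(kind ast.Kind) string {
        behavior, known := GetScopeBehavior(kind)
        if !known {
            // Unknown kinds fall back to ScopeOther.
            return "unknown kind, treated as " + behavior.String()
        }
        return behavior.String()
    }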
type ScopeInfo ¶
type ScopeInfo struct {
ResolvedColumns map[string]*ColumnInfo
}
type ScopeManager ¶
type ScopeManager struct {
// contains filtered or unexported fields
}
ScopeManager manages nested scopes
func NewScopeManager ¶
func NewScopeManager() *ScopeManager
NewScopeManager creates a new scope manager
func (*ScopeManager) CurrentScope ¶
func (sm *ScopeManager) CurrentScope() *Scope
CurrentScope returns the current scope
func (*ScopeManager) EnterScope ¶
func (sm *ScopeManager) EnterScope() *Scope
EnterScope enters a new scope
func (*ScopeManager) ExitScope ¶
func (sm *ScopeManager) ExitScope()
ExitScope exits the current scope
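A sketch of the enter/exit discipline the scan traversal relies on; the defensive map initialization is an assumption, since the scope's construction details are not documented here.

    // scopeSketch pushes a scope, exposes a column in it, and pops it when done.
    func scopeSketch() {
        sm := NewScopeManager()
        scope := sm.EnterScope()
        defer sm.ExitScope()
        if scope.Variables == nil {
            scope.Variables = map[string]*SQLExpression{}
        }
        scope.Variables["col1"] = NewColumnExpression("name", "t1") // hypothetical column
        // ... visit child scans here; they may enter and exit nested scopes ...
    }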
type SelectData ¶
type SelectData struct {
    SelectList []*SelectItemData `json:"select_list,omitempty"`
    FromClause *ScanData `json:"from_clause,omitempty"`
    WhereClause *ExpressionData `json:"where_clause,omitempty"`
    GroupBy []*ExpressionData `json:"group_by,omitempty"`
    Having *ExpressionData `json:"having,omitempty"`
    OrderBy []*OrderByItemData `json:"order_by,omitempty"`
    Limit *LimitData `json:"limit,omitempty"`
    SetOperation *SetOperationData `json:"set_operation,omitempty"`
}
SelectData represents SELECT statement data
type SelectItemData ¶
type SelectItemData struct { Expression ExpressionData `json:"expression,omitempty"` Alias string `json:"alias,omitempty"` }
SelectItemData represents a SELECT list item
type SelectListItem ¶
type SelectListItem struct {
    Expression *SQLExpression
    Alias string
    IsStarExpansion bool
    ExceptColumns []string // For SELECT * EXCEPT
    ReplaceColumns map[string]*SQLExpression // For SELECT * REPLACE
}
SelectListItem represents an item in the SELECT clause
func (*SelectListItem) String ¶
func (s *SelectListItem) String() string
func (*SelectListItem) WriteSql ¶
func (s *SelectListItem) WriteSql(writer *SQLWriter) error
type SelectStatement ¶
type SelectStatement struct {
    // WITH clause
    WithClauses []*WithClause

    // SELECT clause
    SelectType SelectType
    SelectList []*SelectListItem
    AsStructType string
    AsValueType string

    // FROM clause
    FromClause *FromItem

    // WHERE clause
    WhereClause *SQLExpression

    // GROUP BY clause
    GroupByList []*SQLExpression

    // HAVING clause
    HavingClause *SQLExpression

    // ORDER BY clause
    OrderByList []*OrderByItem

    // LIMIT OFFSET clause
    LimitClause *LimitClause

    // Set operations
    SetOperation *SetOperation

    // Hints
    Hints []string
}
SelectStatement represents the main SELECT statement structure
func NewSelectStarStatement ¶
func NewSelectStarStatement(from *FromItem) *SelectStatement
func NewSelectStatement ¶
func NewSelectStatement() *SelectStatement
NewSelectStatement creates a new SELECT statement
func (*SelectStatement) String ¶
func (s *SelectStatement) String() string
func (*SelectStatement) WriteSql ¶
func (s *SelectStatement) WriteSql(writer *SQLWriter) error
type SelectType ¶
type SelectType int
SelectType represents different SELECT variants
const (
    SelectTypeStandard SelectType = iota
    SelectTypeDistinct
    SelectTypeAll
    SelectTypeAsStruct
    SelectTypeAsValue
)
type SetItem ¶
type SetItem struct { Column *SQLExpression Value *SQLExpression }
type SetItemData ¶
type SetItemData struct { Column string `json:"column,omitempty"` Value ExpressionData `json:"value,omitempty"` }
SetItemData represents SET item in UPDATE
type SetOperation ¶
type SetOperation struct {
    Type string // UNION, INTERSECT, EXCEPT
    Modifier string // ALL, DISTINCT
    Items []*SelectStatement
}
SetOperation represents UNION, INTERSECT, EXCEPT operations
func (*SetOperation) String ¶
func (s *SetOperation) String() string
func (*SetOperation) WriteSql ¶
func (s *SetOperation) WriteSql(writer *SQLWriter) error
type SetOperationData ¶
type SetOperationData struct {
    Type string `json:"type,omitempty"` // UNION, INTERSECT, EXCEPT
    Modifier string `json:"modifier,omitempty"` // ALL, DISTINCT
    Items []StatementData `json:"items,omitempty"` // List of statements to combine
}
SetOperationData represents set operation data
type SetOperationScanTransformer ¶
type SetOperationScanTransformer struct {
// contains filtered or unexported fields
}
SetOperationScanTransformer handles set operation transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, set operations combine multiple SELECT statements using UNION, INTERSECT, or EXCEPT operators. These operations can have ALL or DISTINCT modifiers and require compatible column schemas across all operands.
The transformer converts ZetaSQL SetOperationScan nodes by: - Recursively transforming each operand statement through the coordinator - Creating a SetOperation structure with proper type and modifier - Moving WITH clauses from operands to the top level for proper scoping - Wrapping the result in a subquery to establish new column mappings - Ensuring column compatibility and proper aliasing across operands
Set operations follow SQL's standard precedence and evaluation rules, with UNION having the lowest precedence and operations being left-associative.
func NewSetOperationScanTransformer ¶
func NewSetOperationScanTransformer(coordinator Coordinator) *SetOperationScanTransformer
NewSetOperationScanTransformer creates a new set operation scan transformer
func (*SetOperationScanTransformer) Transform ¶
func (t *SetOperationScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts SetOperationData to FromItem with set operation structure
type SingleRowScanTransformer ¶
type SingleRowScanTransformer struct {
// contains filtered or unexported fields
}
SingleRowScanTransformer handles single row scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a SingleRowScan represents queries that produce exactly one row without reading from any table - typically SELECT statements with only literal values or expressions that don't reference table columns (e.g., "SELECT 1, 'hello'").
This corresponds to SQL's capability to SELECT constant expressions without a FROM clause. The transformer converts ZetaSQL SingleRowScan nodes by: - Creating a FromItemTypeSingleRow to indicate no table source is needed - Allowing the query to proceed without a FROM clause - Preserving expression evaluation in the SELECT list
This is used for queries like "SELECT CURRENT_DATE()" or "SELECT 1 + 2" where no table data is required, only expression evaluation.
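A minimal sketch of the no-FROM shape such a query reduces to, built with the statement constructors documented elsewhere in this package.

    // singleRowSketch models SELECT 1 + 2: a SELECT list only, no FROM clause.
    func singleRowSketch() *SelectStatement {
        stmt := NewSelectStatement()
        stmt.SelectList = []*SelectListItem{
            {Expression: NewBinaryExpression(
                NewLiteralExpression("1"), "+", NewLiteralExpression("2"))},
        }
        // FromClause stays nil: the single-row scan contributes no table source.
        return stmt
    }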
func NewSingleRowScanTransformer ¶
func NewSingleRowScanTransformer(coord Coordinator) *SingleRowScanTransformer
NewSingleRowScanTransformer creates a new single row scan transformer
func (*SingleRowScanTransformer) Transform ¶
func (t *SingleRowScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts SingleRowScan to nil (no-op)
type StatementData ¶
type StatementData struct {
    Type StatementType `json:"type,omitempty"`
    Select *SelectData `json:"select,omitempty"`
    Insert *InsertData `json:"insert,omitempty"`
    Update *UpdateData `json:"update,omitempty"`
    Delete *DeleteData `json:"delete,omitempty"`
    Create *CreateData `json:"create,omitempty"`
    Drop *DropData `json:"drop,omitempty"`
    Merge *MergeData `json:"merge,omitempty"`
}
StatementData represents statement-level data
type StatementTransformer ¶
type StatementTransformer interface { Transformer[StatementData, SQLFragment] }
StatementTransformer handles statement-level transformations
func NewCreateFunctionTransformer ¶
func NewCreateFunctionTransformer(coord Coordinator) StatementTransformer
func NewCreateTableTransformer ¶
func NewCreateTableTransformer(coord Coordinator) StatementTransformer
Statement transformer placeholders
func NewDeleteTransformer ¶
func NewDeleteTransformer(coord Coordinator) StatementTransformer
func NewInsertTransformer ¶
func NewInsertTransformer(coord Coordinator) StatementTransformer
func NewTruncateTransformer ¶
func NewTruncateTransformer() StatementTransformer
func NewUpdateTransformer ¶
func NewUpdateTransformer(coord Coordinator) StatementTransformer
type StatementType ¶
type StatementType int
StatementType identifies the type of statement
const (
    StatementTypeSelect StatementType = iota
    StatementTypeInsert
    StatementTypeUpdate
    StatementTypeDelete
    StatementTypeCreate
    StatementTypeDrop
    StatementTypeMerge
)
type StmtAction ¶
type StmtActionFunc ¶
type StmtActionFunc func() (StmtAction, error)
type StringValue ¶
type StringValue string
func (StringValue) Format ¶
func (sv StringValue) Format(verb rune) string
func (StringValue) Interface ¶
func (sv StringValue) Interface() interface{}
func (StringValue) ToArray ¶
func (sv StringValue) ToArray() (*ArrayValue, error)
func (StringValue) ToBool ¶
func (sv StringValue) ToBool() (bool, error)
func (StringValue) ToBytes ¶
func (sv StringValue) ToBytes() ([]byte, error)
func (StringValue) ToFloat64 ¶
func (sv StringValue) ToFloat64() (float64, error)
func (StringValue) ToInt64 ¶
func (sv StringValue) ToInt64() (int64, error)
func (StringValue) ToJSON ¶
func (sv StringValue) ToJSON() (string, error)
func (StringValue) ToString ¶
func (sv StringValue) ToString() (string, error)
func (StringValue) ToStruct ¶
func (sv StringValue) ToStruct() (*StructValue, error)
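A small usage sketch of the conversion methods; whether the text parses successfully depends on the value, so the error must be checked.

    // stringValueSketch converts a textual value to an int64.
    func stringValueSketch() (int64, error) {
        sv := StringValue("42")
        return sv.ToInt64() // expected to parse the numeric text
    }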
type StructValue ¶
type StructValue struct {
// contains filtered or unexported fields
}
func (*StructValue) Format ¶
func (sv *StructValue) Format(verb rune) string
func (*StructValue) Interface ¶
func (sv *StructValue) Interface() interface{}
func (*StructValue) ToArray ¶
func (sv *StructValue) ToArray() (*ArrayValue, error)
func (*StructValue) ToBool ¶
func (sv *StructValue) ToBool() (bool, error)
func (*StructValue) ToBytes ¶
func (sv *StructValue) ToBytes() ([]byte, error)
func (*StructValue) ToFloat64 ¶
func (sv *StructValue) ToFloat64() (float64, error)
func (*StructValue) ToInt64 ¶
func (sv *StructValue) ToInt64() (int64, error)
func (*StructValue) ToJSON ¶
func (sv *StructValue) ToJSON() (string, error)
func (*StructValue) ToString ¶
func (sv *StructValue) ToString() (string, error)
func (*StructValue) ToStruct ¶
func (sv *StructValue) ToStruct() (*StructValue, error)
type StructValueLayout ¶
type StructValueLayout struct { Keys []string `json:"keys"` Values []interface{} `json:"values"` }
type SubqueryData ¶
type SubqueryData struct { Query ScanData `json:"query,omitempty"` SubqueryType ast.SubqueryType `json:"subquery_type,omitempty"` InExpr *ExpressionData `json:"in_expr,omitempty"` }
SubqueryData represents subquery expression data
type SubqueryTransformer ¶
type SubqueryTransformer struct {
// contains filtered or unexported fields
}
SubqueryTransformer handles transformation of subquery expressions from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, subqueries can appear in various expression contexts with different semantics: scalar subqueries (single value), array subqueries, EXISTS checks, and IN checks. Each type has specific behavior and return value expectations.
The transformer converts ZetaSQL subquery expressions by: - Recursively transforming the subquery's scan structure - Wrapping the result in appropriate SQL constructs based on subquery type:
- Scalar: Returns single value, wrapped in parentheses
- Array: Wrapped with zetasqlite_array() for proper array semantics
- EXISTS: Wrapped in EXISTS(...) boolean expression
- IN: Combined with IN expression for membership testing
Subqueries preserve their own column scoping and fragment context while being embedded as expressions in the parent query.
func NewSubqueryTransformer ¶
func NewSubqueryTransformer(coordinator Coordinator) *SubqueryTransformer
NewSubqueryTransformer creates a new subquery transformer
func (*SubqueryTransformer) Transform ¶
func (t *SubqueryTransformer) Transform(data ExpressionData, ctx TransformContext) (*SQLExpression, error)
Transform converts SubqueryData to SQLExpression
type TableFunction ¶
type TableFunction struct { Name string Arguments []*SQLExpression }
TableFunction represents table-valued functions
func (*TableFunction) WriteSql ¶
func (t *TableFunction) WriteSql(writer *SQLWriter) error
type TableReference ¶
TableReference represents a table reference in SQL
type TableScanData ¶
type TableScanData struct { TableName string `json:"table_name,omitempty"` Alias string `json:"alias,omitempty"` Columns []*ColumnData `json:"columns,omitempty"` SyntheticColumns []*SelectItemData `json:"synthetic_columns,omitempty"` }
TableScanData represents table scan data
type TableScanTransformer ¶
type TableScanTransformer struct {
// contains filtered or unexported fields
}
TableScanTransformer handles table scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a TableScan represents the foundational scan operation that reads directly from a table. This is the base case in the recursive scan transformation tree - it has no input scans and corresponds to a table reference in the FROM clause.
The transformer converts ZetaSQL TableScan nodes into SQLite table references with: - Direct table name mapping - Optional table aliasing for disambiguation - Proper FROM clause item generation
This is the simplest transformer as it performs direct mapping without complex logic, but it's crucial as the leaf node in the scan transformation tree.
func NewTableScanTransformer ¶
func NewTableScanTransformer(coordinator Coordinator) *TableScanTransformer
NewTableScanTransformer creates a new table scan transformer
func (*TableScanTransformer) Transform ¶
func (t *TableScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts TableScanData to FromItem
type TableSpec ¶
type TableSpec struct {
    IsTemp bool `json:"isTemp"`
    IsView bool `json:"isView"`
    NamePath []string `json:"namePath"`
    Columns []*ColumnSpec `json:"columns"`
    PrimaryKey []string `json:"primaryKey"`
    CreateMode ast.CreateMode `json:"createMode"`
    Query string `json:"query"`
    UpdatedAt time.Time `json:"updatedAt"`
    CreatedAt time.Time `json:"createdAt"`
}
func (*TableSpec) Column ¶
func (s *TableSpec) Column(name string) *ColumnSpec
func (*TableSpec) SQLiteSchema ¶
type TimeFormatType ¶
type TimeFormatType int
const (
    FormatTypeDate TimeFormatType = 0
    FormatTypeDatetime TimeFormatType = 1
    FormatTypeTime TimeFormatType = 2
    FormatTypeTimestamp TimeFormatType = 3
)
func (TimeFormatType) String ¶
func (t TimeFormatType) String() string
type TimeParserPostProcessor ¶
type TimeValue ¶
func (TimeValue) ToArray ¶
func (t TimeValue) ToArray() (*ArrayValue, error)
func (TimeValue) ToStruct ¶
func (t TimeValue) ToStruct() (*StructValue, error)
type TimestampValue ¶
func (TimestampValue) AddValueWithPart ¶
func (TimestampValue) Format ¶
func (d TimestampValue) Format(verb rune) string
func (TimestampValue) Interface ¶
func (d TimestampValue) Interface() interface{}
func (TimestampValue) ToArray ¶
func (t TimestampValue) ToArray() (*ArrayValue, error)
func (TimestampValue) ToBool ¶
func (t TimestampValue) ToBool() (bool, error)
func (TimestampValue) ToBytes ¶
func (t TimestampValue) ToBytes() ([]byte, error)
func (TimestampValue) ToFloat64 ¶
func (t TimestampValue) ToFloat64() (float64, error)
func (TimestampValue) ToInt64 ¶
func (t TimestampValue) ToInt64() (int64, error)
func (TimestampValue) ToJSON ¶
func (t TimestampValue) ToJSON() (string, error)
func (TimestampValue) ToString ¶
func (t TimestampValue) ToString() (string, error)
func (TimestampValue) ToStruct ¶
func (t TimestampValue) ToStruct() (*StructValue, error)
type TransformConfig ¶
type TransformConfig struct {
Debug bool
}
TransformConfig provides configuration for transformations
func DefaultTransformConfig ¶
func DefaultTransformConfig(debug bool) *TransformConfig
DefaultTransformConfig returns a default configuration
type TransformContext ¶
type TransformContext interface {
    // Context returns the underlying Go context
    Context() context.Context

    // FragmentContext provides column resolution and scoping
    FragmentContext() FragmentContextProvider

    // Config returns transformation configuration
    Config() *TransformConfig

    // WithFragmentContext returns a new context with updated fragment context
    WithFragmentContext(fc FragmentContextProvider) TransformContext

    // WITH clause support
    AddWithEntryColumnMapping(name string, columns []*ColumnData)
    GetWithEntryMapping(name string) map[string]string
}
TransformContext provides contextual information for transformations
type TransformResult ¶
type TransformResult struct {
Fragment SQLFragment
}
TransformResult represents the result of a transformation
func NewTransformResult ¶
func NewTransformResult(fragment SQLFragment) *TransformResult
NewTransformResult creates a new transform result
type Transformer ¶
type Transformer[Input, Output any] interface { Transform(input Input, ctx TransformContext) (Output, error) }
Transformer represents a pure transformation from input to output
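A toy implementation sketch showing how the generic interface is satisfied; the behavior is hypothetical and fmt is assumed to be imported.

    // noopStatementTransformer satisfies Transformer[StatementData, SQLFragment].
    type noopStatementTransformer struct{}

    func (noopStatementTransformer) Transform(input StatementData, ctx TransformContext) (SQLFragment, error) {
        // A real transformer dispatches on input.Type and builds a fragment;
        // this placeholder only reports that nothing was produced.
        return nil, fmt.Errorf("no transformer registered for statement type %d", input.Type)
    }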
type TruncateStatement ¶
type TruncateStatement struct {
TableName string
}
func (*TruncateStatement) String ¶
func (s *TruncateStatement) String() string
func (*TruncateStatement) WriteSql ¶
func (s *TruncateStatement) WriteSql(writer *SQLWriter) error
TruncateStatement WriteSql implementation
type TruncateStmtAction ¶
type TruncateStmtAction struct {
// contains filtered or unexported fields
}
func (*TruncateStmtAction) Args ¶
func (a *TruncateStmtAction) Args() []interface{}
func (*TruncateStmtAction) Cleanup ¶
func (a *TruncateStmtAction) Cleanup(ctx context.Context, conn *Conn) error
func (*TruncateStmtAction) ExecContext ¶
func (*TruncateStmtAction) QueryContext ¶
type Type ¶
type Type struct {
    Name string `json:"name"`
    Kind int `json:"kind"`
    SignatureKind types.SignatureArgumentKind `json:"signatureKind"`
    ElementType *Type `json:"elementType"`
    FieldTypes []*NameWithType `json:"fieldTypes"`
}
func (*Type) AvailableAutoIndex ¶
func (*Type) FormatType ¶
func (*Type) FunctionArgumentType ¶
func (t *Type) FunctionArgumentType() (*types.FunctionArgumentType, error)
type UpdateData ¶
type UpdateData struct {
    TableName string `json:"table_name,omitempty"`
    TableScan *ScanData `json:"table_scan,omitempty"`
    SetItems []*SetItemData `json:"set_items,omitempty"`
    FromClause *ScanData `json:"from_clause,omitempty"`
    WhereClause *ExpressionData `json:"where_clause,omitempty"`
}
UpdateData represents UPDATE statement data
type UpdateStatement ¶
type UpdateStatement struct { Table *FromItem SetItems []*SetItem FromClause *FromItem WhereClause *SQLExpression }
func (*UpdateStatement) String ¶
func (u *UpdateStatement) String() string
func (*UpdateStatement) WriteSql ¶
func (u *UpdateStatement) WriteSql(writer *SQLWriter) error
type Value ¶
type Value interface {
    Add(Value) (Value, error)
    Sub(Value) (Value, error)
    Mul(Value) (Value, error)
    Div(Value) (Value, error)
    EQ(Value) (bool, error)
    GT(Value) (bool, error)
    GTE(Value) (bool, error)
    LT(Value) (bool, error)
    LTE(Value) (bool, error)
    ToInt64() (int64, error)
    ToString() (string, error)
    ToBytes() ([]byte, error)
    ToFloat64() (float64, error)
    ToBool() (bool, error)
    ToArray() (*ArrayValue, error)
    ToStruct() (*StructValue, error)
    ToJSON() (string, error)
    ToTime() (time.Time, error)
    ToRat() (*big.Rat, error)
    Format(verb rune) string
    Interface() interface{}
}
func ARRAY_CONCAT ¶
func ARRAY_LENGTH ¶
func ARRAY_LENGTH(v *ArrayValue) (Value, error)
func ARRAY_REVERSE ¶
func ARRAY_REVERSE(v *ArrayValue) (Value, error)
func ARRAY_TO_STRING ¶
func ARRAY_TO_STRING(arr *ArrayValue, delim string, nullText ...string) (Value, error)
func BIT_LEFT_SHIFT ¶
func BIT_RIGHT_SHIFT ¶
func BYTE_LENGTH ¶
func CHAR_LENGTH ¶
func CODE_POINTS_TO_BYTES ¶
func CODE_POINTS_TO_BYTES(v *ArrayValue) (Value, error)
func CODE_POINTS_TO_STRING ¶
func CODE_POINTS_TO_STRING(v *ArrayValue) (Value, error)
func CURRENT_DATE ¶
func CURRENT_DATETIME ¶
func CURRENT_TIME ¶
func CURRENT_TIMESTAMP ¶
func DATE_FROM_UNIX_DATE ¶
func EVAL_JAVASCRIPT ¶
func FARM_FINGERPRINT ¶
func FORMAT_TIMESTAMP ¶
func FROM_BASE32 ¶
func FROM_BASE64 ¶
func GENERATE_UUID ¶
func HLL_COUNT_EXTRACT ¶
func IEEE_DIVIDE ¶
func IGNORE_NULLS ¶
func IS_DISTINCT_FROM ¶
func IS_NOT_DISTINCT_FROM ¶
func JSON_EXTRACT ¶
func JSON_EXTRACT_ARRAY ¶
func JSON_EXTRACT_SCALAR ¶
func JSON_FIELD ¶
func JSON_QUERY ¶
func JSON_QUERY_ARRAY ¶
func JSON_VALUE ¶
func JSON_VALUE_ARRAY ¶
func JUSTIFY_DAYS ¶
func JUSTIFY_DAYS(v *IntervalValue) (Value, error)
func JUSTIFY_HOURS ¶
func JUSTIFY_HOURS(v *IntervalValue) (Value, error)
func JUSTIFY_INTERVAL ¶
func JUSTIFY_INTERVAL(v *IntervalValue) (Value, error)
func MAKE_ARRAY ¶
func MAKE_INTERVAL ¶
func MAKE_STRUCT ¶
func NET_IPV4_FROM_INT64 ¶
func NET_IPV4_TO_INT64 ¶
func NET_IP_FROM_STRING ¶
func NET_IP_NET_MASK ¶
func NET_IP_TO_STRING ¶
func NET_PUBLIC_SUFFIX ¶
func NET_REG_DOMAIN ¶
func NET_SAFE_IP_FROM_STRING ¶
func NORMALIZE_AND_CASEFOLD ¶
func PARSE_BIGNUMERIC ¶
func PARSE_DATE ¶
func PARSE_DATETIME ¶
func PARSE_JSON ¶
func PARSE_NUMERIC ¶
func PARSE_TIME ¶
func PARSE_TIMESTAMP ¶
func RANGE_BUCKET ¶
func RANGE_BUCKET(point Value, array *ArrayValue) (Value, error)
func REGEXP_CONTAINS ¶
func REGEXP_EXTRACT ¶
func REGEXP_INSTR ¶
func REGEXP_REPLACE ¶
func SAFE_DIVIDE ¶
func SAFE_MULTIPLY ¶
func SAFE_NEGATE ¶
func SAFE_SUBTRACT ¶
func SESSION_USER ¶
func STARTS_WITH ¶
func TIMESTAMP_MICROS ¶
func TIMESTAMP_MILLIS ¶
func TIMESTAMP_SECONDS ¶
func TO_CODE_POINTS ¶
func ValueFromGoValue ¶
type ValueLayout ¶
type ValueType ¶
type ValueType string
const (
    IntValueType ValueType = "int64"
    StringValueType ValueType = "string"
    BytesValueType ValueType = "bytes"
    FloatValueType ValueType = "float"
    NumericValueType ValueType = "numeric"
    BigNumericValueType ValueType = "bignumeric"
    BoolValueType ValueType = "bool"
    JsonValueType ValueType = "json"
    ArrayValueType ValueType = "array"
    StructValueType ValueType = "struct"
    DateValueType ValueType = "date"
    DatetimeValueType ValueType = "datetime"
    TimeValueType ValueType = "time"
    TimestampValueType ValueType = "timestamp"
    IntervalValueType ValueType = "interval"
)
type WINDOW_ANY_VALUE ¶
type WINDOW_ANY_VALUE struct { }
func (*WINDOW_ANY_VALUE) Done ¶
func (f *WINDOW_ANY_VALUE) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_ARRAY_AGG ¶
type WINDOW_ARRAY_AGG struct { }
func (*WINDOW_ARRAY_AGG) Done ¶
func (f *WINDOW_ARRAY_AGG) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_AVG ¶
type WINDOW_AVG struct { }
func (*WINDOW_AVG) Done ¶
func (f *WINDOW_AVG) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_CORR ¶
type WINDOW_CORR struct { }
func (*WINDOW_CORR) Done ¶
func (f *WINDOW_CORR) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_COUNT ¶
type WINDOW_COUNT struct { }
func (*WINDOW_COUNT) Done ¶
func (f *WINDOW_COUNT) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_COUNT) Step ¶
func (f *WINDOW_COUNT) Step(values []Value, agg *WindowFuncAggregatedStatus) error
type WINDOW_COUNTIF ¶
type WINDOW_COUNTIF struct { }
func (*WINDOW_COUNTIF) Done ¶
func (f *WINDOW_COUNTIF) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_COUNT_STAR ¶
type WINDOW_COUNT_STAR struct { }
func (*WINDOW_COUNT_STAR) Done ¶
func (f *WINDOW_COUNT_STAR) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_COVAR_POP ¶
type WINDOW_COVAR_POP struct { }
func (*WINDOW_COVAR_POP) Done ¶
func (f *WINDOW_COVAR_POP) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_COVAR_SAMP ¶
type WINDOW_COVAR_SAMP struct { }
func (*WINDOW_COVAR_SAMP) Done ¶
func (f *WINDOW_COVAR_SAMP) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_CUME_DIST ¶
type WINDOW_CUME_DIST struct {
// contains filtered or unexported fields
}
func (*WINDOW_CUME_DIST) Done ¶
func (f *WINDOW_CUME_DIST) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_CUME_DIST) Inverse ¶
func (f *WINDOW_CUME_DIST) Inverse(values []Value, agg *WindowFuncAggregatedStatus) error
func (*WINDOW_CUME_DIST) Step ¶
func (f *WINDOW_CUME_DIST) Step(values []Value, agg *WindowFuncAggregatedStatus) error
type WINDOW_DENSE_RANK ¶
type WINDOW_DENSE_RANK struct {
// contains filtered or unexported fields
}
WINDOW_DENSE_RANK is implemented by deferring windowing to SQLite. See windowFuncFixedRanges["zetasqlite_window_dense_rank"].
func (*WINDOW_DENSE_RANK) Done ¶
func (f *WINDOW_DENSE_RANK) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_DENSE_RANK) Step ¶
func (f *WINDOW_DENSE_RANK) Step(values []Value, agg *WindowFuncAggregatedStatus) error
type WINDOW_FIRST_VALUE ¶
type WINDOW_FIRST_VALUE struct { }
func (*WINDOW_FIRST_VALUE) Done ¶
func (f *WINDOW_FIRST_VALUE) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_LAG ¶
type WINDOW_LAG struct {
// contains filtered or unexported fields
}
func (*WINDOW_LAG) Done ¶
func (f *WINDOW_LAG) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_LAG) ParseArguments ¶
func (f *WINDOW_LAG) ParseArguments(args []Value) error
type WINDOW_LAST_VALUE ¶
type WINDOW_LAST_VALUE struct { }
func (*WINDOW_LAST_VALUE) Done ¶
func (f *WINDOW_LAST_VALUE) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_LEAD ¶
type WINDOW_LEAD struct {
// contains filtered or unexported fields
}
func (*WINDOW_LEAD) Done ¶
func (f *WINDOW_LEAD) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_LEAD) ParseArguments ¶
func (f *WINDOW_LEAD) ParseArguments(args []Value) error
type WINDOW_LOGICAL_AND ¶
type WINDOW_LOGICAL_AND struct { }
func (*WINDOW_LOGICAL_AND) Done ¶
func (f *WINDOW_LOGICAL_AND) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_LOGICAL_OR ¶
type WINDOW_LOGICAL_OR struct { }
func (*WINDOW_LOGICAL_OR) Done ¶
func (f *WINDOW_LOGICAL_OR) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_MAX ¶
type WINDOW_MAX struct { }
func (*WINDOW_MAX) Done ¶
func (f *WINDOW_MAX) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_MIN ¶
type WINDOW_MIN struct { }
func (*WINDOW_MIN) Done ¶
func (f *WINDOW_MIN) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_NTH_VALUE ¶
type WINDOW_NTH_VALUE struct {
// contains filtered or unexported fields
}
func (*WINDOW_NTH_VALUE) Done ¶
func (f *WINDOW_NTH_VALUE) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_NTH_VALUE) ParseArguments ¶
func (f *WINDOW_NTH_VALUE) ParseArguments(args []Value) error
type WINDOW_NTILE ¶
type WINDOW_NTILE struct {
// contains filtered or unexported fields
}
func (*WINDOW_NTILE) Done ¶
func (f *WINDOW_NTILE) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_NTILE) Inverse ¶
func (f *WINDOW_NTILE) Inverse(values []Value, agg *WindowFuncAggregatedStatus) error
func (*WINDOW_NTILE) ParseArguments ¶
func (f *WINDOW_NTILE) ParseArguments(args []Value) error
func (*WINDOW_NTILE) Step ¶
func (f *WINDOW_NTILE) Step(values []Value, agg *WindowFuncAggregatedStatus) error
type WINDOW_PERCENTILE_CONT ¶
type WINDOW_PERCENTILE_CONT struct {
// contains filtered or unexported fields
}
func (*WINDOW_PERCENTILE_CONT) Done ¶
func (f *WINDOW_PERCENTILE_CONT) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_PERCENTILE_CONT) ParseArguments ¶
func (f *WINDOW_PERCENTILE_CONT) ParseArguments(args []Value) error
type WINDOW_PERCENTILE_DISC ¶
type WINDOW_PERCENTILE_DISC struct {
// contains filtered or unexported fields
}
func (*WINDOW_PERCENTILE_DISC) Done ¶
func (f *WINDOW_PERCENTILE_DISC) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_PERCENTILE_DISC) ParseArguments ¶
func (f *WINDOW_PERCENTILE_DISC) ParseArguments(args []Value) error
type WINDOW_PERCENT_RANK ¶
type WINDOW_PERCENT_RANK struct {
// contains filtered or unexported fields
}
func (*WINDOW_PERCENT_RANK) Done ¶
func (f *WINDOW_PERCENT_RANK) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_PERCENT_RANK) Inverse ¶
func (f *WINDOW_PERCENT_RANK) Inverse(args []Value, agg *WindowFuncAggregatedStatus) error
func (*WINDOW_PERCENT_RANK) Step ¶
func (f *WINDOW_PERCENT_RANK) Step(args []Value, agg *WindowFuncAggregatedStatus) error
type WINDOW_RANK ¶
type WINDOW_RANK struct { }
WINDOW_RANK is implemented by deferring windowing to SQLite. See windowFuncFixedRanges["zetasqlite_window_rank"].
func (*WINDOW_RANK) Done ¶
func (f *WINDOW_RANK) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_ROW_NUMBER ¶
type WINDOW_ROW_NUMBER struct { }
func (*WINDOW_ROW_NUMBER) Done ¶
func (f *WINDOW_ROW_NUMBER) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_STDDEV ¶
type WINDOW_STDDEV = WINDOW_STDDEV_SAMP
type WINDOW_STDDEV_POP ¶
type WINDOW_STDDEV_POP struct { }
func (*WINDOW_STDDEV_POP) Done ¶
func (f *WINDOW_STDDEV_POP) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_STDDEV_SAMP ¶
type WINDOW_STDDEV_SAMP struct { }
func (*WINDOW_STDDEV_SAMP) Done ¶
func (f *WINDOW_STDDEV_SAMP) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_STRING_AGG ¶
type WINDOW_STRING_AGG struct {
// contains filtered or unexported fields
}
func (*WINDOW_STRING_AGG) Done ¶
func (f *WINDOW_STRING_AGG) Done(agg *WindowFuncAggregatedStatus) (Value, error)
func (*WINDOW_STRING_AGG) ParseArguments ¶
func (f *WINDOW_STRING_AGG) ParseArguments(args []Value) error
type WINDOW_SUM ¶
type WINDOW_SUM struct { }
func (*WINDOW_SUM) Done ¶
func (f *WINDOW_SUM) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_VARIANCE ¶
type WINDOW_VARIANCE = WINDOW_VAR_SAMP
type WINDOW_VAR_POP ¶
type WINDOW_VAR_POP struct { }
func (*WINDOW_VAR_POP) Done ¶
func (f *WINDOW_VAR_POP) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WINDOW_VAR_SAMP ¶
type WINDOW_VAR_SAMP struct { }
func (*WINDOW_VAR_SAMP) Done ¶
func (f *WINDOW_VAR_SAMP) Done(agg *WindowFuncAggregatedStatus) (Value, error)
type WhenClause ¶
type WhenClause struct { Condition *SQLExpression Result *SQLExpression }
WhenClause represents a WHEN-THEN clause in a CASE expression
func NewWhenClause ¶
func NewWhenClause(condition *SQLExpression, result *SQLExpression) *WhenClause
NewWhenClause creates a new WHEN clause for CASE expressions
type WhenClauseData ¶
type WhenClauseData struct { Condition ExpressionData `json:"condition,omitempty"` Result ExpressionData `json:"result,omitempty"` }
WhenClauseData represents a WHEN clause in CASE expressions
type WildcardTable ¶
type WildcardTable struct {
// contains filtered or unexported fields
}
func (*WildcardTable) AnonymizationInfo ¶
func (t *WildcardTable) AnonymizationInfo() *types.AnonymizationInfo
func (*WildcardTable) CreateEvaluatorTableIterator ¶
func (t *WildcardTable) CreateEvaluatorTableIterator(columnIdxs []int) (*types.EvaluatorTableIterator, error)
func (*WildcardTable) FindColumnByName ¶
func (t *WildcardTable) FindColumnByName(name string) types.Column
func (*WildcardTable) FormatSQL ¶
func (t *WildcardTable) FormatSQL(ctx context.Context) (string, error)
func (*WildcardTable) FullName ¶
func (t *WildcardTable) FullName() string
func (*WildcardTable) IsValueTable ¶
func (t *WildcardTable) IsValueTable() bool
func (*WildcardTable) Name ¶
func (t *WildcardTable) Name() string
func (*WildcardTable) NumColumns ¶
func (t *WildcardTable) NumColumns() int
func (*WildcardTable) PrimaryKey ¶
func (t *WildcardTable) PrimaryKey() []int
func (*WildcardTable) SerializationID ¶
func (t *WildcardTable) SerializationID() int64
func (*WildcardTable) SupportsAnonymization ¶
func (t *WildcardTable) SupportsAnonymization() bool
func (*WildcardTable) TableTypeName ¶
func (t *WildcardTable) TableTypeName(mode types.ProductMode) string
type WindowAggregator ¶
type WindowAggregator struct {
// contains filtered or unexported fields
}
func (*WindowAggregator) Final ¶
func (a *WindowAggregator) Final(ctx *sqlite.FunctionContext)
func (*WindowAggregator) Step ¶
func (a *WindowAggregator) Step(ctx *sqlite.FunctionContext, stepArgs []driver.Value) error
func (*WindowAggregator) WindowInverse ¶
func (a *WindowAggregator) WindowInverse(ctx *sqlite.FunctionContext, stepArgs []driver.Value) error
func (*WindowAggregator) WindowValue ¶
func (a *WindowAggregator) WindowValue(ctx *sqlite.FunctionContext) (driver.Value, error)
type WindowAggregatorMinimumImpl ¶
type WindowAggregatorMinimumImpl interface {
Done(*WindowFuncAggregatedStatus) (Value, error)
}
type WindowBindFunction ¶
type WindowBindFunction func() func(ctx sqlite.FunctionContext) (sqlite.AggregateFunction, error)
type WindowFuncAggregatedStatus ¶
type WindowFuncAggregatedStatus struct {
    Values []Value
    // contains filtered or unexported fields
}
func (*WindowFuncAggregatedStatus) Distinct ¶
func (s *WindowFuncAggregatedStatus) Distinct() bool
func (*WindowFuncAggregatedStatus) IgnoreNulls ¶
func (s *WindowFuncAggregatedStatus) IgnoreNulls() bool
func (*WindowFuncAggregatedStatus) Inverse ¶
func (s *WindowFuncAggregatedStatus) Inverse(value Value) error
Inverse removes the oldest entry of a value from the window
func (*WindowFuncAggregatedStatus) RelevantValues ¶
func (s *WindowFuncAggregatedStatus) RelevantValues() ([]Value, error)
RelevantValues retrieves the list of values in the window, respecting both IgnoreNulls and Distinct options
func (*WindowFuncAggregatedStatus) Step ¶
func (s *WindowFuncAggregatedStatus) Step(value Value) error
Step adds a value to the window
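A sketch of the sliding-window pattern these methods support; agg is assumed to be supplied by the window-function machinery rather than constructed directly.

    // slideWindow adds the newest value, drops the oldest, and returns the
    // values currently relevant to the frame (respecting DISTINCT / IGNORE NULLS).
    func slideWindow(agg *WindowFuncAggregatedStatus, incoming, outgoing Value) ([]Value, error) {
        if err := agg.Step(incoming); err != nil {
            return nil, err
        }
        if err := agg.Inverse(outgoing); err != nil {
            return nil, err
        }
        return agg.RelevantValues()
    }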
type WindowFuncInfo ¶
type WindowFuncInfo struct { Name string BindFunc WindowBindFunction }
type WindowSpecification ¶
type WindowSpecification struct { PartitionBy []*SQLExpression OrderBy []*OrderByItem FrameClause *FrameClause }
WindowSpecification represents OVER clause specifications
func (*WindowSpecification) WriteSql ¶
func (w *WindowSpecification) WriteSql(writer *SQLWriter) error
type WindowSpecificationData ¶
type WindowSpecificationData struct { PartitionBy []*ExpressionData `json:"partition_by,omitempty"` OrderBy []*OrderByItemData `json:"order_by,omitempty"` FrameClause *FrameClauseData `json:"frame_clause,omitempty"` }
type WithClause ¶
type WithClause struct { Name string Columns []string Query *SelectStatement }
WithClause represents CTE (Common Table Expression) definitions
func (*WithClause) String ¶
func (w *WithClause) String() string
func (*WithClause) WriteSql ¶
func (w *WithClause) WriteSql(writer *SQLWriter) error
type WithEntryData ¶
type WithEntryData struct { WithQueryName string `json:"with_query_name,omitempty"` WithSubquery ScanData `json:"with_subquery,omitempty"` ColumnList []*ColumnData `json:"column_list,omitempty"` }
WithEntryData represents individual WITH entry data (CTE definitions)
type WithEntryTransformer ¶
type WithEntryTransformer struct {
// contains filtered or unexported fields
}
WithEntryTransformer handles WITH entry transformations (CTE definitions) from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a WithEntry represents a single Common Table Expression (CTE) definition within a WITH clause. Each entry defines a named temporary result set with a specific column list that can be referenced by name in subsequent CTEs or the main query.
The transformer converts ZetaSQL WithEntry nodes into SQLite WITH clause definitions by: - Transforming the subquery that defines the CTE's content - Registering the CTE name and column mappings in the transform context - Creating a WithClause structure for inclusion in the parent WITH statement - Managing scope and visibility for CTE references
This enables proper name resolution when the CTE is referenced later in the query, following SQL's lexical scoping rules for Common Table Expressions.
func NewWithEntryTransformer ¶
func NewWithEntryTransformer(coordinator Coordinator) *WithEntryTransformer
NewWithEntryTransformer creates a new WITH entry transformer
func (*WithEntryTransformer) Transform ¶
func (t *WithEntryTransformer) Transform(data ScanData, ctx TransformContext) (*WithClause, error)
Transform converts WithEntryData to WithClause for use in SELECT statements
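A hedged sketch of how an enclosing statement transformer might drive this type: the Coordinator, ScanData values, and TransformContext are assumed to be supplied by the surrounding query transformation, and the helper name is illustrative only.

// transformWithEntries is a hypothetical helper: it converts each WITH
// entry's ScanData into a WithClause and collects the results for use in
// the parent WITH statement.
func transformWithEntries(coord Coordinator, entries []ScanData, ctx TransformContext) ([]*WithClause, error) {
	transformer := NewWithEntryTransformer(coord)
	clauses := make([]*WithClause, 0, len(entries))
	for _, entry := range entries {
		clause, err := transformer.Transform(entry, ctx)
		if err != nil {
			return nil, err
		}
		clauses = append(clauses, clause)
	}
	return clauses, nil
}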
type WithRefScanData ¶
type WithRefScanData struct {
	WithQueryName string        `json:"with_query_name,omitempty"`
	ColumnList    []*ColumnData `json:"column_list,omitempty"`
}
WithRefScanData represents WITH reference scan data (references to CTEs)
type WithRefScanTransformer ¶
type WithRefScanTransformer struct {
// contains filtered or unexported fields
}
WithRefScanTransformer handles WITH reference scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a WithRefScan represents a reference to a previously defined Common Table Expression (CTE) within a WITH clause. This allows queries to reference named temporary result sets by name, following SQL's lexical scoping rules.
The transformer converts ZetaSQL WithRefScan nodes by:
- Creating a table reference to the CTE by its name
- Retrieving stored column mappings from the transform context
- Building a SELECT statement that maps CTE columns to output columns with proper aliases
- Ensuring column names match the CTE definition through mapping resolution
The fragment context maintains the mapping between CTE names and their column definitions, enabling proper name resolution when the CTE is referenced.
func NewWithRefScanTransformer ¶
func NewWithRefScanTransformer(coordinator Coordinator) *WithRefScanTransformer
NewWithRefScanTransformer creates a new WITH reference scan transformer
func (*WithRefScanTransformer) Transform ¶
func (t *WithRefScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts WithRefScanData to FromItem that references a CTE
type WithScanData ¶
type WithScanData struct {
	WithEntryList []*WithEntryData `json:"with_entry_list,omitempty"`
	Query         ScanData         `json:"query,omitempty"`
	ColumnList    []*ColumnData    `json:"column_list,omitempty"`
}
WithScanData represents WITH scan data (complete WITH statements)
type WithScanTransformer ¶
type WithScanTransformer struct {
// contains filtered or unexported fields
}
WithScanTransformer handles WITH scan transformations from ZetaSQL to SQLite.
In BigQuery/ZetaSQL, a WithScan represents a complete WITH statement (Common Table Expression) that defines one or more named temporary result sets that can be referenced in the main query. This enables recursive queries, query organization, and performance optimization.
The transformer converts ZetaSQL WithScan nodes into SQLite WITH clauses by:
- Processing all WITH entry definitions into CTE declarations
- Recursively transforming each WITH entry's subquery
- Transforming the main query that references the CTEs
- Ensuring proper scoping and name resolution across CTE boundaries
Each WITH entry becomes a named subquery that can be referenced by name in subsequent WITH entries or the main query, following SQL's lexical scoping rules.
func NewWithScanTransformer ¶
func NewWithScanTransformer(coordinator Coordinator) *WithScanTransformer
NewWithScanTransformer creates a new WITH scan transformer
func (*WithScanTransformer) Transform ¶
func (t *WithScanTransformer) Transform(data ScanData, ctx TransformContext) (*FromItem, error)
Transform converts WithScanData to FromItem with WITH clauses
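A hedged sketch of the top-level entry point for a complete WITH statement: as with the other transformers, the Coordinator, ScanData, and TransformContext are assumed to be provided by the enclosing statement transformation, and the helper name is illustrative only.

// transformWithStatement is a hypothetical helper: it hands a WithScan's
// ScanData (the CTE definitions plus the main query) to a WithScanTransformer
// and returns the FromItem carrying the resulting WITH clauses.
func transformWithStatement(coord Coordinator, withScan ScanData, ctx TransformContext) (*FromItem, error) {
	return NewWithScanTransformer(coord).Transform(withScan, ctx)
}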
Source Files ¶
- analyzer.go
- catalog.go
- codec.go
- conn.go
- context.go
- coordinator_coordinator.go
- coordinator_extractor.go
- decoder.go
- encoder.go
- error.go
- formatter.go
- formatter_expressions.go
- formatter_functions.go
- formatter_scans.go
- formatter_scopes.go
- formatter_statements.go
- function.go
- function_aggregate.go
- function_aggregate_option.go
- function_array.go
- function_bind.go
- function_bit.go
- function_date.go
- function_datetime.go
- function_format.go
- function_hash.go
- function_interval.go
- function_javascript.go
- function_json.go
- function_math.go
- function_net.go
- function_numeric.go
- function_register.go
- function_security.go
- function_string.go
- function_time.go
- function_time_parser.go
- function_timestamp.go
- function_window.go
- function_window_option.go
- name_path.go
- node.go
- querybuilder_context.go
- querybuilder_factory.go
- result.go
- rows.go
- spec.go
- sqlbuilder_sqlbuilder.go
- stmt.go
- stmt_action.go
- transformer_cast.go
- transformer_column_ref.go
- transformer_function.go
- transformer_literal.go
- transformer_parameter.go
- transformer_scan_aggregate.go
- transformer_scan_analytic.go
- transformer_scan_array.go
- transformer_scan_filter.go
- transformer_scan_join.go
- transformer_scan_limit.go
- transformer_scan_orderby.go
- transformer_scan_project.go
- transformer_scan_setop.go
- transformer_scan_single_row.go
- transformer_scan_table.go
- transformer_scan_with.go
- transformer_scan_with_ref.go
- transformer_stmt_create_table_as_select.go
- transformer_stmt_create_view.go
- transformer_stmt_dml.go
- transformer_stmt_drop.go
- transformer_stmt_merge.go
- transformer_stmt_query.go
- transformer_subquery.go
- transformer_with_entry.go
- types_config.go
- types_data.go
- types_interfaces.go
- util.go
- value.go
- wildcard_table.go