stats package

v0.69.0
Published: Aug 13, 2025 License: Apache-2.0 Imports: 26 Imported by: 12

Documentation

Overview

Package stats contains the logic to process APM stats.

Index

Constants

This section is empty.

Variables

var KindsComputed = map[string]struct{}{
	"server":   {},
	"consumer": {},
	"client":   {},
	"producer": {},
}

KindsComputed is the set of span kinds that will have stats computed for them when computeStatsByKind is enabled in the concentrator.
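Since KindsComputed is a set keyed by span kind, membership is checked with the comma-ok idiom; a minimal sketch, where the spanKind value is hypothetical:

spanKind := "client"
if _, ok := stats.KindsComputed[spanKind]; ok {
	// This span kind will have stats computed when
	// computeStatsByKind is enabled in the concentrator.
}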

Functions

This section is empty.

Types

type Aggregation

type Aggregation struct {
	BucketsAggregationKey
	PayloadAggregationKey
}

Aggregation contains all the dimensions on which we aggregate statistics.

func NewAggregationFromGroup

func NewAggregationFromGroup(g *pb.ClientGroupedStats) Aggregation

NewAggregationFromGroup gets the Aggregation key of grouped stats.

func NewAggregationFromSpan

func NewAggregationFromSpan(s *StatSpan, origin string, aggKey PayloadAggregationKey) Aggregation

NewAggregationFromSpan creates a new Aggregation from the provided span, origin, and payload aggregation key.

type BucketsAggregationKey

type BucketsAggregationKey struct {
	Service        string
	Name           string
	Resource       string
	Type           string
	SpanKind       string
	StatusCode     uint32
	Synthetics     bool
	PeerTagsHash   uint64
	IsTraceRoot    pb.Trilean
	GRPCStatusCode string
}

BucketsAggregationKey specifies the key by which a bucket is aggregated.

type ClientStatsAggregator

type ClientStatsAggregator struct {
	In chan *pb.ClientStatsPayload
	// contains filtered or unexported fields
}

ClientStatsAggregator aggregates client stats payloads into buckets of bucketDuration. If a single payload is received in a bucket, the aggregator acts as a passthrough. If two or more payloads collide, their counts are aggregated into one bucket and multiple payloads are sent:
- The original payloads, with their distributions, are sent with their counts zeroed.
- A single payload carrying the aggregated bucket counts is sent.
Together with the aggregator's timestamp alignment, this ensures that counts have at most one point per second per agent for a given granularity; distributions, by contrast, are not tied to the agent.

func NewClientStatsAggregator

func NewClientStatsAggregator(conf *config.AgentConfig, writer Writer, statsd statsd.ClientInterface) *ClientStatsAggregator

NewClientStatsAggregator initializes a new aggregator, ready to be started.

func (*ClientStatsAggregator) Start

func (a *ClientStatsAggregator) Start()

Start starts the aggregator.

func (*ClientStatsAggregator) Stop

func (a *ClientStatsAggregator) Stop()

Stop stops the aggregator. Calling Stop twice will panic.
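A minimal lifecycle sketch, assuming conf comes from config.New(), writer is an existing Writer implementation, payload is a *pb.ClientStatsPayload received from a tracer, and statsd.NoOpClient is the no-op client from github.com/DataDog/datadog-go/v5/statsd:

a := stats.NewClientStatsAggregator(conf, writer, &statsd.NoOpClient{})
a.Start()
a.In <- payload // colliding payloads in the same bucket get their counts merged
// ...
a.Stop() // must be called exactly once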

type Concentrator

type Concentrator struct {
	Writer Writer
	// contains filtered or unexported fields
}

Concentrator produces time-bucketed statistics from a stream of raw traces. https://en.wikipedia.org/wiki/Knelson_concentrator It ingests a huge volume of traces and outputs pre-computed data structures that make it possible to find the gold (stats) amongst the traces.

func NewConcentrator

func NewConcentrator(conf *config.AgentConfig, writer Writer, now time.Time, statsd statsd.ClientInterface) *Concentrator

NewConcentrator initializes a new concentrator, ready to be started.

func (*Concentrator) Add

func (c *Concentrator) Add(t Input)

Add applies the given input to the concentrator.

func (*Concentrator) Flush

func (c *Concentrator) Flush(force bool) *pb.StatsPayload

Flush deletes and returns complete statistics buckets. If force is true, all buckets are flushed, even incomplete ones.

func (*Concentrator) Run

func (c *Concentrator) Run()

Run runs the main loop of the concentrator goroutine. Traces are received through `Add`; this loop only deals with flushing.

func (*Concentrator) Start

func (c *Concentrator) Start()

Start starts the concentrator.

func (*Concentrator) Stop

func (c *Concentrator) Stop()

Stop stops the main Run loop.
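A minimal lifecycle sketch, assuming conf and writer are already set up; Start launches the Run loop, which handles periodic flushing:

c := stats.NewConcentrator(conf, writer, time.Now(), &statsd.NoOpClient{})
c.Start()
c.Add(input) // input is a stats.Input; see NewStatsInput below
// ...
c.Stop()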

type Input

type Input struct {
	Traces        []traceutil.ProcessedTrace
	ContainerID   string
	ContainerTags []string
	ProcessTags   string
}

Input specifies a set of traces originating from a certain payload.

func NewStatsInput

func NewStatsInput(numChunks int, containerID string, clientComputedStats bool, processTags string) Input

NewStatsInput allocates a stats input for an incoming trace payload.
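A minimal sketch of filling an Input, assuming chunks holds the payload's *pb.TraceChunk values, c is a started Concentrator, and the container ID is hypothetical; traceutil.GetRoot is assumed to locate each chunk's root span:

in := stats.NewStatsInput(len(chunks), "my-container-id", false, "")
for _, chunk := range chunks {
	in.Traces = append(in.Traces, traceutil.ProcessedTrace{
		TraceChunk: chunk,
		Root:       traceutil.GetRoot(chunk.Spans),
	})
}
c.Add(in)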

func OTLPTracesToConcentratorInputs added in v0.55.0

func OTLPTracesToConcentratorInputs(
	traces ptrace.Traces,
	conf *config.AgentConfig,
	containerTagKeys []string,
	peerTagKeys []string,
) []Input

OTLPTracesToConcentratorInputs converts eligible OTLP spans to Concentrator.Input. The converted Inputs contain only the minimal set of fields needed for APM stats calculation and are only meant to be used in Concentrator.Add(); do not use them for other purposes.
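A minimal sketch, assuming traces is a ptrace.Traces batch, c is a started Concentrator, and the tag-key lists are illustrative:

inputs := stats.OTLPTracesToConcentratorInputs(traces, conf,
	[]string{"container_id"}, []string{"peer.service"})
for _, in := range inputs {
	c.Add(in)
}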

func OTLPTracesToConcentratorInputsWithObfuscation added in v0.63.0

func OTLPTracesToConcentratorInputsWithObfuscation(
	traces ptrace.Traces,
	conf *config.AgentConfig,
	containerTagKeys []string,
	peerTagKeys []string,
	obfuscator *obfuscate.Obfuscator,
) []Input

OTLPTracesToConcentratorInputsWithObfuscation converts eligible OTLP spans to Concentrator.Input. The converted Inputs contain only the minimal set of fields needed for APM stats calculation and are only meant to be used in Concentrator.Add(); do not use them for other purposes. This function obfuscates spans prior to stats calculation; datadogconnector will migrate to it once it is published as part of the latest pkg/trace module.
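The same flow with obfuscation enabled, sketched with a default-configured obfuscator from pkg/obfuscate:

o := obfuscate.NewObfuscator(obfuscate.Config{})
inputs := stats.OTLPTracesToConcentratorInputsWithObfuscation(
	traces, conf, []string{"container_id"}, []string{"peer.service"}, o)
for _, in := range inputs {
	c.Add(in)
}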

type PayloadAggregationKey

type PayloadAggregationKey struct {
	Env             string
	Hostname        string
	Version         string
	ContainerID     string
	GitCommitSha    string
	ImageTag        string
	Lang            string
	ProcessTagsHash uint64
}

PayloadAggregationKey specifies the key by which a payload is aggregated.

type RawBucket

type RawBucket struct {
	// contains filtered or unexported fields
}

RawBucket is used to compute span data and aggregate it within a time-framed bucket. It should not be used outside the agent; use ClientStatsBucket instead.

func NewRawBucket

func NewRawBucket(ts, d uint64) *RawBucket

NewRawBucket opens a new calculation bucket for time ts and duration d, and initializes it properly.

func (*RawBucket) Export

func (sb *RawBucket) Export() map[PayloadAggregationKey]*pb.ClientStatsBucket

Export transforms a RawBucket into a ClientStatsBucket, typically used before communicating data to the API, as RawBucket is the internal type while ClientStatsBucket is the public, shared one.

func (*RawBucket) HandleSpan

func (sb *RawBucket) HandleSpan(s *StatSpan, weight float64, origin string, aggKey PayloadAggregationKey)

HandleSpan adds the span to this bucket's stats, aggregated at the finest grain matching the given aggregators.
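A minimal sketch of the RawBucket flow, assuming statSpan was built by a SpanConcentrator and that the weight, origin, and key values are hypothetical:

ts := uint64(time.Now().UnixNano())
sb := stats.NewRawBucket(ts, uint64((10 * time.Second).Nanoseconds()))
sb.HandleSpan(statSpan, 1.0, "", stats.PayloadAggregationKey{Env: "prod"})
buckets := sb.Export() // the exported, public ClientStatsBucket form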

type SpanConcentrator added in v0.57.0

type SpanConcentrator struct {
	// contains filtered or unexported fields
}

SpanConcentrator produces time bucketed statistics from a stream of raw spans.

func NewSpanConcentrator added in v0.57.0

func NewSpanConcentrator(cfg *SpanConcentratorConfig, now time.Time) *SpanConcentrator

NewSpanConcentrator builds a new SpanConcentrator object.

func (*SpanConcentrator) AddSpan added in v0.57.0

func (sc *SpanConcentrator) AddSpan(s *StatSpan, aggKey PayloadAggregationKey, containerID string, containerTags []string, origin string)

AddSpan adds a span to the SpanConcentrator, appending the new data to the appropriate internal bucket. TODO(raphael): migrate the dd-trace-go API to not depend on containerID/containerTags, and add processTags at the encoding layer.

func (*SpanConcentrator) Flush added in v0.57.0

func (sc *SpanConcentrator) Flush(now int64, force bool) []*pb.ClientStatsPayload

Flush deletes and returns complete ClientStatsPayloads. If force is true, all buckets are flushed, even incomplete ones.

func (*SpanConcentrator) NewStatSpan added in v0.57.0

func (sc *SpanConcentrator) NewStatSpan(
	service, resource, name string,
	typ string,
	parentID uint64,
	start, duration int64,
	error int32,
	meta map[string]string,
	metrics map[string]float64,
	peerTags []string,
) (statSpan *StatSpan, ok bool)

NewStatSpan builds a StatSpan from the fields required for stats calculation. peerTags is the configured list of peer tags to look for. It returns (nil, false) if the provided fields indicate that the span should not have stats calculated.

func (*SpanConcentrator) NewStatSpanFromPB added in v0.57.0

func (sc *SpanConcentrator) NewStatSpanFromPB(s *pb.Span, peerTags []string) (statSpan *StatSpan, ok bool)

NewStatSpanFromPB is a helper version of NewStatSpan that builds a StatSpan from a pb.Span.

type SpanConcentratorConfig added in v0.57.0

type SpanConcentratorConfig struct {
	// ComputeStatsBySpanKind enables/disables the computing of stats based on a span's `span.kind` field
	ComputeStatsBySpanKind bool
	// BucketInterval is the size of each pre-aggregation bucket
	BucketInterval int64
}

SpanConcentratorConfig exposes configuration options for a SpanConcentrator.
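Putting the pieces together, a minimal end-to-end sketch: configure a SpanConcentrator, build a StatSpan from raw fields, add it, and force-flush. All field values are hypothetical, and BucketInterval is assumed to be in nanoseconds:

sc := stats.NewSpanConcentrator(&stats.SpanConcentratorConfig{
	ComputeStatsBySpanKind: true,
	BucketInterval:         (10 * time.Second).Nanoseconds(),
}, time.Now())
statSpan, ok := sc.NewStatSpan(
	"web-store", "GET /checkout", "http.request", // service, resource, name
	"web", // type
	0,     // parentID: 0 marks a root span
	time.Now().UnixNano(),                 // start
	(50 * time.Millisecond).Nanoseconds(), // duration
	0,                                     // error
	map[string]string{"span.kind": "server"},
	nil, // metrics
	nil, // peer tags to look for
)
if ok {
	sc.AddSpan(statSpan, stats.PayloadAggregationKey{Env: "prod"}, "", nil, "")
}
payloads := sc.Flush(time.Now().UnixNano(), true) // force-flush all buckets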

type StatSpan added in v0.57.0

type StatSpan struct {
	// contains filtered or unexported fields
}

StatSpan holds all the fields from a span that are needed to calculate stats.

type Writer added in v0.56.0

type Writer interface {
	// Write this payload
	Write(*pb.StatsPayload)
}

Writer is an interface for anything that can write stats payloads.
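Writer has a single method, so a stub is easy to provide; a minimal, hypothetical logging implementation:

type logWriter struct{}

// Write logs a summary of each flushed payload instead of shipping it.
func (logWriter) Write(p *pb.StatsPayload) {
	log.Printf("flushed %d client stats payloads", len(p.Stats))
}

Such a value can be passed as the writer argument to NewConcentrator or NewClientStatsAggregator.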

Directories

oteltest (module)
