Documentation
Index
- Variables
- func MonotonicallyIncreasingNextLeaf() func(uint64) uint64
- func NewController(h *Hammer, a *HammerAnalyser) *tuiController
- func NewLogClients(readLogURLs, writeLogURLs []string, opts ClientOpts) (LogReader, LeafWriter, error)
- func RandomNextLeaf() func(uint64) uint64
- type ClientOpts
- type Hammer
- type HammerAnalyser
- type HammerOpts
- type LeafReader
- type LeafTime
- type LeafWriter
- type LogReader
- type LogWriter
- type Throttle
- type Worker
- type WorkerPool
Constants
This section is empty.
Variables ¶
var ErrRetry = errors.New("retry")
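The conditions under which ErrRetry is returned are not documented on this page. A plausible caller-side pattern, assuming write implementations wrap it when an operation should be re-attempted, is to test for it with errors.Is; writeOnce below is a hypothetical write function used only for illustration:

	idx, err := writeOnce(ctx, data)
	for attempts := 0; err != nil && errors.Is(err, ErrRetry) && attempts < 3; attempts++ {
		time.Sleep(100 * time.Millisecond) // arbitrary backoff
		idx, err = writeOnce(ctx, data)
	}
	if err != nil {
		log.Fatalf("write failed: %v", err)
	}
	_ = idx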
Functions
func MonotonicallyIncreasingNextLeaf
func MonotonicallyIncreasingNextLeaf() func(uint64) uint64
MonotonicallyIncreasingNextLeaf returns a function that always wants the next available leaf after the one it previously fetched. It starts at leaf 0.
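Because next-leaf strategies are plain functions, callers of NewLeafReader can also supply their own. As a sketch, a function behaviourally equivalent to the documented contract (start at leaf 0, then always request the leaf after the previously fetched one; this simplified version ignores the tree-size argument, so callers must ensure indices stay in range):

	func monotonicNext() func(uint64) uint64 {
		var i uint64
		return func(_ uint64) uint64 {
			n := i
			i++
			return n
		}
	}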
func NewController
func NewController(h *Hammer, a *HammerAnalyser) *tuiController
func NewLogClients
func NewLogClients(readLogURLs, writeLogURLs []string, opts ClientOpts) (LogReader, LeafWriter, error)
NewLogClients returns a fetcher and a writer that will read and write leaves to all logs in the `log_url` flag set.
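A minimal usage sketch; the URLs are placeholders, and passing a zero-value ClientOpts is an assumption since its fields are not shown on this page:

	reader, writer, err := NewLogClients(
		[]string{"https://log.example.com/"}, // hypothetical read endpoint
		[]string{"https://log.example.com/"}, // hypothetical write endpoint
		ClientOpts{},                         // assumption: zero value is usable
	)
	if err != nil {
		log.Fatalf("failed to create log clients: %v", err)
	}
	_, _ = reader, writer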
func RandomNextLeaf
func RandomNextLeaf() func(uint64) uint64
RandomNextLeaf returns a function that fetches a random leaf available in the tree.
Types
type ClientOpts
type Hammer
type Hammer struct {
// contains filtered or unexported fields
}
Hammer is responsible for coordinating the operations against the log in the form of write and read operations. The work of analysing the results of hammering should live outside of this type.
func NewHammer
func NewHammer(tracker *client.LogStateTracker, f client.EntryBundleFetcherFunc, w LeafWriter, gen func() []byte, seqLeafChan chan<- LeafTime, errChan chan<- error, opts HammerOpts) *Hammer
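A sketch of wiring a Hammer to a HammerAnalyser (described below) so that sequencing results and errors flow into the analyser. tracker, fetch, writer, and gen are assumed to have been created elsewhere, and HammerOpts is passed as its zero value since its fields are not shown on this page:

	analyser := NewHammerAnalyser(func() uint64 {
		// Assumption: report the current tree size from the tracker's
		// latest consistent checkpoint.
		return tracker.LatestConsistent.Size
	})
	go analyser.Run(ctx)

	h := NewHammer(tracker, fetch, writer, gen, analyser.SeqLeafChan, analyser.ErrChan, HammerOpts{})
	_ = h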
type HammerAnalyser
type HammerAnalyser struct {
	SeqLeafChan     chan LeafTime
	ErrChan         chan error
	QueueTime       *movingaverage.ConcurrentMovingAverage
	IntegrationTime *movingaverage.ConcurrentMovingAverage
	// contains filtered or unexported fields
}
HammerAnalyser is responsible for measuring and interpreting the result of hammering.
func NewHammerAnalyser
func NewHammerAnalyser(treeSizeFn func() uint64) *HammerAnalyser
func (*HammerAnalyser) Run
func (a *HammerAnalyser) Run(ctx context.Context)
type HammerOpts
type LeafReader
type LeafReader struct {
// contains filtered or unexported fields
}
LeafReader reads leaves from the tree. This type is not thread-safe.
func NewLeafReader
func NewLeafReader(tracker *client.LogStateTracker, f client.EntryBundleFetcherFunc, next func(uint64) uint64, throttle <-chan bool, errChan chan<- error) *LeafReader
NewLeafReader creates a LeafReader. The next function provides a strategy for which leaves will be read. Custom implementations can be passed, or use RandomNextLeaf or MonotonicallyIncreasingNextLeaf.
func (*LeafReader) Kill
func (r *LeafReader) Kill()
Kill stops this leaf reader at the next opportune moment. This function may return before the reader is dead.
func (*LeafReader) Run
func (r *LeafReader) Run(ctx context.Context)
Run runs the leaf reader. This should be called in a goroutine.
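Putting the pieces together; tracker and fetch are assumed to come from the client package, and the channel buffer sizes are arbitrary:

	throttle := make(chan bool, 10) // each token permits one read
	errs := make(chan error, 10)

	r := NewLeafReader(tracker, fetch, RandomNextLeaf(), throttle, errs)
	go r.Run(ctx)
	// ...
	r.Kill() // stop reading at the next opportunity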
type LeafTime
LeafTime records the time at which a leaf was assigned the given index.
This is used when sampling leaves which are added, in order to later calculate how long it took for them to become integrated.
type LeafWriter
LeafWriter is the signature of a function which can write arbitrary data to a log. The data to be written is provided, and the implementation must return the sequence number at which this data will be found in the log, or an error.
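LeafWriter's concrete signature is not reproduced on this page. Assuming it is a function taking a context and the leaf data and returning the assigned index, a purely illustrative in-memory stub for tests might look like:

	// fakeLeafWriter is hypothetical: it assigns sequence numbers from a
	// local counter instead of talking to a real log.
	func fakeLeafWriter() func(ctx context.Context, data []byte) (uint64, error) {
		var (
			mu   sync.Mutex
			next uint64
		)
		return func(context.Context, []byte) (uint64, error) {
			mu.Lock()
			defer mu.Unlock()
			n := next
			next++
			return n, nil
		}
	}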
type LogWriter
type LogWriter struct {
// contains filtered or unexported fields
}
LogWriter writes new leaves, generated by `gen`, to the log.
func NewLogWriter
func NewLogWriter(writer LeafWriter, gen func() []byte, throttle <-chan bool, errChan chan<- error, leafSampleChan chan<- LeafTime) *LogWriter
NewLogWriter creates a LogWriter. writer is the LeafWriter used to add new leaves to the log, and gen is a function that generates the leaves to add.
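A construction sketch; leafWriter is assumed to be the LeafWriter obtained from NewLogClients, and the random 8-byte payload is arbitrary:

	gen := func() []byte {
		b := make([]byte, 8)
		_, _ = rand.Read(b) // crypto/rand; error ignored for brevity
		return b
	}
	throttle := make(chan bool, 10)
	errs := make(chan error, 10)
	samples := make(chan LeafTime, 100)

	w := NewLogWriter(leafWriter, gen, throttle, errs, samples)
	_ = w // typically handed to a WorkerPool to be run (an assumption)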
type Throttle
type Throttle struct {
	TokenChan chan bool
	// contains filtered or unexported fields
}
func NewThrottle
type WorkerPool
type WorkerPool struct {
// contains filtered or unexported fields
}
WorkerPool contains a collection of _running_ workers.
func NewWorkerPool
func NewWorkerPool(factory func() Worker) WorkerPool
NewWorkerPool creates a simple pool of workers.
This works well enough for the simple task we ask of it at the moment. If we find ourselves adding more features to this, consider swapping it for a library such as https://github.com/alitto/pond.
func (*WorkerPool) Grow
func (p *WorkerPool) Grow(ctx context.Context)
func (*WorkerPool) Shrink
func (p *WorkerPool) Shrink(ctx context.Context)
func (*WorkerPool) Size
func (p *WorkerPool) Size() int
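A usage sketch; the Worker interface is not shown on this page, so the assumption here is that LeafReader satisfies it, and that a new pool starts with no running workers:

	pool := NewWorkerPool(func() Worker {
		return NewLeafReader(tracker, fetch, RandomNextLeaf(), throttle, errs)
	})
	pool.Grow(ctx) // start an additional worker
	pool.Grow(ctx)
	fmt.Println(pool.Size()) // 2, assuming the pool started empty
	pool.Shrink(ctx)         // ask one worker to stop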