ccache

package module
v0.0.0-...-b96cecb
Published: Jan 6, 2025 License: MIT Imports: 7 Imported by: 0

README

CCache: an LRU cache focused on high concurrency


CCache is an LRU cache, written in Go, focused on supporting high concurrency.

Lock contention on the list is reduced by:

  • introducing a window that limits how frequently an item can be promoted
  • using a buffered channel to queue promotions for a single worker
  • performing garbage collection in that same worker thread

Unless otherwise stated, all methods are thread-safe.
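
Because all methods are thread-safe, a single cache instance can be shared across goroutines without external locking. A minimal sketch (the key names and value type are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/darkit/ccache"
)

func main() {
	cache := ccache.New(ccache.Configure[int]())
	defer cache.Stop()

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			key := fmt.Sprintf("counter:%d", n)
			cache.Set(key, n, time.Minute) // concurrent writes are safe
			if item := cache.Get(key); item != nil {
				_ = item.Value() // concurrent reads are safe
			}
		}(i)
	}
	wg.Wait()
}
```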

Configuration

Basic configuration

Import the package and create a Cache instance:

import (
  "github.com/darkit/ccache"
)

// create a cache that stores string values
var cache = ccache.New(ccache.Configure[string]())

Configure provides a fluent (chainable) API:

// create a cache that stores int values
var cache = ccache.New(ccache.Configure[int]().MaxSize(1000).ItemsToPrune(100))
Configuration options

The options you are most likely to tweak are:

  • MaxSize(int) - the maximum number of items to store in the cache (default: 10000)
  • GetsPerPromote(int) - the number of times an item must be fetched before being promoted (default: 3)
  • ItemsToPrune(int) - the number of items to prune when MaxSize is reached (default: 1000)

Options that change the cache's internals and rarely need adjusting:

  • Buckets - ccache shards its internal map to provide greater concurrency (default: 16)
  • PromoteBuffer(int) - the size of the buffer used to queue promotions (default: 1024)
  • DeleteBuffer(int) - the size of the buffer used to queue deletions (default: 1024)
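
Combining the options above, a fully chained configuration might look like this (the values are purely illustrative, not recommendations):

```go
cache := ccache.New(ccache.Configure[string]().
	MaxSize(5000).     // evict once the cache holds 5000 units of size
	ItemsToPrune(50).  // prune 50 items when MaxSize is reached
	GetsPerPromote(3). // promote an item after every 3 gets
	Buckets(16).       // must be a power of 2
	PromoteBuffer(1024).
	DeleteBuffer(1024))
```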

Basic operations

Get

Retrieve a value from the cache:

item := cache.Get("user:4")
if item == nil {
  // handle the nil case
} else {
  user := item.Value()
}

The returned *Item exposes several methods:

  • Value() T - the cached value
  • Expired() bool - whether the item has expired
  • TTL() time.Duration - the duration until the item expires
  • Expires() time.Time - the time at which the item will expire
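These methods make it possible to distinguish a missing key from a stale one, since Get can return an expired item (a sketch; the key name is illustrative):

```go
item := cache.Get("user:4")
switch {
case item == nil:
	// never cached, or already evicted
case item.Expired():
	// still present but stale; TTL() is negative here
	fmt.Printf("expired %v ago\n", -item.TTL())
default:
	fmt.Printf("valid until %v\n", item.Expires())
}
```
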
Set

Set a value in the cache:

cache.Set("user:4", user, time.Minute * 10)
Delete

Delete a cached value:

cache.Delete("user:4")
Has

Check whether a key exists:

value, exists := cache.Has("user:4")
if exists {
    // use value
} else {
    // handle the missing key
}

Advanced operations

Fetch

Fetch a value, computing and caching it on a miss:

item, err := cache.Fetch("user:4", time.Minute * 10, func() (*User, error) {
    // executed on a cache miss
    user, err := db.GetUser(4)
    if err != nil {
        return nil, err
    }
    return user, nil
})
GetMulti/SetMulti

Batch operations:

// batch get
items := cache.GetMulti([]string{"user:1", "user:2", "user:3"})

// batch set
values := map[string]string{
    "user:1": "value1",
    "user:2": "value2",
}
cache.SetMulti(values, time.Minute*10)
PushSlice/PushMap

Append values to cached slices and maps:

// set the initial slice
cache.Set("numbers", []int{1, 2, 3}, time.Minute)

// append new values
err := cache.PushSlice("numbers", []int{4}, time.Minute)
if err != nil {
    // handle the error
}

// set the initial map
cache.Set("map_numbers", map[string]int{"a": 1, "b": 2}, time.Minute)

// append new values
err = cache.PushMap("map_numbers", map[string]int{"c": 3, "d": 4}, time.Minute)
if err != nil {
    // handle the error
}
Extend/Touch

Update a cached item's expiry. The key difference between the two methods is whether the item's position in the LRU list changes:

// Extend - updates the expiry only; the item's position in the LRU is unchanged
success := cache.Extend("user:4", time.Minute * 10)

// Touch - updates the expiry and moves the item to the front of the LRU list
success := cache.Touch("user:4", time.Minute * 10)

Which to choose:

  • Use Extend when you only want to prolong the expiry without registering an access
  • Use Touch when you want to update both the expiry and the access state

Note: both methods return a bool indicating whether the operation succeeded (i.e. whether the key exists).

Replace

Update a value while keeping its TTL:

cache.Replace("user:4", user)
SetIfNotExists/SetIfNotExistsWithFunc

Two ways to set a key only if it does not already exist:

// option 1: set the value directly
cache.SetIfNotExists("key", "value", time.Minute)

// option 2: generate the value with a function (lazy evaluation)
item := cache.SetIfNotExistsWithFunc("key", func() string {
    // executed only if the key does not exist
    return expensiveOperation()
}, time.Minute)

The difference:

  • SetIfNotExists: pass the value directly; simple and intuitive
  • SetIfNotExistsWithFunc
    • takes a function that produces the value
    • runs the function only if the key does not exist
    • suited to values that are expensive to compute
    • returns the cached item

Note: SetIfNotExistsWithFunc is more efficient under concurrency because it avoids unnecessary value computation.

Inc/Dec

Increment or decrement a numeric value:

newVal, err := cache.Inc("counter", 1)
if err != nil {
    // handle the error
}

newVal, err = cache.Dec("counter", 1)
if err != nil {
    // handle the error
}

Monitoring and management

GetDropped

Get the number of keys that have been evicted:

dropped := cache.GetDropped()
Stop

Stop the cache's background worker:

cache.Stop()

Special features

Tracking mode

CCache supports a tracking mode, meant for use alongside other parts of the code that hold long-lived references to cached data.

Enable tracking with Track():

cache = ccache.New(ccache.Configure[int]().Track())

Items obtained via TrackingGet will not be purged until Release is called:

item := cache.TrackingGet("user:4")
user := item.Value()   // returns nil if "user:4" is not in the cache
item.Release()         // safe to call even if item.Value() returns nil

Key benefits of tracking mode:

  • Release is typically deferred and called elsewhere in the code
  • TrackingSet can be used to set values that need tracking
  • helps ensure the system returns consistent data
  • suited to scenarios where other code also holds references to the object
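A common pattern pairs TrackingGet with a deferred Release so the entry cannot be evicted while it is in use (a sketch; the User type and key are illustrative):

```go
func handleRequest(cache *ccache.Cache[*User]) {
	item := cache.TrackingGet("user:4")
	defer item.Release() // safe even when the value is nil

	user := item.Value()
	if user == nil {
		return // not cached
	}
	// use user; the entry stays in the cache until Release runs
}
```
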
Layered cache (LayeredCache)

LayeredCache stores and retrieves values by a primary and a secondary key. It is well suited to managing multiple variants of the same data, such as an HTTP cache.

Highlights:

  • supports combinations of a primary and a secondary key
  • can delete a specific key, or all values under a primary key
  • uses the same configuration as the regular cache
  • supports optional tracking

Example:

cache := ccache.Layered(ccache.Configure[string]())

// store the data in different formats
cache.Set("/users/goku", "type:json", "{value_to_cache}", time.Minute * 5)
cache.Set("/users/goku", "type:xml", "<value_to_cache>", time.Minute * 5)

// fetch a specific format
json := cache.Get("/users/goku", "type:json")
xml := cache.Get("/users/goku", "type:xml")

// deletes
cache.Delete("/users/goku", "type:json")     // delete a specific format
cache.DeleteAll("/users/goku")               // delete all formats
Secondary cache (SecondaryCache)

When using a LayeredCache, you sometimes need to work repeatedly with the entries under a single primary key. SecondaryCache provides that convenience:

cache := ccache.Layered(ccache.Configure[string]())
sCache := cache.GetOrCreateSecondaryCache("/users/goku")
sCache.Set("type:json", "{value_to_cache}", time.Minute * 5)

Features:

  • works the same way as a regular Cache
  • Get does not return nil; an empty cache is returned instead
  • simplifies working with the data under a specific primary key

Size control

Managing the size of cached items:

  • by default, every item has a size of 1
  • if a value implements Size() int64, the returned size is used instead
  • each cache entry carries roughly 350 bytes of overhead (not counted against the size limit)

Example:

  • with MaxSize(10000), the cache can hold 10000 default-sized items
  • for items implementing Size() int64, capacity = MaxSize / item size
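
A value opts into this size accounting by implementing the Sized interface from the API below (Size() int64). A minimal sketch (the Blob type is hypothetical):

```go
package main

import "fmt"

// Blob reports its own cache size by implementing Size() int64,
// which satisfies ccache's Sized interface.
type Blob struct {
	data []byte
}

func (b *Blob) Size() int64 {
	return int64(len(b.data))
}

func main() {
	b := &Blob{data: make([]byte, 2048)}
	// With MaxSize(10000), roughly 10000/2048 ≈ 4 such blobs fit.
	fmt.Println(b.Size())
}
```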

License

MIT License - see the LICENSE file for details.

Documentation

Overview

An LRU cache aimed at high concurrency.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[T any] struct {
	*Configuration[T]
	// contains filtered or unexported fields
}

func New

func New[T any](config *Configuration[T]) *Cache[T]

Creates a new cache with the specified configuration. See ccache.Configure() for creating a configuration.

func (Cache) Clear

func (c Cache) Clear()

Clears the cache. This is a control command.

func (*Cache[T]) Dec

func (c *Cache[T]) Dec(key string, delta int64) (int64, error)

Dec decreases the value of key by delta. Returns an error if the value is not an integer.

func (*Cache[T]) Delete

func (c *Cache[T]) Delete(key string) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*Cache[T]) DeleteFunc

func (c *Cache[T]) DeleteFunc(matches func(key string, item *Item[T]) bool) int

Deletes all items that the matches func evaluates to true.
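
For example, DeleteFunc can drop every entry belonging to one logical group (a sketch; the key scheme and the cache's value type are assumptions):

```go
// remove every cached entry for user 4, assuming keys like "user:4:profile"
deleted := cache.DeleteFunc(func(key string, item *ccache.Item[string]) bool {
	return strings.HasPrefix(key, "user:4:")
})
log.Printf("removed %d entries", deleted)
```

For a plain prefix match like this one, DeletePrefix achieves the same result directly.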

func (*Cache[T]) DeletePrefix

func (c *Cache[T]) DeletePrefix(prefix string) int

func (*Cache[T]) Extend

func (c *Cache[T]) Extend(key string, duration time.Duration) bool

Extends the expiry of the item if it exists; does not set it if it doesn't. Returns true if the item's expiry time was extended, false otherwise.

func (*Cache[T]) Fetch

func (c *Cache[T]) Fetch(key string, duration time.Duration, fetch func() (T, error)) (*Item[T], error)

Attempts to get the value from the cache and calls fetch on a miss (missing or stale item). If fetch returns an error, no value is cached and the error is returned back to the caller. Note that Fetch merely calls the public Get and Set functions. If you want a different Fetch behavior, such as thundering herd protection or returning expired items, implement it in your application.

func (*Cache[T]) ForEachFunc

func (c *Cache[T]) ForEachFunc(matches func(key string, item *Item[T]) bool)

func (Cache) GC

func (c Cache) GC()

Forces GC. There should be no reason to call this function, except from tests which require synchronous GC. This is a control command.

func (*Cache[T]) Get

func (c *Cache[T]) Get(key string) *Item[T]

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).

func (Cache) GetDropped

func (c Cache) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called. This is a control command.

func (*Cache[T]) GetMulti

func (c *Cache[T]) GetMulti(keys []string) map[string]*Item[T]

GetMulti returns a map of the items corresponding to the given keys

func (Cache) GetSize

func (c Cache) GetSize() int64

Gets the size of the cache. This is an O(1) call to make, but it is handled by the worker goroutine. It's meant to be called periodically for metrics, or from tests. This is a control command.

func (*Cache[T]) GetWithoutPromote

func (c *Cache[T]) GetWithoutPromote(key string) *Item[T]

Same as Get but does not promote the value. This essentially circumvents the "least recently used" aspect of this cache. To some degree, it's akin to a "peek".

func (*Cache[T]) Has

func (c *Cache[T]) Has(key string) (T, bool)

Has checks if the key exists in the cache and returns whether it exists along with its value. Returns (zero value, false) if the key doesn't exist or is expired.

func (*Cache[T]) Inc

func (c *Cache[T]) Inc(key string, delta int64) (int64, error)

Inc increases the value of key by delta. Returns an error if the value is not an integer.

func (*Cache[T]) ItemCount

func (c *Cache[T]) ItemCount() int

func (*Cache[T]) Pull

func (c *Cache[T]) Pull(key string) *Item[T]

Pull gets the value from the cache and removes it.

func (*Cache[T]) PushMap

func (c *Cache[T]) PushMap(key string, values T, duration time.Duration) error

PushMap merges values into the map stored in the cache. Returns an error if the cached value is not a map type.

func (*Cache[T]) PushSlice

func (c *Cache[T]) PushSlice(key string, values T, duration time.Duration) error

PushSlice appends values to the slice stored in the cache. Returns an error if the cached value is not a slice type.

func (*Cache[T]) Replace

func (c *Cache[T]) Replace(key string, value T) bool

Replaces the value if it exists; does not set it if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL.

func (*Cache[T]) Set

func (c *Cache[T]) Set(key string, value T, duration time.Duration)

Set the value in the cache for the specified duration

func (*Cache[T]) SetIfNotExists

func (c *Cache[T]) SetIfNotExists(key string, value T, duration time.Duration)

SetIfNotExists sets the value in the cache for the specified duration if the key does not already exist.

func (*Cache[T]) SetIfNotExistsWithFunc

func (c *Cache[T]) SetIfNotExistsWithFunc(key string, f func() T, duration time.Duration) *Item[T]

SetIfNotExistsWithFunc sets the value produced by f in the cache for the specified duration if the key does not already exist.

func (Cache) SetMaxSize

func (c Cache) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size. This is a control command.

func (*Cache[T]) SetMulti

func (c *Cache[T]) SetMulti(items map[string]T, duration time.Duration)

SetMulti sets multiple key-value pairs with the same duration

func (Cache) Stop

func (c Cache) Stop()

Sends a stop signal to the worker thread. The worker thread will shut down 5 seconds after the last message is received. The cache should not be used after Stop is called, but concurrently executing requests should properly finish executing. This is a control command.

func (Cache) SyncUpdates

func (c Cache) SyncUpdates()

SyncUpdates waits until the cache has finished asynchronous state updates for any operations that were done by the current goroutine up to now.

For efficiency, the cache's implementation of LRU behavior is partly managed by a worker goroutine that updates its internal data structures asynchronously. This means that the cache's state in terms of (for instance) eviction of LRU items is only eventually consistent; there is no guarantee that it happens before a Get or Set call has returned. Most of the time application code will not care about this, but especially in a test scenario you may want to be able to know when the worker has caught up.

This applies only to cache methods that were previously called by the same goroutine that is now calling SyncUpdates. If other goroutines are using the cache at the same time, there is no way to know whether any of them still have pending state updates when SyncUpdates returns. This is a control command.
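
In a test, SyncUpdates can be used to make eviction deterministic before asserting on cache state (a sketch, under the assumption that pruning has been applied by the time SyncUpdates returns):

```go
func TestEviction(t *testing.T) {
	cache := ccache.New(ccache.Configure[int]().MaxSize(2).ItemsToPrune(1))
	defer cache.Stop()

	cache.Set("a", 1, time.Minute)
	cache.Set("b", 2, time.Minute)
	cache.Set("c", 3, time.Minute) // exceeds MaxSize
	cache.SyncUpdates()            // wait for the worker to catch up

	if cache.GetDropped() == 0 {
		t.Fatal("expected at least one eviction")
	}
}
```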

func (*Cache[T]) Touch

func (c *Cache[T]) Touch(key string, duration time.Duration) bool

Touch updates the expiry time of an existing item without changing its value

func (*Cache[T]) TrackingGet

func (c *Cache[T]) TrackingGet(key string) TrackedItem[T]

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*Cache[T]) TrackingSet

func (c *Cache[T]) TrackingSet(key string, value T, duration time.Duration) TrackedItem[T]

Used when the cache was created with the Track() configuration option. Sets the item, and returns a tracked reference to it.

type Configuration

type Configuration[T any] struct {
	// contains filtered or unexported fields
}

func Configure

func Configure[T any]() *Configuration[T]

Creates a configuration object with sensible defaults. Use this as the start of the fluent configuration, e.g.: ccache.New(ccache.Configure().MaxSize(10000))

func (*Configuration[T]) Buckets

func (c *Configuration[T]) Buckets(count uint32) *Configuration[T]

Keys are hashed modulo the bucket count to provide greater concurrency (every set requires a write lock on the bucket). Must be a power of 2 (1, 2, 4, 8, 16, ...) [16]

func (*Configuration[T]) DeleteBuffer

func (c *Configuration[T]) DeleteBuffer(size uint32) *Configuration[T]

The size of the queue for items which should be deleted. If the queue fills up, calls to Delete() will block.

func (*Configuration[T]) GetsPerPromote

func (c *Configuration[T]) GetsPerPromote(count int32) *Configuration[T]

Given a large cache with a high read / write ratio, it's usually unnecessary to promote an item on every Get. GetsPerPromote specifies the number of Gets a key must have before being promoted [3]

func (*Configuration[T]) ItemsToPrune

func (c *Configuration[T]) ItemsToPrune(count uint32) *Configuration[T]

The number of items to prune when memory is low [500]

func (*Configuration[T]) MaxSize

func (c *Configuration[T]) MaxSize(max int64) *Configuration[T]

The max size for the cache [5000]

func (*Configuration[T]) OnDelete

func (c *Configuration[T]) OnDelete(callback func(item *Item[T])) *Configuration[T]

OnDelete allows setting a callback function to react to item deletion. This typically allows cleanup of resources, such as calling Close() on cached objects that require some kind of tear-down.
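
For example, the callback can release resources held by values as they leave the cache (a sketch; Conn is a hypothetical type with a Close method):

```go
cache := ccache.New(ccache.Configure[*Conn]().
	OnDelete(func(item *ccache.Item[*Conn]) {
		// runs when the item is deleted or evicted
		item.Value().Close()
	}))
```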

func (*Configuration[T]) PromoteBuffer

func (c *Configuration[T]) PromoteBuffer(size uint32) *Configuration[T]

The size of the queue for items which should be promoted. If the queue fills up, promotions are skipped [1024]

func (*Configuration[T]) Track

func (c *Configuration[T]) Track() *Configuration[T]

By turning tracking on and using the cache's TrackingGet, the cache won't evict items which you haven't called Release() on. It's a simple reference counter.

type Item

type Item[T any] struct {
	// contains filtered or unexported fields
}

func (*Item[T]) Expired

func (i *Item[T]) Expired() bool

func (*Item[T]) Expires

func (i *Item[T]) Expires() time.Time

func (*Item[T]) Extend

func (i *Item[T]) Extend(duration time.Duration)

func (*Item[T]) Key

func (i *Item[T]) Key() string

func (*Item[T]) Release

func (i *Item[T]) Release()

func (*Item[T]) String

func (i *Item[T]) String() string

String returns a string representation of the Item. This includes the default string representation of its Value(), as implemented by fmt.Sprintf with "%v", but the exact format of the string should not be relied on; it is provided only for debugging purposes, and because otherwise including an Item in a call to fmt.Printf or fmt.Sprintf expression could cause fields of the Item to be read in a non-thread-safe way.

func (*Item[T]) TTL

func (i *Item[T]) TTL() time.Duration

func (*Item[T]) Value

func (i *Item[T]) Value() T

type LayeredCache

type LayeredCache[T any] struct {
	*Configuration[T]
	// contains filtered or unexported fields
}

func Layered

func Layered[T any](config *Configuration[T]) *LayeredCache[T]

See ccache.Configure() for creating a configuration

func (LayeredCache) Clear

func (c LayeredCache) Clear()

Clears the cache. This is a control command.

func (*LayeredCache[T]) Delete

func (c *LayeredCache[T]) Delete(primary, secondary string) bool

Remove the item from the cache, return true if the item was present, false otherwise.

func (*LayeredCache[T]) DeleteAll

func (c *LayeredCache[T]) DeleteAll(primary string) bool

Deletes all items that share the same primary key

func (*LayeredCache[T]) DeleteFunc

func (c *LayeredCache[T]) DeleteFunc(primary string, matches func(key string, item *Item[T]) bool) int

Deletes all items that share the same primary key and where the matches func evaluates to true.

func (*LayeredCache[T]) DeletePrefix

func (c *LayeredCache[T]) DeletePrefix(primary, prefix string) int

Deletes all items that share the same primary key and prefix.

func (*LayeredCache[T]) Fetch

func (c *LayeredCache[T]) Fetch(primary, secondary string, duration time.Duration, fetch func() (T, error)) (*Item[T], error)

Attempts to get the value from the cache and calls fetch on a miss. If fetch returns an error, no value is cached and the error is returned back to the caller. Note that Fetch merely calls the public Get and Set functions. If you want a different Fetch behavior, such as thundering herd protection or returning expired items, implement it in your application.

func (*LayeredCache[T]) ForEachFunc

func (c *LayeredCache[T]) ForEachFunc(primary string, matches func(key string, item *Item[T]) bool)

func (LayeredCache) GC

func (c LayeredCache) GC()

Forces GC. There should be no reason to call this function, except from tests which require synchronous GC. This is a control command.

func (*LayeredCache[T]) Get

func (c *LayeredCache[T]) Get(primary, secondary string) *Item[T]

Get an item from the cache. Returns nil if the item wasn't found. This can return an expired item. Use item.Expired() to see if the item is expired and item.TTL() to see how long until the item expires (which will be negative for an already expired item).

func (LayeredCache) GetDropped

func (c LayeredCache) GetDropped() int

Gets the number of items removed from the cache due to memory pressure since the last time GetDropped was called. This is a control command.

func (*LayeredCache[T]) GetOrCreateSecondaryCache

func (c *LayeredCache[T]) GetOrCreateSecondaryCache(primary string) *SecondaryCache[T]

Get the secondary cache for a given primary key. This operation will never return nil. In the case where the primary key does not exist, a new, underlying, empty bucket will be created and returned.

func (LayeredCache) GetSize

func (c LayeredCache) GetSize() int64

Gets the size of the cache. This is an O(1) call to make, but it is handled by the worker goroutine. It's meant to be called periodically for metrics, or from tests. This is a control command.

func (*LayeredCache[T]) GetWithoutPromote

func (c *LayeredCache[T]) GetWithoutPromote(primary, secondary string) *Item[T]

Same as Get but does not promote the value. This essentially circumvents the "least recently used" aspect of this cache. To some degree, it's akin to a "peek".

func (*LayeredCache[T]) Has

func (c *LayeredCache[T]) Has(primary, secondary string) (bool, T)

Has checks if the key exists in the layered cache and returns whether it exists along with its value. Returns (false, zero value) if the key doesn't exist or is expired.

func (*LayeredCache[T]) ItemCount

func (c *LayeredCache[T]) ItemCount() int

func (*LayeredCache[T]) Replace

func (c *LayeredCache[T]) Replace(primary, secondary string, value T) bool

Replaces the value if it exists; does not set it if it doesn't. Returns true if the item existed and was replaced, false otherwise. Replace does not reset the item's TTL, nor does it alter its position in the LRU.

func (*LayeredCache[T]) Set

func (c *LayeredCache[T]) Set(primary, secondary string, value T, duration time.Duration)

Set the value in the cache for the specified duration

func (LayeredCache) SetMaxSize

func (c LayeredCache) SetMaxSize(size int64)

Sets a new max size. That can result in a GC being run if the new maximum size is smaller than the cached size. This is a control command.

func (LayeredCache) Stop

func (c LayeredCache) Stop()

Sends a stop signal to the worker thread. The worker thread will shut down 5 seconds after the last message is received. The cache should not be used after Stop is called, but concurrently executing requests should properly finish executing. This is a control command.

func (LayeredCache) SyncUpdates

func (c LayeredCache) SyncUpdates()

SyncUpdates waits until the cache has finished asynchronous state updates for any operations that were done by the current goroutine up to now.

For efficiency, the cache's implementation of LRU behavior is partly managed by a worker goroutine that updates its internal data structures asynchronously. This means that the cache's state in terms of (for instance) eviction of LRU items is only eventually consistent; there is no guarantee that it happens before a Get or Set call has returned. Most of the time application code will not care about this, but especially in a test scenario you may want to be able to know when the worker has caught up.

This applies only to cache methods that were previously called by the same goroutine that is now calling SyncUpdates. If other goroutines are using the cache at the same time, there is no way to know whether any of them still have pending state updates when SyncUpdates returns. This is a control command.

func (*LayeredCache[T]) TrackingGet

func (c *LayeredCache[T]) TrackingGet(primary, secondary string) TrackedItem[T]

Used when the cache was created with the Track() configuration option. Avoid otherwise

func (*LayeredCache[T]) TrackingSet

func (c *LayeredCache[T]) TrackingSet(primary, secondary string, value T, duration time.Duration) TrackedItem[T]

Used when the cache was created with the Track() configuration option. Sets the item for the specified duration, and returns a tracked reference to it.

type List

type List[T any] struct {
	Head *Item[T]
	Tail *Item[T]
}

func NewList

func NewList[T any]() *List[T]

func (*List[T]) Insert

func (l *List[T]) Insert(item *Item[T])

func (*List[T]) MoveToFront

func (l *List[T]) MoveToFront(item *Item[T])

func (*List[T]) Remove

func (l *List[T]) Remove(item *Item[T])

type SecondaryCache

type SecondaryCache[T any] struct {
	// contains filtered or unexported fields
}

func (*SecondaryCache[T]) Delete

func (s *SecondaryCache[T]) Delete(secondary string) bool

Delete a secondary key. The semantics are the same as for LayeredCache.Delete

func (*SecondaryCache[T]) Fetch

func (s *SecondaryCache[T]) Fetch(secondary string, duration time.Duration, fetch func() (T, error)) (*Item[T], error)

Fetch or set a secondary key. The semantics are the same as for LayeredCache.Fetch

func (*SecondaryCache[T]) Get

func (s *SecondaryCache[T]) Get(secondary string) *Item[T]

Get the secondary key. The semantics are the same as for LayeredCache.Get

func (*SecondaryCache[T]) Has

func (s *SecondaryCache[T]) Has(secondary string) (bool, T)

Has checks if the secondary key exists and returns whether it exists along with its value. Returns (false, zero value) if the key doesn't exist or is expired.

func (*SecondaryCache[T]) Replace

func (s *SecondaryCache[T]) Replace(secondary string, value T) bool

Replace a secondary key. The semantics are the same as for LayeredCache.Replace

func (*SecondaryCache[T]) Set

func (s *SecondaryCache[T]) Set(secondary string, value T, duration time.Duration) *Item[T]

Set the secondary key to a value. The semantics are the same as for LayeredCache.Set

func (*SecondaryCache[T]) TrackingGet

func (c *SecondaryCache[T]) TrackingGet(secondary string) TrackedItem[T]

Track a secondary key. The semantics are the same as for LayeredCache.TrackingGet

type Sized

type Sized interface {
	Size() int64
}

type TrackedItem

type TrackedItem[T any] interface {
	Value() T
	Release()
	Expired() bool
	TTL() time.Duration
	Expires() time.Time
	Extend(duration time.Duration)
}

