Documentation ¶
Index ¶
- Variables
- func AddBias(x, b *Node) (rv *Node, err error)
- func AnyToF64(val interface{}) float64
- func BinaryCrossEntropy(output, target *Node) (*Node, error)
- func CloneSelf(axis int, x *Node, n int) (rv *Node, err error)
- func CrossEntropy(output, target *Node) (*Node, error)
- func DefaultGetInitWFn(master, slaver string) (InitWFn, error)
- func DtypeOf(x *Node) (tensor.Dtype, error)
- func F64ToAny(v float64, dt tensor.Dtype) interface{}
- func F64ToSlice(f64 []float64, dt tensor.Dtype) interface{}
- func Fxwb(act Activation, x, w, b *Node) (*Node, error)
- func GetBackingF64(n *Node) []float64
- func GetInitWFn(str string) (InitWFn, error)
- func Linear(x *Node) (*Node, error)
- func Losses(outputs, targets Nodes, f LossFunc) (cost *Node, err error)
- func MeanSquared(output, target *Node) (*Node, error)
- func NodeFromMap(g *ExprGraph, vs map[string][]float64, dt tensor.Dtype, s tensor.Shape, ...) (*Node, error)
- func OneHotCE(output *Node, targetId int) (*Node, error)
- func OneHotCEBatch(output *Node, targetIds []int) (cost *Node, err error)
- func OneSub(x *Node) (*Node, error)
- func ReshapeToMatrix(x *Node) (*Node, error)
- func WithBacking(f64 []float64) NodeConsOpt
- type Activation
- type FC
- type FCOpts
- type GRU
- type GRUOpts
- type Initializer
- type JsonSaver
- type LSTM
- type LSTMOpts
- type Layer
- type LayerOpts
- type LossFunc
- type Model
- func (m *Model) Forward(x *Node, states States) (rv *Node, err error)
- func (m *Model) GetNode(name string) *Node
- func (m *Model) Init(g *ExprGraph, dt tensor.Dtype) error
- func (m *Model) Learnables() Nodes
- func (m *Model) LearnablesGrad() []ValueGrad
- func (m *Model) StepForward(ns Nodes) (rv Nodes, err error)
- type RNN
- type RNNOpts
- type Saver
- type States
Constants ¶
This section is empty.
Variables ¶
var (
    ErrorEmptyInitializer = errors.New("the initializer string is empty")

    // cache all the initializer functions
    Initializers initializerMap
)
var (
    OneF32   = NewConstant(float32(1.0))
    OneF64   = NewConstant(float64(1.0))
    OneInt   = NewConstant(int(1))
    OneInt64 = NewConstant(int64(1))
    OneInt32 = NewConstant(int32(1))
)
var Activations activationMap
Functions ¶
func AddBias ¶
func AddBias(x, b *Node) (rv *Node, err error)
AddBias adds b to every sample of x. x is expected to have a batch dimension.
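A minimal sketch of typical use, assuming gorgonia's graph constructors are dot-imported as the signatures above suggest (the names g, x, and b are illustrative):

g := NewGraph()
// x is a batch of 4 samples with 3 features each.
x := NewMatrix(g, tensor.Float64, WithShape(4, 3), WithName("x"))
// b is a single bias vector, added to every sample.
b := NewVector(g, tensor.Float64, WithShape(3), WithName("b"))
out, err := AddBias(x, b)
if err != nil {
    // handle the error
}
_ = out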
func BinaryCrossEntropy ¶
func BinaryCrossEntropy(output, target *Node) (*Node, error)
BinaryCrossEntropy is a convenience function for computing binary cross entropy. It is the loss function of choice for two-class classification problems and sigmoid output units. The formula is as below:

BCE(p, t) = -Mean{ t * log(p) + (1 - t) * log(1 - p) }
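A hedged sketch of typical use; output and target are assumed to be same-shaped nodes already in the graph, with output in (0, 1):

// output: sigmoid activations; target: labels in {0, 1} with the same shape.
loss, err := BinaryCrossEntropy(output, target)
if err != nil {
    // handle the error
}
_ = loss // minimize this node with a solver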
func CrossEntropy ¶
func CrossEntropy(output, target *Node) (*Node, error)
CrossEntropy computes the categorical cross entropy. It is the loss function of choice for multi-class classification problems and softmax output units. The formula is as below:

CCE(p, t) = -Mean{ t * log(p) }
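Similarly, a short sketch; that output holds softmax probabilities and target holds one-hot rows is an assumption about the expected encoding:

// output: softmax probabilities; target: one-hot rows of the same shape.
loss, err := CrossEntropy(output, target)
if err != nil {
    // handle the error
}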
func DefaultGetInitWFn ¶
func DefaultGetInitWFn(master, slaver string) (InitWFn, error)
func F64ToSlice ¶
func F64ToSlice(f64 []float64, dt tensor.Dtype) interface{}
func GetBackingF64 ¶
func GetBackingF64(n *Node) []float64
func GetInitWFn ¶
func GetInitWFn(str string) (InitWFn, error)
GetInitWFn gets an InitWFn from a string such as "Gaussian(0, 0.08)".
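A short sketch; what exactly makes the string invalid is an assumption here:

initW, err := GetInitWFn("Gaussian(0, 0.08)")
if err != nil {
    // the string was empty, unknown, or malformed
}
_ = initW // pass as an InitWFn wherever weights are created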
func MeanSquared ¶
func MeanSquared(output, target *Node) (*Node, error)
MeanSquared computes the mean squared error. The formula is as below:
MSE(y, y') = Mean{ (y - y')^2 }
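A sketch combining MeanSquared with Losses; passing MeanSquared as the LossFunc assumes LossFunc shares this (output, target) signature:

// A single output/target pair:
cost, err := MeanSquared(output, target)
// Several pairs, e.g. one per time step, averaged by Losses:
cost, err = Losses(outputs, targets, MeanSquared)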
func NodeFromMap ¶
func NodeFromMap(g *ExprGraph, vs map[string][]float64, dt tensor.Dtype, s tensor.Shape, name string) (*Node, error)
If the shape s is nil, a scalar node will be created.
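A sketch, assuming the node's backing values are looked up in vs by the node name:

vs := map[string][]float64{
    "w0": {0.1, 0.2, 0.3, 0.4, 0.5, 0.6},
}
// A 2x3 float64 node named "w0"; a nil shape would create a scalar instead.
w, err := NodeFromMap(g, vs, tensor.Float64, tensor.Shape{2, 3}, "w0")
if err != nil {
    // handle the error
}
_ = w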
func OneHotCEBatch ¶
func OneHotCEBatch(output *Node, targetIds []int) (cost *Node, err error)
The size of targetIds must equal the batch size of output.
func ReshapeToMatrix ¶
func ReshapeToMatrix(x *Node) (*Node, error)
If x has more than 2 dimensions, it will be reshaped to a matrix. ReshapeToMatrix is needed because training mode has a batch size, unlike production mode.
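For example; that the leading batch dimension is kept while the trailing dimensions are flattened is an assumption:

// x: shape (8, 3, 28, 28), a batch of 8 images.
m, err := ReshapeToMatrix(x)
// m: shape (8, 2352), ready for a fully connected layer.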
func WithBacking ¶
func WithBacking(f64 []float64) NodeConsOpt
The length of the backing slice can be longer than node.TotalSize().
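A sketch; note the backing slice here is longer than the 4 elements the 2x2 node needs, which this option permits:

n := NewMatrix(g, tensor.Float64,
    WithShape(2, 2),
    WithBacking([]float64{1, 2, 3, 4, 5}))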
Types ¶
type Activation ¶
func NewActivation ¶
func NewActivation(name string, fn func(x *Node) (*Node, error)) Activation
NewActivation creates an Activation from an activation function.
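A sketch wrapping gorgonia's LeakyRelu; the name "LeakyRelu" and the alpha value are illustrative:

leaky := NewActivation("LeakyRelu", func(x *Node) (*Node, error) {
    // 0.01 is the slope for negative inputs.
    return LeakyRelu(x, 0.01)
})
_ = leaky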
type FC ¶
type FC struct {
    // contains filtered or unexported fields
}
FC is a fully connected layer that performs the operation: activate(x*w + b).
type FCOpts ¶
type FCOpts struct {
    InputSize  int
    OutputSize int

    // "Sigmoid" for example; see "active.go" for more activations.
    // Activation is optional; the default is Linear.
    Activation string

    // "Gaussian(0.0, 0.08)" for example; see "initializer.go" for more initializers.
    // Initializer is optional; the default is Uniform(-1, 1).
    Initializer string

    // Probability of dropout: randomly zeroes out a *Tensor with a probability
    // drawn from a uniform distribution. Only float32 or float64 is supported.
    // Optional; the default is zero, meaning no dropout.
    Dropout float64
}
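For example; whether the activation and initializer strings are registered under exactly these names depends on "active.go" and "initializer.go":

opts := FCOpts{
    InputSize:   784,
    OutputSize:  10,
    Activation:  "Sigmoid",
    Initializer: "Gaussian(0.0, 0.08)",
    Dropout:     0.2,
}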
type GRU ¶
type GRU struct {
    // contains filtered or unexported fields
}
GRU is a Gated Recurrent Unit layer.
func (*GRU) Learnables ¶
func (l *GRU) Learnables() Nodes
type GRUOpts ¶
type GRUOpts struct {
    InputSize  int
    HiddenSize int

    // "Sigmoid" for example; see "active.go" for more activations.
    // Activation is optional; the default is Tanh.
    Activation string

    // "Gaussian(0.0, 0.08)" for example; see "initializer.go" for more initializers.
    // The initializers are optional; the default is Uniform(-1, 1).
    InitWh string
    InitWr string
    InitWu string

    // Probability of dropout: randomly zeroes out a *Tensor with a probability
    // drawn from a uniform distribution. Only float32 or float64 is supported.
    // Optional; the default is zero, meaning no dropout.
    Dropout float64
}
type Initializer ¶
type JsonSaver ¶
func NewJsonSaver ¶
type LSTM ¶
type LSTM struct {
    // contains filtered or unexported fields
}
LSTM is a Long Short-Term Memory layer.
func (*LSTM) Learnables ¶
func (l *LSTM) Learnables() Nodes
type LSTMOpts ¶
type LSTMOpts struct {
    InputSize  int
    HiddenSize int

    // "Sigmoid" for example; see "active.go" for more activations.
    // Activation is optional; the default is Tanh.
    Activation string

    // "Gaussian(0.0, 0.08)" for example; see "initializer.go" for more initializers.
    // The initializers are optional; the default is Uniform(-1, 1).
    InitWf string
    InitWi string
    InitWo string
    InitWc string

    // Probability of dropout: randomly zeroes out a *Tensor with a probability
    // drawn from a uniform distribution. Only float32 or float64 is supported.
    // Optional; the default is zero, meaning no dropout.
    Dropout float64
}
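For example; GRUOpts and RNNOpts are filled in analogously, and the initializer strings are illustrative:

opts := LSTMOpts{
    InputSize:  128,
    HiddenSize: 256,
    InitWf:     "Gaussian(0.0, 0.08)",
    InitWi:     "Gaussian(0.0, 0.08)",
    InitWo:     "Gaussian(0.0, 0.08)",
    InitWc:     "Gaussian(0.0, 0.08)",
    Dropout:    0.5,
}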
type Layer ¶
type Layer interface {
    Forward(x *Node, states States) (rv *Node, err error)
    Learnables() Nodes
    // If vs is nil, the initializer indicated in the Options will be used.
    Init(g *ExprGraph, dt tensor.Dtype, vs map[string][]float64) error
    // Get the name of the layer
    Name() string
    // Get the options of the layer
    Options() interface{}
}
Layer is a set of neurons and a corresponding activation.
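As a sketch, a minimal stateless implementation of the interface; the Identity type is purely illustrative:

// Identity passes its input through unchanged.
type Identity struct {
    name string
}

func (l *Identity) Forward(x *Node, states States) (*Node, error) { return x, nil }

// Identity has no weights to learn or initialize.
func (l *Identity) Learnables() Nodes { return nil }

func (l *Identity) Init(g *ExprGraph, dt tensor.Dtype, vs map[string][]float64) error {
    return nil
}

func (l *Identity) Name() string { return l.name }

func (l *Identity) Options() interface{} { return nil }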
type Model ¶
type Model struct {
    Layers []Layer
    // init data from saver
    InitData map[string][]float64
    // contains filtered or unexported fields
}
Model combines a group of layers.
func (*Model) Forward ¶
states must be empty at the beginning; it stores the hidden state of the layers when necessary.
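A sketch of a forward pass; how the Layer values are constructed is package-specific and omitted, and starting from the zero-value States is an assumption about how an empty state is made:

m := &Model{Layers: layers} // layers built elsewhere
if err := m.Init(g, tensor.Float64); err != nil {
    // handle the error
}
var states States // must start empty; Forward stores hidden state here
out, err := m.Forward(x, states)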
func (*Model) LearnablesGrad ¶
func (m *Model) LearnablesGrad() []ValueGrad
func (*Model) StepForward ¶
StepForward runs the model over a sequence; len(ns) equals the number of time steps.
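A sketch for a 3-step sequence; x0, x1, and x2 are illustrative input nodes, one per step:

outs, err := m.StepForward(Nodes{x0, x1, x2})
// outs holds one output node per step.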
type RNN ¶
type RNN struct {
    // contains filtered or unexported fields
}
RNN is a basic recurrent neural network layer.
type RNNOpts ¶
type RNNOpts struct {
    InputSize  int
    HiddenSize int

    // "Sigmoid" for example; see "active.go" for more activations.
    // Activation is optional; the default is Tanh.
    Activation string

    // "Gaussian(0.0, 0.08)" for example; see "initializer.go" for more initializers.
    // Initializer is optional; the default is Uniform(-1, 1).
    Initializer string

    // Probability of dropout: randomly zeroes out a *Tensor with a probability
    // drawn from a uniform distribution. Only float32 or float64 is supported.
    // Optional; the default is zero, meaning no dropout.
    Dropout float64
}