All Classes

Class
Description
An interface for activation functions.
A class containing methods for getting various pre-made activation functions for use in neural network layers.
The Adam (Adaptive Moment Estimation) optimizer.
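The Adam update rule that this optimizer implements can be sketched for a single scalar parameter as below; the class and method names are illustrative only and are not the library's API, and the hyperparameter defaults are the common ones from the Adam paper, which this library may or may not share.

```java
// Hypothetical sketch of one Adam update step; not the library's actual API.
public class AdamSketch {
    double lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8;
    double m = 0.0, v = 0.0; // first and second moment estimates
    int t = 0;               // time step

    // Apply one Adam step to a scalar parameter given its gradient.
    public double step(double param, double grad) {
        t++;
        m = beta1 * m + (1 - beta1) * grad;          // biased first moment
        v = beta2 * v + (1 - beta2) * grad * grad;   // biased second moment
        double mHat = m / (1 - Math.pow(beta1, t));  // bias-corrected moments
        double vHat = v / (1 - Math.pow(beta2, t));
        return param - lr * mHat / (Math.sqrt(vHat) + eps);
    }
}
```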
Base layer interface.
Blocks are used to define a model.
Layer parameter initializer to produce a constant value.
The DataLoader class contains several methods to load data for models.
A class that provides a method for splitting a dataset into a training and testing dataset.
A fully connected layer with an activation function.
A dropout layer.
Contains methods to encode classes or targets to numerical values.
Functional interface for a loss function.
Layer parameter initializer to produce random values from a truncated normal distribution with mean 0 and standard deviation std = sqrt(2 / (fanIn + fanOut)), where fanIn is the input dimension for the layer and fanOut is the output dimension for the layer.
Layer parameter initializer to produce random values from a clipped uniform distribution on [-lim, lim], where lim = sqrt(6 / (fanIn + fanOut)), fanIn is the input dimension for the layer, and fanOut is the output dimension for the layer.
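The two formulas above (normal with std = sqrt(2 / (fanIn + fanOut)), uniform on [-lim, lim] with lim = sqrt(6 / (fanIn + fanOut))) are the usual Glorot/Xavier scheme; a minimal illustrative sketch follows, with hypothetical helper names that are not the library's classes.

```java
import java.util.Random;

// Illustrative Glorot/Xavier initialization; names are hypothetical.
public class GlorotSketch {
    // Standard deviation for the normal variant: sqrt(2 / (fanIn + fanOut)).
    public static double glorotNormalStd(int fanIn, int fanOut) {
        return Math.sqrt(2.0 / (fanIn + fanOut));
    }

    // Limit for the uniform variant: samples are drawn from [-lim, lim].
    public static double glorotUniformLim(int fanIn, int fanOut) {
        return Math.sqrt(6.0 / (fanIn + fanOut));
    }

    // Fill a fanIn x fanOut weight matrix with uniform samples in [-lim, lim].
    public static double[][] glorotUniform(int fanIn, int fanOut, Random rng) {
        double lim = glorotUniformLim(fanIn, fanOut);
        double[][] w = new double[fanIn][fanOut];
        for (int i = 0; i < fanIn; i++) {
            for (int j = 0; j < fanOut; j++) {
                w[i][j] = (2 * rng.nextDouble() - 1) * lim;
            }
        }
        return w;
    }
}
```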
Vanilla gradient descent optimizer.
Layer parameter initializer to produce random values from a normal distribution with mean 0 and standard deviation std = sqrt(2 / fanIn), where fanIn is the input dimension for the layer.
Layer parameter initializer to produce random values from a clipped uniform distribution on [-lim, lim], where lim = sqrt(6 / fanIn) and fanIn is the input dimension for the layer.
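These two fanIn-only formulas are the usual He scheme for ReLU-style layers; the tiny sketch below just evaluates them, with hypothetical names that are not the library's classes.

```java
// Illustrative He initialization scales; names are hypothetical.
public class HeSketch {
    // Standard deviation for the normal variant: sqrt(2 / fanIn).
    public static double heNormalStd(int fanIn) {
        return Math.sqrt(2.0 / fanIn);
    }

    // Limit for the uniform variant on [-lim, lim]: lim = sqrt(6 / fanIn).
    public static double heUniformLim(int fanIn) {
        return Math.sqrt(6.0 / fanIn);
    }
}
```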
Interface for bias/weight initialization.
A k-dimensional tree (k-d tree) is a binary tree that partitions a k-dimensional space by recursively organizing the points within it.
K Nearest Neighbors (KNN) model.
The linear activation function.
Simple fully-connected linear layer.
Model for ordinary least squares linear regression of one variable.

LinearRegression fits a model y = b0 + b1x to the datasets by minimizing the sum of squared residuals between the values in the target dataset and the values predicted by the model.
Model for least squares linear regression of one variable by stochastic gradient descent.

LinearRegressionSGD fits a model y = b0 + b1x to the datasets by minimizing the sum of squared residuals between the values in the target dataset and the values predicted by the model.
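The one-variable fit y = b0 + b1x that both regression classes describe has a simple closed form: b1 is the ratio of the x-y covariance to the x variance, and b0 = ybar - b1*xbar. A minimal sketch, not the library's API:

```java
// Ordinary least squares for y = b0 + b1*x; illustrative only.
public class OlsSketch {
    // Returns {b0, b1} minimizing the sum of squared residuals.
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double xBar = 0, yBar = 0;
        for (int i = 0; i < n; i++) { xBar += x[i]; yBar += y[i]; }
        xBar /= n; yBar /= n;
        double sxy = 0, sxx = 0; // centered cross- and self-products
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - xBar) * (y[i] - yBar);
            sxx += (x[i] - xBar) * (x[i] - xBar);
        }
        double b1 = sxy / sxx;
        return new double[] { yBar - b1 * xBar, b1 };
    }
}
```

For example, the points (1,3), (2,5), (3,7) lie exactly on y = 1 + 2x, so the fit recovers b0 = 1, b1 = 2.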
A logistic regression model.
This class contains lambda functions for various loss functions.
Model<X,Y>
This interface specifies the requirements for a machine learning model.
The momentum based gradient descent optimizer.
Model for linear regression of multiple variables by ordinary least squares.

MultipleLinearRegression fits a model y = b0 + b1x1 + ...
Model for least squares linear regression of multiple variables by stochastic gradient descent.

MultipleLinearRegressionSGD fits a model y = b0 + b1x1 + ...
A class that supports the creation and training of neural networks.
Contains methods for normalizing data.
Layer parameter initializer to produce ones.
Interface for an optimizer which can be used to minimize a function by utilizing the gradients of that function.
Layer parameter initializer to produce a random orthogonal matrix.
A perceptron is a linear model that is equivalent to a single-layer neural network.

When a perceptron model is saved, it will be saved as a neural network model.
Model for least squares linear regression of polynomials.

PolynomialRegression fits a model y = b0 + b1x + b2x^2 + ...
Model for least squares regression of polynomials using stochastic gradient descent.

PolynomialRegressionSGD fits a model y = b0 + b1x + b2x^2 + ...
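Once the coefficients b0, b1, b2, ... of such a polynomial model are fit, predictions are just polynomial evaluation, which Horner's rule does in one pass. A small illustrative helper, not the library's API:

```java
// Evaluate y = b0 + b1*x + b2*x^2 + ... via Horner's rule; illustrative only.
public class PolyEvalSketch {
    // coeffs = {b0, b1, b2, ...}; returns the model's prediction at x.
    public static double predict(double[] coeffs, double x) {
        double y = 0;
        for (int i = coeffs.length - 1; i >= 0; i--) {
            y = y * x + coeffs[i]; // fold in the next-lower coefficient
        }
        return y;
    }
}
```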
Layer parameter initializer to produce random values from a normal distribution.
Layer parameter initializer to produce random values from a uniform distribution.
The ReLU (Rectified Linear Unit) activation function.
Learning rate scheduler for optimizers.
The sigmoid activation function.
The softmax activation function.
The Stats class is a utility class to compute various statistical information about datasets.
StepLearningRate is a Scheduler which "steps" the learning rate at regular intervals during optimization.
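A stepped schedule like the one described multiplies the base rate by a decay factor once every fixed number of iterations; the sketch below shows that behavior with hypothetical names and parameters, not the library's actual Scheduler API.

```java
// Illustrative stepped learning-rate schedule; names are hypothetical.
public class StepLrSketch {
    // Learning rate at a given iteration: the base rate decayed once per
    // completed interval of stepSize iterations (integer division).
    public static double rateAt(double baseLr, double decay, int stepSize, int iteration) {
        return baseLr * Math.pow(decay, iteration / stepSize);
    }
}
```

For example, with a base rate of 0.1, decay 0.5, and a step every 10 iterations, the rate is 0.1 for iterations 0-9, 0.05 for 10-19, 0.025 for 20-29, and so on.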
The hyperbolic tangent activation function.
This layer specifies functionality for a layer that has trainable parameters.
Layer parameter initializer to produce zeros.