Include tiny_dnn.h:
```cpp
#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;
using namespace tiny_dnn::layers;
using namespace tiny_dnn::activation;
```
Declare the model as `network`. There are two types of network: `network<sequential>` and `network<graph>`. The sequential model is easier to construct.
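A sequential model can be declared in one line (the variable name `net` is illustrative):

```cpp
network<sequential> net;
```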
Stack layers:

Some layers take an activation as a template parameter: `max_pool<relu>` means "apply a relu activation after the pooling". If the layer has no successive activation, use `max_pool<identity>` instead.
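Layers are stacked onto the network with `operator<<`. A minimal sketch, assuming the short layer aliases `conv`, `max_pool`, and `fc` from `tiny_dnn::layers`; the input size (32x32 grayscale) and layer dimensions are illustrative, not prescribed by this guide:

```cpp
network<sequential> net;
net << conv<relu>(32, 32, 5, 1, 6)    // in: 32x32x1, 5x5 kernel, 6 feature maps
    << max_pool<relu>(28, 28, 6, 2)   // in: 28x28x6, 2x2 pooling
    << fc<identity>(14 * 14 * 6, 10); // in: 14x14x6, out: 10 classes
```

Note how each layer's input dimensions must match the previous layer's output dimensions.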
Declare the optimizer:
In addition to plain gradient descent, you can use modern optimizers such as adagrad, adadelta, and adam.
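For example, declaring an adagrad optimizer (the adaptive gradient method defined in optimizer.h) is a one-liner; the variable name `opt` is illustrative:

```cpp
adagrad opt;
```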
Now you can start the training:
```cpp
bool fit(Optimizer &optimizer,
         const std::vector<T> &inputs,
         const std::vector<U> &desired_outputs,
         size_t batch_size, int epoch,
         OnBatchEnumerate on_batch_enumerate,
         OnEpochEnumerate on_epoch_enumerate,
         const bool reset_weights = false,
         const int n_threads = CNN_TASK_SIZE,
         const std::vector<U> &t_cost = std::vector<U>());
```

`fit` (defined in network.h) trains the network for a fixed number of epochs to generate the desired output.
If you don't have the target vectors but have the class IDs, you can alternatively use `train`:

```cpp
bool train(Optimizer &optimizer,
           const std::vector<vec_t> &inputs,
           const std::vector<label_t> &class_labels,
           size_t batch_size, int epoch,
           OnBatchEnumerate on_batch_enumerate,
           OnEpochEnumerate on_epoch_enumerate,
           const bool reset_weights = false,
           const int n_threads = CNN_TASK_SIZE,
           const std::vector<vec_t> &t_cost = std::vector<vec_t>());
```

`train` (defined in network.h) trains the network for a fixed number of epochs, for classification tasks.
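A minimal training call, assuming `net` and `opt` are declared as above and that `images` (a `std::vector<vec_t>`) and `labels` (a `std::vector<label_t>`) hold your training set; the loss function, batch size, and epoch count are illustrative choices:

```cpp
size_t batch_size = 30;
int epochs = 10;
// loss is given as a template parameter; cross-entropy suits classification
net.train<cross_entropy>(opt, images, labels, batch_size, epochs);
```

The callback-free overload shown here is the simplest; the `on_batch_enumerate` / `on_epoch_enumerate` parameters are useful for progress reporting.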
Validate the training result:
```cpp
result test(const std::vector<vec_t> &in, const std::vector<label_t> &t);
```

`test` runs the network on the given samples and generates a confusion matrix, for classification tasks.

```cpp
float_t get_loss(const std::vector<vec_t> &in, const std::vector<vec_t> &t);
```

`get_loss` calculates the loss value (the smaller, the better), for regression tasks. Both are defined in network.h.
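A usage sketch, assuming `test_images` and `test_labels` hold a held-out test set:

```cpp
result res = net.test(test_images, test_labels);
// result reports the number of correctly classified samples
std::cout << res.num_success << "/" << res.num_total << std::endl;
```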
Generate prediction on the new data:
```cpp
vec_t predict(const vec_t &in);
```

`predict` (defined in network.h) executes forward propagation and returns the output.
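A usage sketch, assuming `new_data` is a `vec_t` with the input dimensions the first layer expects:

```cpp
vec_t out = net.predict(new_data);  // one output value per output unit
```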
Save the trained parameters and the model:
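`network` provides `save` and `load` for serializing the architecture and weights to a file and restoring them later; a sketch, with an illustrative file name:

```cpp
net.save("my-network");

network<sequential> net2;
net2.load("my-network");
```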
For a more in-depth look at tiny-dnn, check out the MNIST classification example, which walks through an end-to-end workflow. You will find tiny-dnn's API reference in How-to.