tiny_dnn 1.0.0
A header-only, dependency-free deep learning framework in C++11
A quick introduction to tiny-dnn

Include tiny_dnn.h:

#include "tiny_dnn/tiny_dnn.h"
using namespace tiny_dnn;
using namespace tiny_dnn::layers;
using namespace tiny_dnn::activation;

Declare the model as network. There are two types of network: network<sequential> and network<graph>. The sequential model is easier to construct:

network<sequential> net;
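For the graph type, layers are declared as named objects and wired explicitly before being collected into the network. A minimal sketch, assuming the construct_graph helper from network.h; the layer shapes and names below are illustrative, not from this page:

fc<tan_h> f1(4, 3);                  // hypothetical 4-in, 3-out layer
fc<softmax> f2(3, 2);                // hypothetical 3-in, 2-out layer
f1 << f2;                            // connect the two nodes explicitly

network<graph> net2;
construct_graph(net2, {&f1}, {&f2}); // register the graph's inputs and outputs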

Stack layers:

net << conv<tan_h>(32, 32, 5, 1, 6, padding::same) // in:32x32x1, 5x5conv, 6fmaps
<< max_pool<tan_h>(32, 32, 6, 2) // in:32x32x6, 2x2pooling
<< conv<tan_h>(16, 16, 5, 6, 16, padding::same) // in:16x16x6, 5x5conv, 16fmaps
<< max_pool<tan_h>(16, 16, 16, 2) // in:16x16x16, 2x2pooling
<< fc<tan_h>(8*8*16, 100) // in:8x8x16, out:100
<< fc<softmax>(100, 10); // in:100 out:10

Some layers take an activation function as a template parameter: max_pool<relu> means "apply a ReLU activation after the pooling". If a layer has no subsequent activation, use max_pool<identity> instead.
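For example, the same pooling layer with and without a nonlinearity (the shapes below are illustrative):

max_pool<relu>(16, 16, 6, 2)     // 2x2 pooling followed by ReLU
max_pool<identity>(16, 16, 6, 2) // the same pooling with no activation afterwards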

Declare the optimizer:

adagrad opt; // adaptive gradient method

In addition to gradient descent, you can use modern optimizers such as adagrad, adadelta, and adam.
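For example, switching to adam and adjusting its learning rate; a minimal sketch, assuming the alpha member exposed by tiny-dnn's optimizers (the value is illustrative, tune it per task):

adam opt;
opt.alpha = 0.001; // learning rate; illustrative value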

Now you can start the training:

int epochs = 50;
int batch = 20;
net.fit<mse>(opt, x_data, y_data, batch, epochs);

fit trains the network for a fixed number of epochs to generate the desired output; mse selects the mean-squared-error loss as the training objective.
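fit also accepts optional callbacks that fire after every minibatch and every epoch, which is how the bundled examples report progress. A minimal sketch (the counter and message are illustrative; needs <iostream>):

int cnt = 0;
net.fit<mse>(opt, x_data, y_data, batch, epochs,
             [&]() { ++cnt; },                                                          // called after each minibatch
             [&]() { std::cout << "epoch finished, " << cnt << " batches" << std::endl; }); // called after each epoch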

If you don't have the target vectors but have the class IDs, you can alternatively use train, which trains the network for a fixed number of epochs for a classification task:

net.train<mse>(opt, x_data, y_labels, batch, epochs);
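For instance, the input and label containers have these types; a sketch with dummy data shaped for the network declared above (values and names are illustrative):

std::vector<vec_t> x_data(20, vec_t(32 * 32, 0.0)); // 20 blank 32x32 single-channel images
std::vector<label_t> y_labels(20, 0);               // every sample labeled as class 0
net.train<mse>(opt, x_data, y_labels, batch, epochs);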

Validate the training result:

result res = net.test(x_data, y_labels);          // test and generate a confusion matrix for a classification task
float_t loss = net.get_loss<mse>(x_data, y_data); // loss value for a regression task (the smaller, the better)
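The returned result can be inspected directly; a short sketch, assuming the counters and print_detail helper used by the bundled MNIST example:

std::cout << res.num_success << "/" << res.num_total << std::endl; // correct / total
res.print_detail(std::cout);                                       // prints the confusion matrix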

Generate prediction on the new data:

auto y_vector = net.predict(x_data);          // executes forward propagation and returns the output vector
auto y_label = net.predict_max_label(x_data); // returns the label with the highest score
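The returned vec_t is an ordinary std::vector of float_t, so per-class scores can be scanned directly; a minimal sketch (x_single, a single 32x32 input, is illustrative):

vec_t scores = net.predict(x_single);
for (size_t i = 0; i < scores.size(); i++)
  std::cout << "class " << i << ": " << scores[i] << std::endl;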

Save the trained parameters and the model:

net.save("my-network");
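To restore it later, load it back into a freshly declared network; a minimal sketch, assuming the matching network::load call:

network<sequential> net2;
net2.load("my-network");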

For a more in-depth look at tiny-dnn, check out the MNIST classification example, which walks through an end-to-end workflow. You will find tiny-dnn's API in the How-to section.