Perl's Deep Learning Library: AI::MXNet

Perl has a deep learning library: AI::MXNet. It is a deep learning library written in C++ that can be used from Perl through bindings.

First of all, if you want to actually try deep learning, this library makes it relatively easy to get started. If you're looking for a Perl library for deep learning, AI::MXNet is the one to reach for.
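As a quick sanity check that everything works, here is a minimal sketch (assuming AI::MXNet and the underlying mxnet library have already been installed, for example with cpanm AI::MXNet) that creates a small NDArray and does some arithmetic on it:

use strict;
use warnings;
use AI::MXNet qw(mx);

# create a 2x3 NDArray of ones on the CPU
my $x = mx->nd->ones([2, 3]);

# element-wise arithmetic is overloaded, as in the Python API
my $y = $x * 2 + 1;

# aspdl converts the NDArray to a PDL object so it can be printed
print $y->aspdl, "\n";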

Official support for MXNet on Amazon AWS

MXNet has official support on Amazon AWS.

Flexibility and choice

MXNet supports a wide range of programming languages, including C++, JavaScript, Python, R, MATLAB, Julia, Scala, Clojure, and Perl, so you can get started in a language you already know. Whichever language you use to build a model, the backend compiles everything down to C++ for maximum performance.

Apache MXNet on AWS

AI::MXNet is one of the few Perl libraries that Amazon officially supports ...

Sample of image generation using deep learning

According to an English-language blog post by the author of AI::MXNet, Sergey V. Kolychev, you can also generate images using deep learning. The author himself apparently works on deep learning for natural language processing in his job.

I hope you enjoy this example and create a lot of nice pictures. Below is an image that the sample script generated from a photo of Kyuubi and a classic painting in a quite different style.

Machine learning in Perl: Kyuubi goes to a (Model) Zoo during The Starry Night.

Original image

Deep learning generated image

It looks as if the image was generated from the original by learning the style of a Van Gogh-like painting.

How to use AI::MXNet

Here's how to use it, based on the sample code.
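Before the full sample, here is a tiny sketch of the symbolic API it relies on. A symbol only describes the computation graph; nothing runs until the graph is bound to data. The layer names below are made up for illustration:

use strict;
use warnings;
use AI::MXNet qw(mx);

# symbols describe a computation graph; nothing is executed yet
my $x   = mx->symbol->Variable('x');
my $fc  = mx->symbol->FullyConnected(data => $x, name => 'fc', num_hidden => 3);
my $out = mx->symbol->SoftmaxOutput(data => $fc, name => 'out');

# the finished graph knows which arguments (inputs, weights, labels) it needs
print join(", ", @{ $out->list_arguments }), "\n";
# should print something like: x, fc_weight, fc_bias, out_label

The full sample below composes a convolutional network the same way and then trains it on MNIST.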

## Convolutional NN for recognizing hand-written digits in MNIST dataset
## It's considered "Hello, World" for Neural Networks
## For more info about the MNIST problem please refer to L<http://neuralnetworksanddeeplearning.com/chap1.html>
 
use strict;
use warnings;
use AI::MXNet qw(mx);
use AI::MXNet::TestUtils qw(GetMNIST_ubyte);
use Test::More tests => 1;
 
# symbol net
my $batch_size = 100;
 
### model
my $data = mx->symbol->Variable('data');
my $conv1 = mx->symbol->Convolution(data => $data, name => 'conv1', num_filter => 32, kernel => [3,3], stride => [2,2]);
my $bn1 = mx->symbol->BatchNorm(data => $conv1, name => "bn1");
my $act1 = mx->symbol->Activation(data => $bn1, name => 'relu1', act_type => "relu");
my $mp1 = mx->symbol->Pooling(data => $act1, name => 'mp1', kernel => [2,2], stride => [2,2], pool_type => 'max');

my $conv2 = mx->symbol->Convolution(data => $mp1, name => 'conv2', num_filter => 32, kernel => [3,3], stride => [2,2]);
my $bn2 = mx->symbol->BatchNorm(data => $conv2, name => "bn2");
my $act2 = mx->symbol->Activation(data => $bn2, name => 'relu2', act_type => "relu");
my $mp2 = mx->symbol->Pooling(data => $act2, name => 'mp2', kernel => [2,2], stride => [2,2], pool_type => 'max');
 
 
my $fl = mx->symbol->Flatten(data => $mp2, name => "flatten");
my $fc1 = mx->symbol->FullyConnected(data => $fl, name => "fc1", num_hidden => 30);
my $act3 = mx->symbol->Activation(data => $fc1, name => 'relu3', act_type => "relu");
my $fc2 = mx->symbol->FullyConnected(data => $act3, name => 'fc2', num_hidden => 10);
my $softmax = mx->symbol->SoftmaxOutput(data => $fc2, name => 'softmax');
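# Feature-map shapes as a 1x28x28 image flows through the net
# (a sketch computed from the kernel/stride settings above, no padding):
#   conv1 3x3, stride 2 -> 32 x 13 x 13
#   mp1   2x2, stride 2 -> 32 x 6 x 6
#   conv2 3x3, stride 2 -> 32 x 2 x 2
#   mp2   2x2, stride 2 -> 32 x 1 x 1
#   flatten -> 32, fc1 -> 30, fc2 -> 10 (one score per digit)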
 
# check data
GetMNIST_ubyte();
 
my $train_dataiter = mx->io->MNISTIter({
    image => "data/train-images-idx3-ubyte",
    label => "data/train-labels-idx1-ubyte",
    data_shape => [1, 28, 28],
    batch_size => $batch_size, shuffle => 1, flat => 0, silent => 0, seed => 10});
my $val_dataiter = mx->io->MNISTIter({
    image => "data/t10k-images-idx3-ubyte",
    label => "data/t10k-labels-idx1-ubyte",
    data_shape => [1, 28, 28],
    batch_size => $batch_size, shuffle => 1, flat => 0, silent => 0});
 
my $n_epoch = 1;
my $mod = mx->mod->new(symbol => $softmax);
$mod->fit(
    $train_dataiter,
    eval_data => $val_dataiter,
    optimizer_params => {learning_rate => 0.01, momentum => 0.9},
    num_epoch => $n_epoch
);
my $res = $mod->score($val_dataiter, mx->metric->create('acc'));
ok($res->{accuracy} > 0.8);
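After fit() returns, the trained module can also be used for inference. This part is not in the original sample; it is a sketch that assumes the Perl Module API mirrors Python's predict() and returns the concatenated softmax outputs for the whole iterator:

# run the trained model over the validation iterator
my $outputs = $mod->predict($val_dataiter);

# one row of 10 class probabilities per image
print "prediction matrix: ", join(" x ", @{ $outputs->shape }), "\n";

# class scores for the first validation image
print $outputs->at(0)->aspdl, "\n";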
 
## Gluon MNIST example

# The Gluon example needs a few extra imports and some settings that the
# full example in the distribution takes from command-line options.
use AI::MXNet::Gluon qw(gluon);
use AI::MXNet::Gluon::NN qw(nn);
use AI::MXNet::AutoGrad qw(autograd);
use AI::MXNet::Base;    # exports enumerate()

my $epochs       = 1;
my $lr           = 0.1;
my $momentum     = 0.9;
my $log_interval = 100;
my $cuda         = 0;   # set to 1 to train on a GPU

my $net = nn->Sequential();
$net->name_scope(sub {
    $net->add(nn->Dense(128, activation => 'relu'));
    $net->add(nn->Dense(64, activation => 'relu'));
    $net->add(nn->Dense(10));
});
$net->hybridize;
 
# data
# transformer flattens each 28x28 image into a 784-element vector and scales
# the 0-255 pixel values into the 0-1 range
sub transformer
{
    my ($data, $label) = @_;
    $data = $data->reshape([-1])->astype('float32') / 255;
    return ($data, $label);
}
my $train_data = gluon->data->DataLoader(
    gluon->data->vision->MNIST('./data', train => 1, transform => \&transformer),
    batch_size => $batch_size, shuffle => 1, last_batch => 'discard'
);
my $val_data = gluon->data->DataLoader(
    gluon->data->vision->MNIST('./data', train => 0, transform => \&transformer),
    batch_size => $batch_size, shuffle => 0
);

## validation helper (train() below calls test(); defined here so the sample runs)
sub test
{
    my $ctx = shift;
    my $metric = mx->metric->Accuracy();
    for my $d (@{ $val_data })
    {
        my ($data, $label) = @$d;
        $data  = $data->as_in_context($ctx);
        $label = $label->as_in_context($ctx);
        my $output = $net->($data);
        $metric->update([$label], [$output]);
    }
    return $metric->get;
}
 
## training
sub train
{
    my ($epochs, $ctx) = @_;
    # Collect all parameters from net and its children, then initialize them.
    $net->initialize(mx->init->Xavier(magnitude => 2.24), ctx => $ctx);
    # Trainer is for updating parameters with gradient.
    my $trainer = gluon->Trainer($net->collect_params(), 'sgd', { learning_rate => $lr, momentum => $momentum });
    my $metric = mx->metric->Accuracy();
    my $loss = gluon->loss->SoftmaxCrossEntropyLoss();

    for my $epoch (0 .. $epochs - 1)
    {
        # reset data iterator and metric at begining of epoch.
        $metric->reset();
        enumerate(sub {
            my ($i, $d) = @_;
            my ($data, $label) = @$d;
            $data = $data->as_in_context($ctx);
            $label = $label->as_in_context($ctx);
            # Start recording the computation graph with a record() section.
            # Recorded graphs can then be differentiated with backward.
            my $output;
            autograd->record(sub {
                $output = $net->($data);
                my $L = $loss->($output, $label);
                $L->backward;
            });
            # take a gradient step with batch_size equal to data.shape[0]
            $trainer->step($data->shape->[0]);
            # update metric at last.
            $metric->update([$label], [$output]);

            if ($i % $log_interval == 0 and $i > 0)
            {
                my ($name, $acc) = $metric->get();
                print "[Epoch $epoch Batch $i] Training: $name = $acc\n";
            }
        }, \@{ $train_data });
 
        my ($name, $acc) = $metric->get();
        print "[Epoch $epoch] Training: $name = $acc\n";
 
        my ($val_name, $val_acc) = test($ctx);
        print "[Epoch $epoch] Validation: $val_name = $val_acc\n"
    }
    $net->save_parameters('mnist.params');
}
 
train($epochs, $cuda ? mx->gpu(0) : mx->cpu);
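As a follow-up, the weights written out by save_parameters can be loaded back into an identically constructed network for inference. This is only a sketch: it assumes load_parameters is the counterpart of the save_parameters call above and reuses the $val_data loader defined earlier; it is not part of the original sample.

# load the trained weights back into the network
$net->load_parameters('mnist.params');

# take one batch from the validation loader and run it through the network
my ($data, $label) = @{ (@{ $val_data })[0] };
my $output = $net->($data);    # batch_size x 10 matrix of class scores

# scores for the first image in the batch, next to its true label
print "true label: ", $label->at(0)->asscalar, "\n";
print "scores: ", $output->at(0)->aspdl, "\n";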

Related Information