Benchmarking machine learning (Flux) code in Julia


I am trying to benchmark the performance of the Flux code shown below:

#model
using Flux
vgg19() = Chain(            
    Conv((3, 3), 3 => 64, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 64 => 64, relu, pad=(1, 1), stride=(1, 1)),
    MaxPool((2,2)),
    Conv((3, 3), 64 => 128, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 128 => 128, relu, pad=(1, 1), stride=(1, 1)),
    MaxPool((2,2)),
    Conv((3, 3), 128 => 256, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    MaxPool((2,2)),
    Conv((3, 3), 256 => 512, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    MaxPool((2,2)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    MaxPool((2,2)),
    flatten,
    Dense(512, 4096, relu),
    Dropout(0.5),
    Dense(4096, 4096, relu),
    Dropout(0.5),
    Dense(4096, 10),
    softmax
)
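
For context, the forward pass I would ultimately like to time takes CIFAR-10-sized input. A quick sanity check on a dummy batch (the batch size of 64 here is just for illustration) looks like this:

m = vgg19()
x = rand(Float32, 32, 32, 3, 64)  # width × height × channels × batch, CIFAR-10 sized
m(x)                              # forward pass; output is a 10 × 64 matrix after softmax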

#data

using MLDatasets: CIFAR10
using Flux: onehotbatch
# Data comes pre-normalized in Julia
trainX, trainY = CIFAR10.traindata(Float32)
testX, testY = CIFAR10.testdata(Float32)
# One hot encode labels
trainY = onehotbatch(trainY, 0:9)
testY = onehotbatch(testY, 0:9)
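
For reference, as far as I understand MLDatasets, CIFAR10.traindata(Float32) returns the images as a 32×32×3×50000 Float32 array and the labels as integers in the range 0:9 (hence the 0:9 passed to onehotbatch). A quick shape check:

size(trainX)  # expected: (32, 32, 3, 50000)
size(trainY)  # after onehotbatch: (10, 50000)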

#training

using Flux: crossentropy, @epochs
using Flux.Data: DataLoader
model = vgg19()
opt = Momentum(.001, .9)
loss(x, y) = crossentropy(model(x), y)
data = DataLoader(trainX, trainY, batchsize=64)
@epochs 100 Flux.train!(loss, params(model), data, opt)
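
To be concrete about what I want to measure: the number I care about is the steady-state time per training epoch (or per batch), separated from the one-off cost of compilation and data loading. The crudest version of that would be wrapping a single epoch in @time and discarding the first call, along these lines:

@time Flux.train!(loss, params(model), data, opt)  # first call includes compilation time
@time Flux.train!(loss, params(model), data, opt)  # closer to the steady-state epoch time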

I have tried using the tick() and tock() functions to measure the time, but they only give a single overall wall-clock reading, which is not precise enough for a detailed comparison. Several people in the community have recommended the BenchmarkTools.jl package for this. However, when I tried to benchmark a ScikitLearn model in the REPL, it produced this warning:

WARNING: redefinition of constant LogisticRegression. This may fail, cause incorrect answers, or produce other errors.

Similarly, when I tried to benchmark the code above in the REPL using @btime, it threw this error:

julia> using BenchmarkTools
julia> @btime include("C:/Users/user/code.jl")
[ Info: Epoch 1
WARNING: both Flux and BenchmarkTools export "params"; uses of it in module Main must be qualified
ERROR: LoadError: UndefVarError: params not defined
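
If I read the warning correctly, the error itself is just a name clash: both Flux and BenchmarkTools export a function called params, so the unqualified params(model) in my script becomes ambiguous once BenchmarkTools is loaded. Presumably qualifying it would get past that particular error:

@epochs 100 Flux.train!(loss, Flux.params(model), data, opt)

Even with that change, though, timing include on the whole script does not seem like the right granularity.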

What is the best way to perform a detailed benchmark of this code?

Thanks in advance.
