Lab 2
- Introduction
- Report
- Revisiting the SkillCraft1 Master Table Dataset
- Incorporating More Features
- Useful Resources
Introduction
In this lab, we will use a neural network to improve our regression model from lab 1. You should have your code from lab 1, which we will use as the basis of our training loop. This lab is based on chapter 5 of “Deep Learning with PyTorch: Essential Excerpts”, and you may want to refer to it for more details if anything is unclear. You may also want to refer to this tutorial for the basics of defining Neural Networks in PyTorch.
Report
You are required to document your work in a report, which you should write while you work on the lab. Include all requested images, and any other graphs you deem interesting, and describe what you observe. The lab text will prompt you for specific information at times, but you are expected to fill in other text to produce a coherent document. At the end of the lab, send an email with the names and carnés of the students in the group as well as the zip file containing the lab report as a pdf and all code you wrote to the two professors (markus.eger.ucr@gmail.com, marcela.alfarocordoba@ucr.ac.cr) with the subject “[PF-3115]Lab 2, carné 1, carné 2” before the start of class on 12/5. Do not include the data set in this zip file or email.
Revisiting the SkillCraft1 Master Table Dataset
Take a look at your code from last week, in particular the linear model function model: you defined it as w * x + b. (Almost) the simplest possible Neural Network consists of a single neuron representing exactly the same function. The module torch.nn contains a class Linear which defines exactly such a neural network. Update your code to use torch.nn.Linear(1,1) instead of your own model function (the parameters are the number of inputs and outputs). Instead of w and b, the neural network gives you a parameters() method, which you should pass to your optimizer. Check if the results are the same, better or worse than your results from last week.
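A minimal sketch of this change, assuming the tensor names x_train and y_train, the MSE loss, and the SGD optimizer from lab 1 (the learning rate and number of epochs are placeholder values):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# x_train and y_train are assumed to be 2D tensors of shape [N, 1]
model = nn.Linear(1, 1)  # 1 input feature, 1 output value

# pass the network's parameters (its weight and bias) to the optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(5000):
    optimizer.zero_grad()
    y_pred = model(x_train)          # forward pass through the single neuron
    loss = loss_fn(y_pred, y_train)
    loss.backward()                  # compute gradients
    optimizer.step()                 # update weight and bias
```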
Now, instead of simply using a linear model in the form of a neural network, let’s make our model a bit more complex. Neural Networks in PyTorch are organized as Modules. To create a new neural network you have to:
- Create a subclass of nn.Module
- Initialize the neural network with its layers in the __init__ method
- Implement the forward method, which takes a tensor x and sequentially passes it through the layers (a sketch follows this list)
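As an illustration, such a subclass with one hidden layer might look like this (the 5 hidden neurons and the tanh activation are just example choices, not requirements):

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self, n_hidden=5):
        super().__init__()
        # one hidden layer followed by a linear output layer
        self.hidden = nn.Linear(1, n_hidden)
        self.activation = nn.Tanh()
        self.output = nn.Linear(n_hidden, 1)

    def forward(self, x):
        # pass the input sequentially through the layers
        x = self.hidden(x)
        x = self.activation(x)
        return self.output(x)

model = SimpleNet(n_hidden=5)
```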
There are many helper functions defined by PyTorch, including all common activation functions and compositions. There is even the nn.Sequential container (documentation), which can help you avoid having to define your own class for many simple tasks.
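For instance, the same one-hidden-layer model from the sketch above could be written with nn.Sequential, again with example sizes:

```python
import torch.nn as nn

# equivalent to the SimpleNet class above, without defining a new class
model = nn.Sequential(
    nn.Linear(1, 5),   # hidden layer with 5 neurons
    nn.Tanh(),         # activation function
    nn.Linear(5, 1),   # linear output layer
)
```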
For the purpose of this lab, define a simple neural network with a few neurons in one hidden layer, and try several different numbers of hidden neurons (e.g. 1, 3, 5, 10) and types of activation functions (e.g. tanh, linear and ReLU). Always use a linear layer as the output layer! Report your results on the training set as well as the test sets. You can plot your predicted function by passing a list of ascending values for x (e.g. generated with np.linspace) and drawing the resulting curve. Also try and see what happens when you use 2000 neurons with a tanh activation function.
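One way to plot the predicted function, as a sketch (the value range passed to np.linspace and the tensor names x_test and y_test are assumptions; adjust them to your data):

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

# ascending x values spanning the range of the data (the limits 0..250 are an assumption)
xs = np.linspace(0, 250, 200, dtype=np.float32)
xs_t = torch.from_numpy(xs).unsqueeze(1)   # shape [200, 1]

with torch.no_grad():
    ys = model(xs_t).squeeze(1).numpy()

plt.scatter(x_test.squeeze(1), y_test.squeeze(1), label="test data")
plt.plot(xs, ys, color="red", label="predicted function")
plt.legend()
plt.show()
```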
Important Note: Neural Networks in PyTorch always need a 2D tensor as input. If your input only has a single feature (as in our case), you will have to add an extra dimension to get a 2D tensor (check the shape of your tensor; it has to have two entries, where the second one is 1, e.g. [30,1] for the test set with 30 elements).
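For example, if your test inputs are stored in a 1D tensor (the name x_test is an assumption), unsqueeze adds the extra dimension:

```python
# x_test has shape [30]; unsqueeze adds a dimension of size 1 at position 1
x_test = x_test.unsqueeze(1)
print(x_test.shape)  # torch.Size([30, 1])
```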
Incorporating More Features
Change your code to read a variable number of features: change your function read_csv to accept a list of column names and return a 2-dimensional tensor, with one row per sample and one column per requested feature. Then read all data for APM, ActionLatency, TotalMapExplored, WorkersMade, UniqueUnitsMade, ComplexUnitsMade and ComplexAbilitiesUsed into a tensor. Use the first column as your y tensor and the other columns as your x tensor, and build a neural network that accepts these 6 inputs and predicts the y-values. Again, try several different neural network architectures and report which one produced the best results.
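A sketch of what this could look like, assuming pandas is used for parsing (your lab 1 read_csv may be written differently) and using a hypothetical file name SkillCraft1_Dataset.csv:

```python
import pandas as pd
import torch
import torch.nn as nn

def read_csv(path, columns):
    # return a 2D float tensor: one row per sample, one column per requested feature
    df = pd.read_csv(path)
    return torch.tensor(df[columns].values, dtype=torch.float32)

columns = ["APM", "ActionLatency", "TotalMapExplored", "WorkersMade",
           "UniqueUnitsMade", "ComplexUnitsMade", "ComplexAbilitiesUsed"]
data = read_csv("SkillCraft1_Dataset.csv", columns)

y = data[:, 0:1]   # first column (APM) as the target, shape [N, 1]
x = data[:, 1:]    # remaining 6 columns as inputs, shape [N, 6]

# one possible architecture: 6 inputs, one hidden layer, linear output layer
model = nn.Sequential(
    nn.Linear(6, 10),
    nn.ReLU(),
    nn.Linear(10, 1),
)
```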