PyTorch is a popular open-source machine learning library built on top of the Torch library, used prominently for deep learning. As a machine learning framework it serves two primary purposes: first, it provides an interface for NumPy-like tensor operations on GPUs and CPUs; second, it supports setting up machine learning algorithms that require iterative, gradient-based optimization (including deep learning).
This competency area covers the usage of data loaders, optimizers, the torch.nn module, saving and loading parameters, and the implementation of fully connected neural networks.
- Data loaders - Ability to implement custom data loaders and awareness of the different arguments and features they provide. Loading an entire large dataset into memory is inefficient, so data loaders fetch samples on demand and serve them in batches.
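As a minimal sketch of the pattern above: a custom dataset subclasses `torch.utils.data.Dataset` and implements `__len__` and `__getitem__`, and a `DataLoader` handles batching and shuffling. The dataset here (`SquaresDataset`, a hypothetical name) holds a small in-memory tensor for brevity; a real custom dataset would typically read each sample lazily from disk in `__getitem__`.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy dataset of (x, x^2) pairs; stands in for lazy disk reads."""

    def __init__(self, n=100):
        self.x = torch.arange(n, dtype=torch.float32).unsqueeze(1)
        self.y = self.x ** 2

    def __len__(self):
        # Number of samples; the DataLoader uses this for batching.
        return len(self.x)

    def __getitem__(self, idx):
        # Return one (input, target) pair for index idx.
        return self.x[idx], self.y[idx]

# batch_size, shuffle, and num_workers are among the DataLoader
# arguments worth knowing; num_workers > 0 enables parallel loading.
loader = DataLoader(SquaresDataset(), batch_size=16, shuffle=True, num_workers=0)

xb, yb = next(iter(loader))
print(xb.shape)  # each batch has shape [16, 1]
```

With 100 samples and a batch size of 16, the loader yields six full batches and one final batch of 4.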
- Optimizers - PyTorch provides a library of gradient-based optimizers in torch.optim, such as Adam, SGD, RMSprop, Adagrad, etc.
- torch.nn - A detailed understanding of this module, which is the core of building computational graphs for machine learning. It contains a wide variety of building blocks such as convolutional layers, recurrent layers, transformer layers, loss functions, etc.
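The standard way to compose torch.nn blocks is to subclass `nn.Module`, declare layers in `__init__`, and wire them together in `forward`. A small sketch (the class name `SmallNet` and its layer sizes are illustrative choices, not anything prescribed):

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two fully connected layers: 4 inputs -> 8 hidden -> 2 outputs.
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        # Defines the computational graph for a forward pass.
        return self.fc2(torch.relu(self.fc1(x)))

net = SmallNet()
out = net(torch.randn(3, 4))        # batch of 3 samples
print(out.shape)                    # [3, 2]

# Loss functions also live in torch.nn.
loss = nn.MSELoss()(out, torch.zeros(3, 2))
```

The same module API covers the other block families mentioned above (e.g. `nn.Conv2d`, `nn.LSTM`, `nn.TransformerEncoderLayer`).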
- Saving and loading parameters - Ability to store and load entire models or just their parameters (state dicts).
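The common pattern is to save the model's `state_dict` with `torch.save` and restore it into a freshly constructed model with `load_state_dict`. A sketch, using an in-memory buffer where real code would pass a file path such as a `.pt` file:

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(3, 1)

# Save only the parameters (the recommended approach), not the whole object.
buf = io.BytesIO()  # stands in for a path like "model.pt"
torch.save(model.state_dict(), buf)

# Load into a new instance with the same architecture.
buf.seek(0)
restored = nn.Linear(3, 1)
restored.load_state_dict(torch.load(buf))

print(torch.equal(model.weight, restored.weight))  # True
```

Saving the state dict rather than the pickled model keeps checkpoints robust to refactors of the model class.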
- Fully Connected Neural Networks - Implementing the entire pipeline for a deep neural network (data loading, transformations, neural network layers (torch.nn), loss function, and optimizer) for both training and testing.
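The pieces described above fit together into one training and evaluation pipeline. A compact end-to-end sketch on synthetic data (the target function y = 3x, the layer sizes, and the hyperparameters are arbitrary illustrative choices):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

# Data loading: synthetic regression data served in batches.
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 3 * x
loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)

# Layers (torch.nn), loss, and optimizer.
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# Training loop.
model.train()
for epoch in range(50):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()

# Testing: switch to eval mode and disable gradient tracking.
model.eval()
with torch.no_grad():
    test_loss = loss_fn(model(x), y).item()
print(f"test MSE: {test_loss:.4f}")
```

A real pipeline would evaluate on a held-out split and apply input transformations (e.g. from torchvision) in the dataset, but the train/eval structure is the same.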