Pytorch Backend/Wrapper

NOTE: in order to use this wrapper, the Python environment needs to be set up first

The PytorchWrapper is currently one of two wrappers for using “Deep Learning” / Neural Networks with the Learning Framework. See Using Neural Networks for an overview.

The PytorchWrapper provides two algorithms:

In both cases, the default action taken by the wrapper at training time is to try to dynamically create a neural network that can process the features specified in the feature specification file and predict the class labels. Heuristics determine how the network architecture is created and how hyperparameters such as the number of embedding dimensions, the number of hidden units, etc. are chosen.
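To illustrate the idea of choosing hyperparameters heuristically from the data, here is a minimal sketch. The formulas and function names (`embedding_dim`, `hidden_units`) are hypothetical examples of such heuristics, not the actual rules used by the wrapper:

```python
import math

def embedding_dim(vocab_size, max_dim=300):
    # Hypothetical heuristic: scale the embedding size with the
    # fourth root of the vocabulary size, capped at max_dim.
    return min(max_dim, max(2, round(vocab_size ** 0.25) * 10))

def hidden_units(input_dim, n_classes):
    # Hypothetical heuristic: geometric mean of input width and
    # number of classes, but never fewer units than classes.
    return max(n_classes, round(math.sqrt(input_dim * n_classes)))

# A nominal feature with 10000 distinct values might get a
# 100-dimensional embedding under this rule:
print(embedding_dim(10000))
```

The point of such heuristics is that they give a reasonable default without any manual tuning; the wrapper then reports the choices it made so they can be overridden.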

The generated architecture and hyperparameters are shown to the user and logged. This makes it easier to adapt the network, or to implement a Pytorch neural network module tailored to the problem at hand. The PytorchWrapper library also comes with a number of pre-defined special-purpose modules that can be used as starting points for specific tasks (e.g. NER or POS tagging).
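As a rough sketch of what such a special-purpose starting-point module might look like for a tagging task, here is a minimal PyTorch token classifier. The class name, layer sizes, and architecture (embedding, BiLSTM, per-token linear layer) are illustrative assumptions, not the library's actual modules:

```python
import torch
from torch import nn

class TokenClassifier(nn.Module):
    """Hypothetical starting point for a sequence-tagging task
    (e.g. NER or POS tagging): embedding -> BiLSTM -> per-token
    linear classifier."""

    def __init__(self, vocab_size, emb_dim, hidden, n_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden,
                            batch_first=True, bidirectional=True)
        # Bidirectional LSTM doubles the feature width.
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> logits: (batch, seq_len, n_classes)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)

model = TokenClassifier(vocab_size=100, emb_dim=16, hidden=8, n_classes=5)
logits = model(torch.zeros(2, 7, dtype=torch.long))
```

A module like this could be copied and modified (e.g. extra layers, different recurrent cell) once the logged default architecture suggests what the problem needs.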

Here is an overview of the PytorchWrapper documentation: