Using a server for Model Application

If the serverUrl parameter is specified for the [[LF_ApplyClassification]] or [[LF_ApplyRegression]] PRs, then instead of loading a model from the data directory or running a wrapper, the PR will use the server to obtain the predictions.

This works by extracting the feature vectors for all instances in a document, converting them into a JSON representation, sending that JSON to the serverUrl as a POST request, and expecting back JSON that contains the predictions for those feature vectors. See below for the details of the requests and responses that get exchanged.
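To make the exchange concrete, here is a minimal sketch of that request/response cycle in Python: a stand-in prediction server and a client that POSTs the feature vectors for a document as JSON and reads the predictions back. Note that the JSON field names used here ("instances", "predictions") and the stub server itself are illustrative assumptions, not the actual LearningFramework wire format; see the format descriptions below for the real protocol.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubPredictionHandler(BaseHTTPRequestHandler):
    """Stand-in for a real model server: answers every POST with
    one dummy prediction per feature vector it receives."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        # Hypothetical field names; the real format may differ.
        reply = {"predictions": ["label"] * len(body["instances"])}
        data = json.dumps(reply).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass  # keep the example's output quiet

# Start the stub server on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubPredictionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
server_url = f"http://127.0.0.1:{server.server_port}/"

# Client side: send all feature vectors for a document in one POST request.
payload = json.dumps({"instances": [[0.1, 0.2], [0.3, 0.4]]}).encode("utf-8")
req = urllib.request.Request(
    server_url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    predictions = json.loads(resp.read())["predictions"]

print(predictions)  # one prediction per feature vector
server.shutdown()
```

The key point the sketch illustrates is the batching: the PR collects the feature vectors for the whole document and sends them in a single request, so the server must return exactly one prediction per instance, in order.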

The PR still needs to know about the information that is normally stored in the data directory when a trained model is saved:

Request and Response formats

The request must have the following format and properties:

The response has the following format: