“Machine learning will increase productivity throughout the supply chain.” ~Dave Waters
Domain experts label the training data, and the machine learning model is trained on that labeled data. Records that are ambiguous are evaluated and validated by the experts. The resulting training data set is what the model learns from.
Machine learning plays an important role in solving complex problems. Machine learning techniques are applied to develop learning models for forecasting, and those models help generate business value for the enterprise.
The evaluated model is then used for predictions: forecasting, reporting, discovery, planning, optimization, and analysis across the organization.
Machine learning models treat the training data as a starting point, but performance on unseen data is what makes a model effective. To validate and check predictions, and to make the model trustworthy, we need data the model has never seen. The model should not simply memorize the training data when forecasting future scenarios. Training data sets may be linearly separable or not linearly separable.
Note: A linearly separable data set can be split by a line, plane, or hyperplane: the points of one class lie in one half-space and the points of the other class lie in the other.
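As a small illustration, we can check whether a candidate line separates two 2-D point sets by looking at the sign of w·x + b for each point. The points and line coefficients below are made-up examples, not from any particular data set:

```python
# Check whether a candidate line w.x + b = 0 separates two 2-D point sets.
# The points and coefficients below are illustrative assumptions.

def side(w, b, point):
    """Sign of w.x + b: tells which half-space the point falls in."""
    return w[0] * point[0] + w[1] * point[1] + b

class_a = [(1.0, 2.0), (2.0, 3.0), (1.5, 2.5)]   # one class
class_b = [(4.0, 0.5), (5.0, 1.0), (4.5, 0.0)]   # the other class

w, b = (-1.0, 1.0), 0.5   # candidate line: -x + y + 0.5 = 0

# Linearly separable by this line: all of class_a strictly on one side,
# all of class_b strictly on the other.
separable = (all(side(w, b, p) > 0 for p in class_a) and
             all(side(w, b, p) < 0 for p in class_b))
print(separable)   # True
```

If no such line (or hyperplane, in higher dimensions) exists, the set is not linearly separable.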
Machine learning models are evaluated with measures such as the number of errors and the mean squared error. The performance of the model is very important for any machine learning engagement, so evaluation is based on out-of-sample predictions on unseen data. The accuracy of those predictions is an important evaluation measure.
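Mean squared error, for instance, averages the squared differences between predicted and actual values. A minimal sketch, using made-up numbers:

```python
def mean_squared_error(actual, predicted):
    """MSE = average of the squared prediction errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual    = [3.0, 5.0, 2.5, 7.0]   # illustrative ground-truth values
predicted = [2.5, 5.0, 4.0, 8.0]   # illustrative model outputs

print(mean_squared_error(actual, predicted))   # 0.875
```

A lower MSE means the predictions sit closer to the actual values; squaring penalizes large errors more heavily than small ones.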
Model evaluation is commonly based on two methods:
- Hold out
- Cross Validation
A test data set is a prerequisite for model evaluation, and the data used for development must be kept separate from it. Otherwise, the prediction algorithm can simply memorize the label of each training point; this scenario is called overfitting. Holdout evaluation tests the model on unseen data rather than just the training set, and the model's effectiveness is measured by its accuracy on that unseen data. In the holdout method, the data set is divided into three subsets:
- Training Set
- Validation Set
- Test set (Unseen data)
The training set is used for building the forecasting model. The validation set is used for evaluating and tuning the model during the training phase. The test set, or unseen data, is used for estimating the model's future effectiveness. The holdout method is simple and fast, but its results can vary widely because the measured accuracy depends on which points happen to fall into each subset.
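The three-way split can be sketched in plain Python. The 60/20/20 proportions and fixed seed below are assumptions for illustration, not prescribed values:

```python
import random

def holdout_split(data, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle the data and cut it into train / validation / test subsets."""
    rng = random.Random(seed)
    shuffled = data[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]   # the remaining "unseen" data
    return train, val, test

data = list(range(100))                 # 100 illustrative records
train, val, test = holdout_split(data)
print(len(train), len(val), len(test))  # 60 20 20
```

Shuffling before cutting matters: if the data is ordered (say, by date or by class), contiguous slices would give the three subsets very different distributions.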
Cross-validation likewise separates held-out observations from the training data: the training portion is used for learning, and the held-out portion is used for evaluating the model's effectiveness.
K-fold cross-validation is one such method. The data set is divided into k subsets, referred to as folds, where k is typically between 5 and 10. Each fold is used in turn for testing and validating the model, and the model's performance is the average error over the k folds.
In four-fold cross-validation, the data is separated into 4 subsets and the models are trained step by step. The first model uses the first fold for testing and the other folds for training; this repeats for all 4 partitions of the data, so effectiveness is measured over 4 trials. Every data point is used once for testing and in k-1 trials for training. Because each point contributes to both fitting and evaluation, the bias and variance of the error estimate come down, making this method more reliable than a single holdout split.
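The k-fold procedure above can be sketched in plain Python. The toy "model" here, which just predicts the mean of its training folds, and the data values are illustrative assumptions standing in for a real learner:

```python
def k_fold_indices(n, k):
    """Split the indices 0..n-1 into k contiguous folds."""
    fold_size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < rem else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(values, k=4):
    """Average squared error over k folds; the 'model' predicts the
    mean of its training folds (a deliberately simple stand-in)."""
    folds = k_fold_indices(len(values), k)
    errors = []
    for test_idx in folds:
        # Train on everything outside the current fold...
        train = [values[i] for i in range(len(values)) if i not in test_idx]
        prediction = sum(train) / len(train)        # "fit": training mean
        # ...then test on the held-out fold.
        mse = sum((values[i] - prediction) ** 2 for i in test_idx) / len(test_idx)
        errors.append(mse)
    return sum(errors) / k                          # average error over k folds

values = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
print(cross_validate(values, k=4))
```

Each of the 8 points is tested exactly once and used for training in the other 3 trials, which is precisely the rotation described above.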
In the next blog article, we look into different types of machine learning algorithms, such as supervised learning and unsupervised learning.