This report is the course project for Coursera course Practical Machine Learning.

Background

In this project we’ll use data about personal activity to predict the manner in which barbell-lift participants performed their exercises. The data comes from http://groupware.les.inf.puc-rio.br/har (see Reference below).

Data Processing

data_train = read.csv("pml-training.csv")

We will read in the data and do some data cleaning before we go to the model training step. First, remove columns that have too many NA or NULL values.

# remove columns that have too many NAs,
# i.e. over 90% of the entries are NA or ""
num_nas = apply(data_train, 2, function(x) sum(is.na(x)))
# na.rm so columns containing NAs don't produce NA counts
num_nulls = apply(data_train, 2, function(x) sum(x == "", na.rm = TRUE))
columns = names(data_train)[num_nas < nrow(data_train)*0.9 & num_nulls < nrow(data_train)*0.9]
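To make the filtering logic concrete, here is a minimal, self-contained sketch of the same idea on a toy data frame (the 50% threshold and column names are illustrative only; the report uses 90% on the real data):

```r
# toy data: one clean column, one mostly-NA column, one mostly-empty column
df <- data.frame(
  good  = c(1, 2, 3, 4),
  bad   = c(NA, NA, NA, 1),    # 75% NA
  empty = c("", "", "", "x"),  # 75% ""
  stringsAsFactors = FALSE
)
num_nas   <- apply(df, 2, function(x) sum(is.na(x)))
num_nulls <- apply(df, 2, function(x) sum(x == "", na.rm = TRUE))
# keep columns where fewer than 50% of entries are NA or ""
keep <- names(df)[num_nas < nrow(df) * 0.5 & num_nulls < nrow(df) * 0.5]
keep  # only "good" survives
```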

Then we notice that, of the remaining feature columns, the first few are not useful, as they are row indices, user names, timestamps, and window flags.

# the first few columns are also not useful
# as they are indices, names, timestamps, window flags
columns = setdiff(columns,names(data_train)[1:7])

# so the data is
data_train = data_train[,columns]

Prediction Model Training

Now that we have cleaned our dataset, we can perform our model training.

We will try different classifiers (see https://topepo.github.io/caret/available-models.html for all the models the caret package provides) to predict the “classe” target variable in the dataset.

Let’s set up k-fold cross validation first.

suppressPackageStartupMessages(library(caret))
# set up 5-fold cross validation
set.seed(42)
cv_control <- trainControl(method="cv", number=5, savePredictions = TRUE)
# preprocessing (e.g. centering and scaling) could also be applied here,
# via the preProcess argument of train()
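As a sketch of that preprocessing option: caret’s train() accepts a preProcess argument, which fits the centering/scaling inside each resampling fold so the held-out fold does not leak into the transform. The example below uses the built-in iris data purely for illustration (the model and data here are stand-ins, not the report’s):

```r
suppressPackageStartupMessages(library(caret))
set.seed(42)
ctrl <- trainControl(method = "cv", number = 5)
# center and scale predictors inside train(); knn is distance-based,
# so unscaled features would otherwise dominate the distance metric
fit <- train(Species ~ ., data = iris, method = "knn",
             trControl = ctrl, preProcess = c("center", "scale"))
```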

Then let’s try different classifiers.

In this report, we’ll try CART, Random Forest, K Nearest Neighbors and Stochastic Gradient Boosting.

1 CART

cart_model <- train(classe~., data=data_train, trControl=cv_control, method="rpart")
confusionMatrix(predict(cart_model,newdata = data_train),data_train$classe)
Confusion Matrix and Statistics

          Reference
Prediction    A    B    C    D    E
         A 5080 1581 1587 1449  524
         B   81 1286  108  568  486
         C  405  930 1727 1199  966
         D    0    0    0    0    0
         E   14    0    0    0 1631

Overall Statistics
                                          
               Accuracy : 0.4956          
                 95% CI : (0.4885, 0.5026)
    No Information Rate : 0.2844          
    P-Value [Acc > NIR] : < 2.2e-16       
                                          
                  Kappa : 0.3407          
                                          
 Mcnemar's Test P-Value : NA              

Statistics by Class:

                     Class: A Class: B Class: C Class: D Class: E
Sensitivity            0.9104  0.33869  0.50468   0.0000  0.45218
Specificity            0.6339  0.92145  0.78395   1.0000  0.99913
Pos Pred Value         0.4970  0.50850  0.33040      NaN  0.99149
Neg Pred Value         0.9468  0.85310  0.88225   0.8361  0.89008
Prevalence             0.2844  0.19351  0.17440   0.1639  0.18382
Detection Rate         0.2589  0.06554  0.08801   0.0000  0.08312
Detection Prevalence   0.5209  0.12889  0.26638   0.0000  0.08383
Balanced Accuracy      0.7721  0.63007  0.64431   0.5000  0.72565

2 Random Forest

rf_model <- train(classe~., data=data_train, trControl=cv_control, method="ranger",verbose=F)
confusionMatrix(predict(rf_model,newdata = data_train),data_train$classe)
Confusion Matrix and Statistics

          Reference
Prediction    A    B    C    D    E
         A 5580    0    0    0    0
         B    0 3797    0    0    0
         C    0    0 3422    0    0
         D    0    0    0 3216    0
         E    0    0    0    0 3607

Overall Statistics
                                     
               Accuracy : 1          
                 95% CI : (0.9998, 1)
    No Information Rate : 0.2844     
    P-Value [Acc > NIR] : < 2.2e-16  
                                     
                  Kappa : 1          
                                     
 Mcnemar's Test P-Value : NA         

Statistics by Class:

                     Class: A Class: B Class: C Class: D Class: E
Sensitivity            1.0000   1.0000   1.0000   1.0000   1.0000
Specificity            1.0000   1.0000   1.0000   1.0000   1.0000
Pos Pred Value         1.0000   1.0000   1.0000   1.0000   1.0000
Neg Pred Value         1.0000   1.0000   1.0000   1.0000   1.0000
Prevalence             0.2844   0.1935   0.1744   0.1639   0.1838
Detection Rate         0.2844   0.1935   0.1744   0.1639   0.1838
Detection Prevalence   0.2844   0.1935   0.1744   0.1639   0.1838
Balanced Accuracy      1.0000   1.0000   1.0000   1.0000   1.0000

3 K Nearest Neighbors

knn_model <- train(classe~., data=data_train, trControl=cv_control, method="knn")
confusionMatrix(predict(knn_model,newdata = data_train),data_train$classe)
Confusion Matrix and Statistics

          Reference
Prediction    A    B    C    D    E
         A 5531   58   11   12    8
         B   16 3638   42    2   30
         C   12   50 3331   81   23
         D   19   27   24 3104   41
         E    2   24   14   17 3505

Overall Statistics
                                         
               Accuracy : 0.9739         
                 95% CI : (0.9715, 0.976)
    No Information Rate : 0.2844         
    P-Value [Acc > NIR] : < 2.2e-16      
                                         
                  Kappa : 0.9669         
                                         
 Mcnemar's Test P-Value : 5.335e-16      

Statistics by Class:

                     Class: A Class: B Class: C Class: D Class: E
Sensitivity            0.9912   0.9581   0.9734   0.9652   0.9717
Specificity            0.9937   0.9943   0.9898   0.9932   0.9964
Pos Pred Value         0.9842   0.9759   0.9525   0.9655   0.9840
Neg Pred Value         0.9965   0.9900   0.9944   0.9932   0.9936
Prevalence             0.2844   0.1935   0.1744   0.1639   0.1838
Detection Rate         0.2819   0.1854   0.1698   0.1582   0.1786
Detection Prevalence   0.2864   0.1900   0.1782   0.1638   0.1815
Balanced Accuracy      0.9924   0.9762   0.9816   0.9792   0.9841

4 Stochastic Gradient Boosting

gbm_model <- train(classe~., data=data_train, trControl=cv_control, method="gbm",verbose = F)
confusionMatrix(predict(gbm_model,newdata = data_train),data_train$classe)
Confusion Matrix and Statistics

          Reference
Prediction    A    B    C    D    E
         A 5520   71    0    3    5
         B   43 3648   72    8   24
         C   10   76 3311  100   20
         D    5    2   34 3087   36
         E    2    0    5   18 3522

Overall Statistics
                                         
               Accuracy : 0.9728         
                 95% CI : (0.9704, 0.975)
    No Information Rate : 0.2844         
    P-Value [Acc > NIR] : < 2.2e-16      
                                         
                  Kappa : 0.9656         
                                         
 Mcnemar's Test P-Value : 9.084e-16      

Statistics by Class:

                     Class: A Class: B Class: C Class: D Class: E
Sensitivity            0.9892   0.9608   0.9676   0.9599   0.9764
Specificity            0.9944   0.9907   0.9873   0.9953   0.9984
Pos Pred Value         0.9859   0.9613   0.9414   0.9757   0.9930
Neg Pred Value         0.9957   0.9906   0.9931   0.9922   0.9947
Prevalence             0.2844   0.1935   0.1744   0.1639   0.1838
Detection Rate         0.2813   0.1859   0.1687   0.1573   0.1795
Detection Prevalence   0.2853   0.1934   0.1792   0.1612   0.1808
Balanced Accuracy      0.9918   0.9757   0.9774   0.9776   0.9874

As we can see from the 5-fold cross-validated model training, the random forest does best, with 100% accuracy and a 95% confidence interval of (0.9998, 1). Note that these confusion matrices are computed on the training data, so they are optimistic estimates of out-of-sample accuracy; still, the random forest is the clear winner, so we’ll choose it to predict on the test set.
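A fairer comparison than training-set confusion matrices is to compare the models on their held-out fold accuracies, which caret collects via resamples(). The sketch below demonstrates the call on two quick models fit to the built-in iris data; the same resamples(list(...)) call applies directly to the four models trained above:

```r
suppressPackageStartupMessages(library(caret))
set.seed(42)
ctrl <- trainControl(method = "cv", number = 5, savePredictions = TRUE)
cart_fit <- train(Species ~ ., data = iris, method = "rpart", trControl = ctrl)
knn_fit  <- train(Species ~ ., data = iris, method = "knn",  trControl = ctrl)
# resamples() gathers the per-fold accuracy/kappa of each model
# on the same cross-validation splits, so the comparison is apples-to-apples
res <- resamples(list(CART = cart_fit, KNN = knn_fit))
summary(res)
```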

Prediction on the test set

data_test = read.csv("pml-testing.csv")
# keep the same feature columns as the training set
# (the test set has no "classe" column)
feature_columns = setdiff(columns, 'classe')
data_test = data_test[, feature_columns]
# do the predictions
test_preds = predict(rf_model, data_test)

So the predictions are:

library(knitr)
test_preds = data.frame(index = 1:length(test_preds), classe = test_preds)
kable(test_preds)

index classe
1 B
2 A
3 B
4 A
5 A
6 E
7 D
8 B
9 A
10 A
11 B
12 C
13 B
14 A
15 E
16 E
17 A
18 B
19 B
20 B

Reference

  1. Ugulino, W.; Cardador, D.; Vega, K.; Velloso, E.; Milidiu, R.; Fuks, H. Wearable Computing: Accelerometers’ Data Classification of Body Postures and Movements. Proceedings of 21st Brazilian Symposium on Artificial Intelligence. Advances in Artificial Intelligence - SBIA 2012. In: Lecture Notes in Computer Science. , pp. 52-61. Curitiba, PR: Springer Berlin / Heidelberg, 2012. ISBN 978-3-642-34458-9. DOI: 10.1007/978-3-642-34459-6_6.