Introduction:
Dataset: gapminder.csv
Predictors: 'internetuserate','urbanrate','employrate','lifeexpectancy','alcconsumption',
'armedforcesrate','breastcancerper100th','co2emissions','femaleemployrate','hivrate'
Target: 'polityscore'
"polityscore" measures a country's level of democracy on a scale from -10 to 10, where 10 marks the most democratic countries. I split it into two classes: scores in [-10, 0] are coded 0 and scores in (0, 10] are coded 1.
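The binarization can be sketched on a tiny hypothetical sample of polityscore values (the values below are illustrative, not from the dataset):

```python
import pandas as pd

# Hypothetical polityscore values for illustration only
scores = pd.Series([-10, -3, 0, 5, 10])

# Scores <= 0 are coded 0 (less democratic), scores > 0 are coded 1
binary = (scores > 0).astype(int)
print(binary.tolist())  # [0, 0, 0, 1, 1]
```

Note that a score of exactly 0 falls into class 0, matching the recoding function used in the code below.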
Results:
Data Partitioning:
-predictors in training dataset: 10 variables and 76 observations
-predictors in test dataset: 10 variables and 52 observations
-target in training dataset: 1 variable and 76 observations
-target in test dataset: 1 variable and 52 observations
Training/test split: 60% / 40% (test_size = 0.4)
Confusion matrix for the target_test sample:
[[ 8, 3],
[ 7, 34]]
Accuracy = 0.8077
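The reported accuracy follows directly from the confusion matrix: the diagonal holds the correct predictions (8 + 34 = 42) out of 52 test observations.

```python
import numpy as np

# Confusion matrix reported above: rows = true class, columns = predicted class
cm = np.array([[8, 3],
               [7, 34]])

# Accuracy = correct predictions (the trace) / total test observations
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 42/52 ≈ 0.8077
```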
Feature-importance score:
[ 0.08430852 0.08336156 0.09066508 0.14997917 0.0512591 0.07579398
0.11722497 0.07404398 0.15731402 0.11604963]
'femaleemployrate' has the highest importance score (0.1573).
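To see which score belongs to which predictor, the importance vector printed above can be paired with the predictor names and sorted (a sketch using the values as reported):

```python
import pandas as pd

predictor_names = ['internetuserate', 'urbanrate', 'employrate', 'lifeexpectancy',
                   'alcconsumption', 'armedforcesrate', 'breastcancerper100th',
                   'co2emissions', 'femaleemployrate', 'hivrate']

# Feature-importance scores reported above, in the same order as the predictors
importances = [0.08430852, 0.08336156, 0.09066508, 0.14997917, 0.0512591,
               0.07579398, 0.11722497, 0.07404398, 0.15731402, 0.11604963]

# Rank predictors from most to least important
ranked = pd.Series(importances, index=predictor_names).sort_values(ascending=False)
print(ranked)
```

The top of the ranking is 'femaleemployrate', followed closely by 'lifeexpectancy'.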
Accuracy scores with different numbers of trees: the code below plots test accuracy for random forests of 1 to 9 trees.
Python Code:
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
import sklearn.metrics
from sklearn import datasets
from sklearn.ensemble import ExtraTreesClassifier
data = pd.read_csv("gapminder.csv")
# Coerce the target and all predictors to numeric (invalid entries become NaN)
numeric_cols = ['polityscore', 'internetuserate', 'urbanrate', 'employrate',
                'lifeexpectancy', 'alcconsumption', 'armedforcesrate',
                'breastcancerper100th', 'co2emissions', 'femaleemployrate', 'hivrate']
for col in numeric_cols:
    data[col] = pd.to_numeric(data[col], errors='coerce')
data_clean = data.dropna()
data_clean.dtypes
data_clean.describe()
# Recode polityscore into a binary target: <= 0 -> 0, > 0 -> 1
def politysco(row):
    if row['polityscore'] <= 0:
        return 0
    elif row['polityscore'] <= 10:
        return 1

data_clean['polityscore'] = data_clean.apply(politysco, axis=1)
predictors = data_clean[['internetuserate','urbanrate','employrate','lifeexpectancy','alcconsumption',
'armedforcesrate','breastcancerper100th','co2emissions','femaleemployrate','hivrate']]
targets =data_clean['polityscore']
pred_train, pred_test, tar_train, tar_test = train_test_split(predictors, targets, test_size=.4)
print(pred_train.shape)
print(pred_test.shape)
print(tar_train.shape)
print(tar_test.shape)
#Build model on training data
from sklearn.ensemble import RandomForestClassifier
classifier=RandomForestClassifier(n_estimators=9)
classifier=classifier.fit(pred_train,tar_train)
predictions=classifier.predict(pred_test)
print(sklearn.metrics.confusion_matrix(tar_test, predictions))
print(sklearn.metrics.accuracy_score(tar_test, predictions))
# Fit an extra-trees model to estimate the relative importance of each predictor
model = ExtraTreesClassifier()
model.fit(pred_train,tar_train)
# display the relative importance of each attribute
print(model.feature_importances_)
# Test accuracy for random forests with 1 to 9 trees
trees = range(1, 10)
accuracy = np.zeros(len(trees))
for idx, n in enumerate(trees):
    classifier = RandomForestClassifier(n_estimators=n)
    classifier = classifier.fit(pred_train, tar_train)
    predictions = classifier.predict(pred_test)
    accuracy[idx] = sklearn.metrics.accuracy_score(tar_test, predictions)

print('Accuracy Scores with different number of trees')
plt.cla()
plt.plot(trees, accuracy)
plt.show()