Text Classification with scikit-learn

I could not find a unified benchmark in text-mining papers, so I had to run the experiments myself. If any reader knows of published classification results on 20newsgroups or another good public dataset (ideally for all classes; full or partial feature sets are both fine), please leave a comment pointing me to the current benchmark. Many thanks!
Now, on to the main content. The 20newsgroups site offers three datasets; here we use the original one, 20news-19997.tar.gz.

The workflow breaks down into the following steps:

  • Load the dataset
  • Extract features
  • Classify
    • Naive Bayes
    • KNN
    • SVM
  • Cluster

Note: the scipy site has a reference example, but it is somewhat messy and contains bugs. Below we go through it block by block.
Environment: Python 2.7 + Scipy (scikit-learn)
1. Load the dataset
Download the dataset from 20news-19997.tar.gz, unpack it into the scikit_learn_data folder, and load the data; details are in the code comments.
# first extract the 20news_group dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups

# all categories
# newsgroup_train = fetch_20newsgroups(subset='train')

# part of the categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)


To verify that the data loaded correctly:
# print the category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Result: ['comp.graphics',
'comp.os.ms-windows.misc',
'comp.sys.ibm.pc.hardware',
'comp.sys.mac.hardware',
'comp.windows.x']
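
Beyond the category names, a couple of extra sanity checks can help; a small sketch (my addition, relying on the standard fields of the Bunch returned by fetch_20newsgroups):

# the Bunch carries the raw documents and integer labels
print len(newsgroup_train.data)      # number of training documents
print newsgroup_train.data[0][:200]  # first 200 characters of the first document
print newsgroup_train.target[:10]    # integer labels indexing into target_names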

2. Extract features

The newsgroup_train we just loaded is a collection of raw documents; we need to extract feature vectors from them (term frequencies and the like) using fit_transform.

Method 1. HashingVectorizer, with a fixed number of features
# newsgroup_train.data holds the original documents; we need to extract
# feature vectors from them in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer

# the test set is needed below, so load it here
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)

vectorizer = HashingVectorizer(stop_words='english', non_negative=True,
                               n_features=10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
fea_test = vectorizer.fit_transform(newsgroups_test.data)
# the returned feature matrices have shape [n_samples, n_features]
print 'Size of fea_train:' + repr(fea_train.shape)
print 'Size of fea_test:' + repr(fea_test.shape)
# 11314 documents, 130107 features when all categories are used
print 'The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100)

Result:
Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%

Because we kept only 10,000 words, i.e. 10,000 feature dimensions, the sparsity is not that low yet. With TfidfVectorizer the feature space in fact grows to tens of thousands of dimensions; over all samples I counted more than 130k, which makes for a very sparse matrix.
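
Why train and test dimensions always agree with this vectorizer: HashingVectorizer stores no vocabulary at all; each term is mapped into one of n_features buckets by a hash function, so any text, seen or unseen, lands in the same fixed-size space. A minimal sketch of that statelessness (my addition, not from the original post):

from sklearn.feature_extraction.text import HashingVectorizer

hv = HashingVectorizer(stop_words='english', n_features=10000)
# no fit is required: hashing learns nothing, so transform alone suffices
x = hv.transform(['the graphics card renders the scene'])
print x.shape  # (1, 10000), the same dimensionality as any other input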

**************************************************************************************************************************
As the comments above hint, TF-IDF fitted independently on train and test extracts features of different dimensionality. How do we force the two to match? There are two ways:



Method 2. CountVectorizer + TfidfTransformer

Let the two CountVectorizers share one vocabulary:
# ----------------------------------------------------
# method 2: CountVectorizer + TfidfTransformer
print '*************************\nCountVectorizer + TfidfTransformer\n*************************'
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print "the shape of train is " + repr(counts_train.shape)

# reuse the training vocabulary so test gets the same feature dimensions
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroups_test.data)
print "the shape of test is " + repr(counts_test.shape)

tfidftransformer = TfidfTransformer()
tfidf_train = tfidftransformer.fit_transform(counts_train)
# transform only: reuse the IDF weights learned on the training counts
tfidf_test = tfidftransformer.transform(counts_test)

Result:
*************************
CountVectorizer + TfidfTransformer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
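
An equivalent and arguably tidier pattern (a sketch of mine, not from the original post) is to chain the two steps in a sklearn Pipeline, fit it on the training documents only, and merely transform the test documents; this keeps both the vocabulary and the IDF weights consistent across the two sets:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

tfidf_pipe = Pipeline([
    ('count', CountVectorizer(stop_words='english', max_df=0.5)),
    ('tfidf', TfidfTransformer()),
])
tfidf_train_p = tfidf_pipe.fit_transform(newsgroup_train.data)
tfidf_test_p = tfidf_pipe.transform(newsgroups_test.data)  # same vocabulary and IDF as train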

Method 3. TfidfVectorizer

Let the two TfidfVectorizers share one vocabulary:
# method 3: TfidfVectorizer
print '*************************\nTfidfVectorizer\n*************************'
from sklearn.feature_extraction.text import TfidfVectorizer

tv = TfidfVectorizer(sublinear_tf=True,
                     max_df=0.5,
                     stop_words='english')
tfidf_train_2 = tv.fit_transform(newsgroup_train.data)
# reuse the training vocabulary for the test vectorizer
tv2 = TfidfVectorizer(vocabulary=tv.vocabulary_)
tfidf_test_2 = tv2.fit_transform(newsgroups_test.data)
print "the shape of train is " + repr(tfidf_train_2.shape)
print "the shape of test is " + repr(tfidf_test_2.shape)
analyze = tv.build_analyzer()
tv.get_feature_names()  # the extracted features/terms


Result:
*************************
TfidfVectorizer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
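
For reference, build_analyzer() returns the preprocessing-plus-tokenization callable the vectorizer applies internally, which is handy for checking how a document gets split into terms. A small usage sketch (my addition; the exact output depends on the stop-word list):

analyze = tv.build_analyzer()
print analyze('This is a text document to analyze.')
# e.g. [u'text', u'document', u'analyze'] once English stop words are removed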
Besides these, sklearn also ships a pre-packaged feature-extraction helper, fetch_20newsgroups_vectorized.



Method 4. fetch_20newsgroups_vectorized

This helper cannot pick out the features of just a few categories, though; it always returns the features of all 20 classes (see the subsetting sketch after the results below):
print '*************************\nfetch_20newsgroups_vectorized\n*************************'
from sklearn.datasets import fetch_20newsgroups_vectorized

tfidf_train_3 = fetch_20newsgroups_vectorized(subset='train')
tfidf_test_3 = fetch_20newsgroups_vectorized(subset='test')
print "the shape of train is " + repr(tfidf_train_3.data.shape)
print "the shape of test is " + repr(tfidf_test_3.data.shape)


Result:
*************************
fetch_20newsgroups_vectorized
*************************
the shape of train is (11314, 130107)
the shape of test is (7532, 130107)
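
If you only want a few of the 20 classes out of this pre-vectorized set, one workaround (a hedged sketch of mine, assuming the returned Bunch carries target_names like fetch_20newsgroups does) is to select the matching rows by label:

import numpy as np
from sklearn.datasets import fetch_20newsgroups_vectorized

bunch = fetch_20newsgroups_vectorized(subset='train')
wanted = [bunch.target_names.index(c) for c in ['comp.graphics', 'comp.windows.x']]
rows = np.where(np.in1d(bunch.target, wanted))[0]
X_sub = bunch.data[rows]    # CSR matrices support integer-array row selection
y_sub = bunch.target[rows]
print 'subset shape:', X_sub.shape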

3. Classification

3.1 Multinomial Naive Bayes Classifier

See the code & comments; they speak for themselves.
######################################################
# Multinomial Naive Bayes Classifier
print '*************************\nNaive Bayes\n*************************'
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

newsgroups_test = fetch_20newsgroups(subset='test',
                                     categories=categories)
fea_test = vectorizer.fit_transform(newsgroups_test.data)
# create the Multinomial Naive Bayes classifier
clf = MultinomialNB(alpha=0.01)
clf.fit(fea_train, newsgroup_train.target)
pred = clf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
# notice that f1_score is not equal to 2*precision*recall/(precision+recall)
# because the m_precision and m_recall we computed are averages, whereas
# metrics.f1_score() computes a weighted average, i.e. it takes the number
# of samples in each class into account.

Note the last three comment lines: they explain why f1 ≠ 2*(precision*recall)/(precision+recall) here.
The helper function calculate_result computes precision, recall, and f1:


def calculate_result(actual, pred):
    m_precision = metrics.precision_score(actual, pred)
    m_recall = metrics.recall_score(actual, pred)
    print 'predict info:'
    print 'precision:{0:.3f}'.format(m_precision)
    print 'recall:{0:0.3f}'.format(m_recall)
    print 'f1-score:{0:.3f}'.format(metrics.f1_score(actual, pred))
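
A quick worked example of that discrepancy (my own numbers, not from the post): take two equally sized classes where class 1 has precision 1.0 and recall 0.5 while class 2 has precision 0.5 and recall 1.0. Each class's F1 is 2*1.0*0.5/(1.0+0.5) ≈ 0.667, so the averaged F1 is 0.667; but plugging the averaged precision 0.75 and recall 0.75 into 2PR/(P+R) gives 0.75. The two quantities genuinely differ.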



3.2 KNN:

######################################################
# KNN Classifier
from sklearn.neighbors import KNeighborsClassifier
print '*************************\nKNN\n*************************'
knnclf = KNeighborsClassifier()  # k defaults to 5
knnclf.fit(fea_train, newsgroup_train.target)
pred = knnclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
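
If the default k = 5 underperforms, n_neighbors is the first knob to turn. A quick sweep sketch (my addition, scoring with the classifier's built-in mean accuracy):

for k in [1, 5, 10, 20]:
    knnclf_k = KNeighborsClassifier(n_neighbors=k)
    knnclf_k.fit(fea_train, newsgroup_train.target)
    print 'k={0}: accuracy {1:.3f}'.format(k, knnclf_k.score(fea_test, newsgroups_test.target))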



3.3 SVM:

######################################################
# SVM Classifier
from sklearn.svm import SVC
print '*************************\nSVM\n*************************'
svclf = SVC(kernel='linear')  # the default kernel is 'rbf'
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)



Results:
*************************
Naive Bayes
*************************
predict info:
precision:0.764
recall:0.759
f1-score:0.760
*************************
KNN
*************************
predict info:
precision:0.642
recall:0.635
f1-score:0.636
*************************
SVM
*************************
predict info:
precision:0.777
recall:0.774
f1-score:0.774
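
An aside on the SVM step (my suggestion, not part of the original experiments): for a linear kernel on high-dimensional sparse text, sklearn's LinearSVC is usually much faster than SVC(kernel='linear') and can be dropped in the same way:

from sklearn.svm import LinearSVC

svclf2 = LinearSVC()
svclf2.fit(fea_train, newsgroup_train.target)
pred = svclf2.predict(fea_test)
calculate_result(newsgroups_test.target, pred)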

4. Clustering

######################################################
# KMeans clustering
from sklearn.cluster import KMeans
print '*************************\nKMeans\n*************************'
pred = KMeans(n_clusters=5)
pred.fit(fea_test)
calculate_result(newsgroups_test.target, pred.labels_)



Results:

*************************
KMeans
*************************
predict info:
precision:0.264
recall:0.226
f1-score:0.213
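
One caveat about scoring KMeans this way (my note): cluster ids are arbitrary, so precision/recall against the true labels depend on how the id permutation happens to line up, which partly explains the low numbers. A permutation-invariant measure such as the adjusted Rand index is safer; a minimal sketch:

from sklearn import metrics

km = KMeans(n_clusters=5)
km.fit(fea_test)
print 'adjusted Rand index: {0:.3f}'.format(
    metrics.adjusted_rand_score(newsgroups_test.target, km.labels_))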

Download the full code of this post: here


The accuracy looks rather low... so let's try again with all of the features. The results:
*************************
Naive Bayes
*************************
predict info:
precision:0.771
recall:0.770
f1-score:0.769
*************************
KNN
*************************
predict info:
precision:0.652
recall:0.645
f1-score:0.645
*************************
SVM
*************************
predict info:
precision:0.819
recall:0.816
f1-score:0.816
*************************
KMeans
*************************
predict info:
precision:0.289
recall:0.313
f1-score:0.266