1. Task
- Topic: paper classification (a data-modeling task): build a model on the existing data, then predict the category of new papers.
- Content: classify papers by category using their titles.
- Outcome: learn the basic methods of text classification, such as TF-IDF.
import json
import pandas as pd

data = []
with open("arxiv-metadata-oai-snapshot.json", 'r') as f:
    for idx, line in enumerate(f):
        d = json.loads(line)
        data.append({'title': d['title'],
                     'categories': d['categories'],
                     'abstract': d['abstract']})
        # Keep only the first 200,000 records
        if idx > 199999:
            break

df = pd.DataFrame(data)
df
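To see the streaming-parse pattern in isolation, here is a self-contained sketch that substitutes an in-memory two-record fake for the multi-gigabyte snapshot file. The field names follow the arXiv metadata schema used above; the records themselves are invented:

```python
import io
import json

import pandas as pd

# Two fake records stand in for the real snapshot, which is read
# line by line because it is too large to load at once.
fake_snapshot = io.StringIO(
    '{"title": "Paper A", "categories": "cs.CV", "abstract": "About vision."}\n'
    '{"title": "Paper B", "categories": "math.NA cs.LG", "abstract": "About numerics."}\n'
)

data = []
for idx, line in enumerate(fake_snapshot):
    d = json.loads(line)                     # each line is one JSON record
    data.append({'title': d['title'],
                 'categories': d['categories'],
                 'abstract': d['abstract']})
    if idx > 199999:                         # same cap as above
        break

df = pd.DataFrame(data)
print(df.shape)   # (2, 3)
```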
【Task04】Academic Trends Analysis (AcademicTrends): Paper Category Classification
2. Data Processing
Process the title and abstract columns:
df['text'] = df['title'] + df['abstract']
df['text'] = df['text'].apply(lambda x: x.replace('\n', ' '))
df['text'] = df['text'].apply(lambda x: x.lower())
df_demo = df[['categories', 'text']].copy()  # .copy() avoids a SettingWithCopyWarning in the next step
df_demo
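On a hypothetical two-row frame, the cleaning above behaves like this (the titles and abstracts are made up):

```python
import pandas as pd

# Toy illustration of the cleaning: concatenate title and abstract,
# strip newlines, lowercase.
toy = pd.DataFrame({
    'title': ['Deep Learning', 'TF-IDF Basics'],
    'abstract': ['  A survey.\nLong version.', '  Term weighting.\n'],
    'categories': ['cs.LG', 'cs.IR'],
})
toy['text'] = toy['title'] + toy['abstract']
toy['text'] = toy['text'].apply(lambda x: x.replace('\n', ' '))
toy['text'] = toy['text'].apply(lambda x: x.lower())
print(toy['text'].iloc[0])   # deep learning  a survey. long version.
```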
Process the categories column:
# Multiple categories, including sub-categories
df_demo['categories'] = df_demo['categories'].apply(lambda x: x.split(' '))
# Top-level categories only, without sub-categories
df_demo['categories_big'] = df_demo['categories'].apply(lambda x: [xx.split('.')[0] for xx in x])
df_demo
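A quick self-contained check of the two category transforms, using made-up category strings in arXiv's "archive.subject" format:

```python
import pandas as pd

# 'cs.CV math.NA' splits into two categories; taking the part before
# the dot keeps only the top-level archive ('cs', 'math', ...).
demo = pd.DataFrame({'categories': ['cs.CV math.NA', 'hep-ph']})
demo['categories'] = demo['categories'].apply(lambda x: x.split(' '))
demo['categories_big'] = demo['categories'].apply(lambda x: [xx.split('.')[0] for xx in x])
print(demo['categories_big'].tolist())   # [['cs', 'math'], ['hep-ph']]
```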
3. Basic Methods of Text Classification
Install the sklearn library with:
pip install scikit-learn
Encode the categories:
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
df_label = mlb.fit_transform(df_demo['categories_big'].iloc[:])
df_label
array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 1, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 1, 0, ..., 0, 0, 0],
       [0, 1, 0, ..., 0, 0, 0]])
df_label.shape
(200001, 19)
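For intuition, here is what MultiLabelBinarizer does on a toy label list (the three category names are illustrative): each row becomes a 0/1 indicator vector over the sorted set of distinct labels, which is why the full data yields 19 columns, one per top-level category.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Each paper's label list becomes a binary indicator row.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform([['cs', 'math'], ['physics'], ['cs']])
print(mlb.classes_)   # ['cs' 'math' 'physics']
print(y)
# [[1 1 0]
#  [0 0 1]
#  [1 0 0]]
```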
1. Feature extraction with TF-IDF
# Started  2021-01-23 00:01:06
# Finished 2021-01-23 00:01:24
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

vectorizer = TfidfVectorizer(max_features=4000)
df_tfidf = vectorizer.fit_transform(df_demo['text'])

# Split into training and validation sets
x_train, x_test, y_train, y_test = train_test_split(df_tfidf, df_label, test_size=0.2, random_state=1)

# Build the multi-label classification model
clf = MultiOutputClassifier(MultinomialNB()).fit(x_train, y_train)
print(classification_report(y_test, clf.predict(x_test)))
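The same pipeline can be run end to end on a handful of made-up titles, which makes the data flow visible: text goes through TF-IDF, and MultiOutputClassifier fits one binary Naive Bayes per label column. Everything here (titles and labels) is invented toy data, not the arXiv set:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    'image segmentation with convolutional networks',
    'object detection and image recognition',
    'convergence of sparse matrix solvers',
    'numerical integration of stiff matrix systems',
    'image features for sparse matrix visualization',
    'deep networks for object recognition',
]
labels = [['cs'], ['cs'], ['math'], ['math'], ['cs', 'math'], ['cs']]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)                 # shape (6, 2): one column per label

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)           # sparse TF-IDF matrix

clf = MultiOutputClassifier(MultinomialNB()).fit(X, y)
pred = clf.predict(vectorizer.transform(['image recognition with deep networks']))
print(mlb.inverse_transform(pred))            # predicted label tuples
```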
2. Deep Learning
Install the keras library with:
pip install keras
TensorFlow is also needed as the backend that Keras runs on:
pip install tensorflow
from sklearn.model_selection import train_test_split
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.layers import Dense, Input, Bidirectional, Conv1D, GRU
from keras.layers import Embedding, SpatialDropout1D
from keras.layers import GlobalAveragePooling1D, GlobalMaxPooling1D, concatenate
# Keras callback functions:
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.models import Model
from keras.optimizers import Adam

# Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(df_demo['text'], df_label, test_size=0.2, random_state=1)

# Set the parameters
max_features = 500
max_len = 150
embed_size = 100
batch_size = 128
epochs = 5

tokens = Tokenizer(num_words=max_features)
tokens.fit_on_texts(list(x_train) + list(x_test))

x_sub_train = tokens.texts_to_sequences(x_train)
x_sub_test = tokens.texts_to_sequences(x_test)
x_sub_train = sequence.pad_sequences(x_sub_train, maxlen=max_len)
x_sub_test = sequence.pad_sequences(x_sub_test, maxlen=max_len)

sequence_input = Input(shape=(max_len,))
# Note: trainable=False freezes a randomly initialized embedding; set
# trainable=True unless pretrained word vectors are loaded into the layer.
x = Embedding(max_features, embed_size, trainable=False)(sequence_input)
x = SpatialDropout1D(0.2)(x)
x = Bidirectional(GRU(128, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))(x)
x = Conv1D(64, kernel_size=3, padding="valid", kernel_initializer="glorot_uniform")(x)
avg_pool = GlobalAveragePooling1D()(x)
max_pool = GlobalMaxPooling1D()(x)
x = concatenate([avg_pool, max_pool])
preds = Dense(19, activation="sigmoid")(x)   # 19 top-level categories, sigmoid for multi-label

model = Model(sequence_input, preds)
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=1e-3), metrics=['accuracy'])
model.fit(x_sub_train, y_train, batch_size=batch_size, epochs=epochs)
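To make the tokenization step concrete without requiring Keras to be installed, here is a pure-Python sketch of what Tokenizer.texts_to_sequences and sequence.pad_sequences do: words are mapped to integer ids by frequency rank, and each sequence is left-padded (or truncated) to a fixed length. The helper names below are illustrative, not the Keras API:

```python
from collections import Counter

def fit_vocab(texts, num_words):
    # Rank words by frequency; keep the top num_words - 1 so ids stay < num_words.
    counts = Counter(w for t in texts for w in t.lower().split())
    ranked = [w for w, _ in counts.most_common(num_words - 1)]
    return {w: i + 1 for i, w in enumerate(ranked)}   # 0 is reserved for padding

def texts_to_sequences(texts, vocab):
    # Out-of-vocabulary words are simply dropped, as Tokenizer does by default.
    return [[vocab[w] for w in t.lower().split() if w in vocab] for t in texts]

def pad_sequences(seqs, maxlen):
    out = []
    for s in seqs:
        s = s[-maxlen:]                           # truncate from the front
        out.append([0] * (maxlen - len(s)) + s)   # left-pad with zeros
    return out

texts = ['deep learning for text', 'text classification with deep networks']
vocab = fit_vocab(texts, num_words=500)
seqs = pad_sequences(texts_to_sequences(texts, vocab), maxlen=6)
print(seqs[0])   # four word ids, left-padded with zeros to length 6
```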
This concludes the task.