Python Text Sentiment Classification


A simple text sentiment classification task is really just binary classification, and this post shows how to do it with scikit-learn. Classification has two main steps: 1) training, where the classification model learns its rules from the training set; 2) classification, where the model is first evaluated on a labeled test set (accuracy and other metrics), and, if the results are acceptable, is then used to predict the unlabeled samples.
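As a rough, minimal sketch of this two-step workflow (the toy texts and labels below are made up purely for illustration and are not the data used later in this post), training, evaluation, and prediction with scikit-learn look like this:

    from sklearn.model_selection import train_test_split
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # toy data: 1 = positive, 0 = negative
    texts = ["good movie", "great acting", "wonderful story",
             "bad plot", "terrible film", "awful ending"]
    labels = [1, 1, 1, 0, 0, 0]

    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

    clf = LogisticRegression()
    clf.fit(X_train, y_train)                               # step 1: learn the model from the training set
    print(accuracy_score(y_test, clf.predict(X_test)))      # step 2: evaluate on the held-out test set
    print(clf.predict(vec.transform(["boring but good"])))  # then predict an unlabeled sample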

The code below implements several classification methods: SVM, Naive Bayes (NB), logistic regression, decision tree, random forest, and KNN. The main code is as follows:

# coding:utf-8
from matplotlib import pyplot
import scipy as sp
import numpy as np
from sklearn.model_selection import train_test_split  # cross_validation was removed in newer scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import classification_report
from numpy import *

# ============ SVM ============ #
def svmclass(x_train, y_train):
    from sklearn.svm import SVC
    # build the classifier; the default kernel is 'rbf'
    clf = SVC(kernel='linear', probability=True)
    # training: supervised models use fit(X, y), unsupervised models use fit(X)
    clf.fit(x_train, y_train)
    return clf

# ============ Naive Bayes ============ #
def nbclass(x_train, y_train):
    from sklearn.naive_bayes import MultinomialNB
    clf = MultinomialNB(alpha=0.01).fit(x_train, y_train)
    return clf

# ============ logistic regression ============ #
def logisticclass(x_train, y_train):
    from sklearn.linear_model import LogisticRegression
    clf = LogisticRegression(penalty='l2')
    clf.fit(x_train, y_train)
    return clf

# ============ KNN ============ #
def knnclass(x_train, y_train):
    from sklearn.neighbors import KNeighborsClassifier
    clf = KNeighborsClassifier()
    clf.fit(x_train, y_train)
    return clf

# ============ decision tree ============ #
def decisionclass(x_train, y_train):
    from sklearn import tree
    clf = tree.DecisionTreeClassifier()
    clf.fit(x_train, y_train)
    return clf

# ============ random forest classifier ============ #
def random_forest_class(x_train, y_train):
    from sklearn.ensemble import RandomForestClassifier
    clf = RandomForestClassifier(n_estimators=8)  # n_estimators sets the number of weak learners (trees)
    clf.fit(x_train, y_train)
    return clf

# ============ precision and recall ============ #
def precision(clf):
    # x_test and y_test are the module-level variables defined in __main__
    doc_class_predicted = clf.predict(x_test)
    print(np.mean(doc_class_predicted == y_test))  # compare the predicted results with the true labels
    # precision and recall
    precision, recall, thresholds = precision_recall_curve(y_test, clf.predict(x_test))
    answer = clf.predict_proba(x_test)[:, 1]
    report = answer > 0.5
    print(classification_report(y_test, report, target_names=['neg', 'pos']))
    print("--------------------")
    from sklearn.metrics import accuracy_score
    print('accuracy: %.2f' % accuracy_score(y_test, doc_class_predicted))

if __name__ == '__main__':
    data = []
    labels = []
    # each line of train2.txt is assumed to start with the label character (0/1) followed by the text
    with open("train2.txt", "r") as file:
        for line in file:
            labels.append(line[0:1])
    with open("train2.txt", "r") as file:
        for line in file:
            data.append(line[1:])
    x = np.array(data)
    labels = np.array(labels)
    labels = [int(i) for i in labels]
    movie_target = labels
    # convert the texts into a vector space model
    count_vec = TfidfVectorizer(binary=False)
    # split the data set: 80% for training, 20% for testing
    x_train, x_test, y_train, y_test = train_test_split(x, movie_target, test_size=0.2)
    x_train = count_vec.fit_transform(x_train)
    x_test = count_vec.transform(x_test)
    print('************** SVM ************')
    precision(svmclass(x_train, y_train))
    print('************** Naive Bayes ************')
    precision(nbclass(x_train, y_train))
    print('************** KNN ************')
    precision(knnclass(x_train, y_train))
    print('************** logistic regression ************')
    precision(logisticclass(x_train, y_train))
    print('************** decision tree ************')
    precision(decisionclass(x_train, y_train))
    print('************** random forest ************')
    precision(random_forest_class(x_train, y_train))
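For reference, the loading code above slices each line so that the first character is the class label and the rest of the line is the review text, so train2.txt is presumably laid out like the two made-up lines below (the actual file is not shown in the post):

    1This film was fantastic, I really enjoyed it.
    0Dull plot and terrible acting, not recommended.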

The results are as follows:

aspect level 的文字情感分類試驗結果1

前段時間準備了資料,試了一下 基於attention model的aspect level文字情感分類 用python keras實現 這篇文章裡面的模型。結果和文章裡差不多,驗證集準確率在75 80 左右。但仔細去看模型 的結果,這個資料其實並不好。剔除掉單個aspect的句子,多aspect句子...

基於LSTM分類文字情感分析

文字情感分析作為nlp的常見任務,具有很高的實際應用價值。本文將採用lstm模型,訓練乙個能夠識別文字postive,neutral,negative三種情感的分類器。本文的目的是快速熟悉lstm做情感分析任務,所以本文提到的只是乙個baseline,並在最後分析了其優劣。對於真正的文字情感分析,在...

文字情感分類(二) 深度學習模型

原文寫的不錯 源 有改動。coding utf 8 created on sep 6,2016 author zhangdapeng from future import absolute import 匯入3.x的特徵函式 from future import print function imp...