Multi-Label KNN (ML-KNN) Algorithm Implementation (Python 3.6)

2021-08-10 23:40:46 · 4757 characters · 9977 reads

For a new instance, take its k nearest training instances and collect the labels carried by those k neighbors; the new instance's label set is then determined from the prior probabilities and the maximum a posteriori (MAP) rule. For details, see the multi-label learning survey by Zhou Zhi-Hua and Zhang Min-Ling.
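The MAP decision for a single label can be sketched as follows. This is a minimal illustration with made-up numbers (none of them come from the scene dataset): predict the label as present iff the prior times the likelihood under "present" beats the same product under "absent".

```python
# Hypothetical values for one label: prior P(H1) that the label is
# present, and likelihoods P(E_t | H1), P(E_t | H0) of observing
# t neighbors that carry the label.
prior_in = 0.3            # P(H1), assumed
prior_out = 1 - prior_in  # P(H0)
cond_in = 0.6             # P(E_t | H1), assumed
cond_out = 0.1            # P(E_t | H0), assumed

# MAP rule: predict the label iff P(H1)*P(E_t|H1) > P(H0)*P(E_t|H0)
score_in = prior_in * cond_in
score_out = prior_out * cond_out
predicted = 1 if score_in > score_out else -1
print(predicted)  # 1
```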

Algorithm Implementation

Data Preprocessing

mlknndemo.py

import numpy as np
import scipy.io as sio
import mlknntrain
import mlknntest

# load file
data = sio.loadmat('scene.mat')
train_bags = data['train_bags']
test_bags = data['test_bags']
train_targets = data['train_targets']
test_targets = data['test_targets']

# train_bags: flatten each 9*15 bag into a 1*135 feature vector
trainbagline = len(train_bags)
train_data = []
for i in range(trainbagline):
    linshi = train_bags[i, 0].flatten().tolist()
    train_data.append(linshi)
train_data = np.array(train_data)

# test_bags: flatten each 9*15 bag into a 1*135 feature vector
testbagline = len(test_bags)
test_data = []
for i in range(testbagline):
    linshi = test_bags[i, 0].flatten().tolist()
    test_data.append(linshi)
test_data = np.array(test_data)

# constants: num is k's value, smooth is the Laplace smoothing parameter
num = 10
smooth = 1

# training
prior, priorn, cond, condn = mlknntrain.trainclass(train_data, train_targets, num, smooth)

# testing
outputs, prelabels = mlknntest.testclass(train_data, train_targets, test_data, test_targets, num, prior, priorn, cond, condn)
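As a sanity check on the preprocessing step, here is a self-contained sketch of the bag-flattening loop. The synthetic object array below merely stands in for the cell-array structure `loadmat` returns for `train_bags` in scene.mat; the shapes (9×15 bags, 3 bags) are assumed for illustration.

```python
import numpy as np

# Synthetic stand-in for data['train_bags']: an object array where
# each cell holds one 9x15 bag, mimicking a MATLAB cell array.
train_bags = np.empty((3, 1), dtype=object)
for i in range(3):
    train_bags[i, 0] = np.arange(9 * 15).reshape(9, 15) * (i + 1)

# Same flattening loop as in mlknndemo.py
train_data = []
for i in range(len(train_bags)):
    linshi = train_bags[i, 0].flatten().tolist()
    train_data.append(linshi)
train_data = np.array(train_data)

print(train_data.shape)  # (3, 135)
```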

Training

mlknntrain.py

import sys
import numpy as np

# train
def trainclass(train_data, train_targets, num, smooth):
    # get size of the target matrix
    num_class, num_training = np.mat(train_targets).shape

    # pairwise distances; the diagonal is set to a huge value so an
    # instance is never chosen as its own neighbor
    dist_matrix = np.diagflat(np.ones((1, num_training)) * sys.maxsize)
    for i in range(num_training - 1):
        vector1 = train_data[i, :]
        for j in range(i + 1, num_training):
            vector2 = train_data[j, :]
            dist_matrix[i, j] = np.sum((vector1 - vector2) ** 2) ** 0.5
            dist_matrix[j, i] = dist_matrix[i, j]

    # prior and priorn (Laplace-smoothed label frequencies)
    prior = np.zeros((num_class, 1))
    priorn = np.zeros((num_class, 1))
    for i in range(num_class):
        tempci = np.sum(train_targets[i, :] == np.ones((1, num_training)))
        prior[i, 0] = (smooth + tempci) / (smooth * 2 + num_training)
        priorn[i, 0] = 1 - prior[i, 0]

    # cond and condn
    # sort by distance and get the neighbor indices
    dismatindex = np.argsort(dist_matrix)
    tempci = np.zeros((num_class, num + 1))
    tempnci = np.zeros((num_class, num + 1))
    for i in range(num_training):
        temp = np.zeros((1, num_class))
        # collect the label vectors of the num nearest neighbors
        neighborlabels = []
        for j in range(num):
            neighborlabels.append(train_targets[:, dismatindex[i, j]])
        neighborlabels = np.mat(neighborlabels)
        neighborlabels = np.transpose(neighborlabels)
        for j in range(num_class):
            temp[0, j] = np.sum(neighborlabels[j, :] == np.ones((1, num)))
        for j in range(num_class):
            t = int(temp[0, j])  # number of neighbors carrying label j
            if train_targets[j, i] == 1:
                tempci[j, t] = tempci[j, t] + 1
            else:
                tempnci[j, t] = tempnci[j, t] + 1

    # conditional probabilities: num_class x (num+1) matrices
    cond = np.zeros((num_class, num + 1))
    condn = np.zeros((num_class, num + 1))
    for i in range(num_class):
        temp1 = np.sum(tempci[i, :])
        temp2 = np.sum(tempnci[i, :])
        for j in range(num + 1):
            cond[i, j] = (smooth + tempci[i, j]) / (smooth * (num + 1) + temp1)
            condn[i, j] = (smooth + tempnci[i, j]) / (smooth * (num + 1) + temp2)

    return prior, priorn, cond, condn
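The prior estimate above is ordinary Laplace smoothing over the label counts. It can be checked in isolation with a tiny toy target matrix (the 2-label, 4-instance values below are made up):

```python
import numpy as np

smooth = 1
# Toy targets: 2 labels x 4 training instances (1 = present, -1 = absent)
train_targets = np.array([[1, 1, -1, 1],
                          [-1, -1, 1, -1]])
num_class, num_training = train_targets.shape

# Same prior computation as in trainclass
prior = np.zeros((num_class, 1))
for i in range(num_class):
    tempci = np.sum(train_targets[i, :] == 1)
    prior[i, 0] = (smooth + tempci) / (smooth * 2 + num_training)

# label 0: (1+3)/(2+4) = 2/3; label 1: (1+1)/(2+4) = 1/3
print(prior[:, 0])
```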

Testing

mlknntest.py

import numpy as np

def testclass(train_data, train_targets, test_data, test_targets, num, prior, priorn, cond, condn):
    num_class, num_training = np.mat(train_targets).shape
    num_class, num_testing = np.mat(test_targets).shape

    # distance matrix between every test and training instance
    distmatrix = np.zeros((num_testing, num_training))
    for i in range(num_testing):
        vector1 = test_data[i, :]
        for j in range(num_training):
            vector2 = train_data[j, :]
            distmatrix[i, j] = np.sum((vector1 - vector2) ** 2) ** 0.5

    # sort by distance and get the neighbor indices
    dismatindex = np.argsort(distmatrix)

    # compute outputs (posterior probability of each label)
    outputs = np.zeros((num_class, num_testing))
    for i in range(num_testing):
        temp = np.zeros((1, num_class))
        # collect the label vectors of the num nearest training neighbors
        neighborlabels = []
        for j in range(num):
            neighborlabels.append(train_targets[:, dismatindex[i, j]])
        neighborlabels = np.mat(neighborlabels)
        # transposition: num_class rows, num columns
        neighborlabels = np.transpose(neighborlabels)
        for j in range(num_class):
            temp[0, j] = np.sum(neighborlabels[j, :] == np.ones((1, num)))
        for j in range(num_class):
            t = int(temp[0, j])
            prob_in = prior[j, 0] * cond[j, t]
            prob_out = priorn[j, 0] * condn[j, t]
            if (prob_in + prob_out) == 0:
                outputs[j, i] = prior[j, 0]
            else:
                outputs[j, i] = prob_in / (prob_in + prob_out)

    # prediction
    prelabels = np.zeros((num_class, num_testing))
    for i in range(num_testing):
        for j in range(num_class):
            if outputs[j, i] >= 0.5:  # threshold is 0.5
                prelabels[j, i] = 1
            else:
                prelabels[j, i] = -1

    return outputs, prelabels
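The posterior-and-threshold step for one test instance and one label can also be checked standalone. All quantities below (k, the neighbor count t, and the trained probabilities) are assumed values, not taken from any real run:

```python
num = 10  # k, assumed
t = 7     # assumed: 7 of the 10 neighbors carry the label

# Assumed trained quantities for this one label
prior_j, priorn_j = 0.4, 0.6     # P(H1), P(H0)
cond_jt, condn_jt = 0.25, 0.05   # P(E_t | H1), P(E_t | H0)

prob_in = prior_j * cond_jt
prob_out = priorn_j * condn_jt
# normalized posterior, falling back to the prior when both terms vanish
output = prob_in / (prob_in + prob_out) if (prob_in + prob_out) != 0 else prior_j
prelabel = 1 if output >= 0.5 else -1
print(round(output, 4), prelabel)  # 0.7692 1
```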

Note: the above is only a note from my own learning process; if anything is improper, corrections are welcome.
