Common NumPy Operations for Deep Learning, Part 2: Indexing

2021-08-15 11:26:57 · 4,554 characters · 5,678 reads

# indexing

import numpy as np

a = np.arange(1, 13).reshape(3, 4)

b = a[:2, 1:3]

print("a:\n", a, '\n')

print("b:\n", b, '\n')

b[0, 0] = 100

print("a_aft_changeb:\n", a, '\n') # b is a view into a, so writing to b also changes a; note the difference between plain assignment and copy()

# Indexing with an integer can drop a dimension, while slicing always returns a subarray of the original

c = np.arange(1, 13).reshape(3, 4)

c_1 = c[1, :]

c_2 = c[1:2, :]

print("c_1:\n", c_1, '\n') # (4,)

print("c_2:\n", c_2, '\n') # (1,4)
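A dimension dropped by integer indexing can also be restored after the fact with `np.newaxis` (an alias for `None`) — a short sketch:

```python
import numpy as np

c = np.arange(1, 13).reshape(3, 4)
row = c[1]                    # integer index: shape (4,)
row_2d = c[1][np.newaxis, :]  # restored: shape (1, 4)

print(row.shape, row_2d.shape)         # (4,) (1, 4)
print(np.array_equal(row_2d, c[1:2]))  # True: same as the slice form
```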

# Indexing an array with arrays

twod = np.arange(1, 10).reshape(3, 3)

twod_index = twod[[0, 2], [2, 1]] # picks the elements at (0, 2) and (2, 1)

print("twod_index:\n", twod_index, '\n')

threed = np.arange(1, 9).reshape(2, 2, 2)

threed_index = threed[[0, 1, 0], [0, 0, 0], [1, 1, 0]] # picks (0, 0, 1), (1, 0, 1), (0, 0, 0)

print("threed_index\n", threed_index, '\n')

# Use an integer array to add a constant to selected elements of the original array

d = np.arange(1, 13).reshape(4, 3)

d[np.arange(3), np.arange(3)] += 10 # adds 10 to the values at (0, 0), (1, 1), (2, 2)

print("d:\n", d, '\n')
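One caveat with fancy-indexed `+=`: when an index repeats, the update is buffered and lands only once. `np.add.at` performs an unbuffered accumulation instead — a minimal sketch:

```python
import numpy as np

d = np.zeros(3)
d[[0, 0, 1]] += 1  # index 0 repeats, but only one increment lands
print(d)           # [1. 1. 0.]

d = np.zeros(3)
np.add.at(d, [0, 0, 1], 1)  # unbuffered: every index hit counts
print(d)                    # [2. 1. 0.]
```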

# Select the elements of an array that satisfy a condition

e = np.arange(1, 7).reshape(3, 2)

bool_idx = (e > 2)

print("bool_idx:\n", bool_idx, '\n')

print(e[bool_idx], '\n')

f = np.array([[1, 3], [3, 4]])

print(f[f > 2], '\n')
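Boolean masks like the ones above also combine with `np.where` and counting — a sketch:

```python
import numpy as np

e = np.arange(1, 7).reshape(3, 2)
mask = e > 2

print(np.count_nonzero(mask))  # 4 elements satisfy the condition
print(np.where(mask))          # their (row, col) index arrays
print(np.where(mask, e, 0))    # keep matches, zero out the rest
```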

x = np.arange(10)

print("x[1:7:2]:\n", x[1:7:2], '\n')

print("x[-2:10]:\n", x[-2:10], '\n')

print("x[-3:3:-1]:\n", x[-3:3:-1], '\n')

y = np.arange(1, 7).reshape(2, 3, 1)

print("y[1:2]:\n", y[1:2], '\n')

print("y[1]:\n", y[1], '\n')

print("y[..., 0]:\n", y[..., 0], '\n')

print("y[:,np.newaxis,:,:]:\n", y[:,np.newaxis,:,:], '\n') # shape (2, 1, 3, 1)
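The shape noted in the comment follows from `np.newaxis` inserting a length-1 axis; `np.expand_dims` is the functional equivalent — a sketch:

```python
import numpy as np

y = np.arange(1, 7).reshape(2, 3, 1)

print(y[:, np.newaxis, :, :].shape)     # (2, 1, 3, 1)
print(np.expand_dims(y, axis=1).shape)  # (2, 1, 3, 1): same result
print(y[..., 0].shape)                  # (2, 3): the ellipsis fills the leading axes
```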

rows = np.array([0, 3], dtype=np.intp)

cols = np.array([0, 2], dtype=np.intp)

z = np.arange(12).reshape(4, 3)

rows_new = rows[:, np.newaxis] # [[0], [3]]

print("z[rows_new, cols]:\n", z[rows_new, cols], '\n')

print("z[np.ix_(rows, cols)]:\n", z[np.ix_(rows, cols)], '\n') # these two give the same result

print("z[1:2, 1:3]:\n", z[1:2, 1:3], '\n')

print("z[1:2, [1, 2]]:\n", z[1:2, [1, 2]], '\n') # these two give the same result

xx = np.array([[1., 2.], [np.nan, 3.], [np.nan, np.nan]])

print("xx[~np.isnan(xx)]:\n", xx[~np.isnan(xx)], '\n') # all not nan

yy = np.array([[1., -1.], [-2., 3]])

yy[yy < 0] += 100 # add a constant to all negative elements

print("yy:\n", yy, '\n')
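The in-place mask assignment above has a functional counterpart in `np.where`, which leaves the original untouched — a sketch:

```python
import numpy as np

yy = np.array([[1., -1.], [-2., 3.]])
shifted = np.where(yy < 0, yy + 100, yy)  # no in-place mutation

print(shifted)  # [[ 1. 99.] [98.  3.]]
print(yy)       # original unchanged
```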

zz = np.array([[0, 1], [2, 2], [1, 1]])

rowsum = zz.sum(-1) # shape=(3,)

print("zz[rowsum <= 2, :]:\n", zz[rowsum <= 2, :], '\n')

zz = np.array([[0, 1], [1, 1], [2, 2]])

rowsum_dims = zz.sum(-1, keepdims=True) # shape=(3,1)

# print(rowsum_dims.shape)

# print("zz[rowsum_dims <= 2]:\n", zz[rowsum_dims <= 2], '\n') # raises IndexError on Python 3 (mask shape (3, 1) does not match (3, 2)); only a warning on Python 2
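The commented-out line fails because a (3, 1) boolean mask does not match zz's (3, 2) shape. Flattening the mask with `.ravel()` (or simply dropping `keepdims`) restores row selection — a sketch:

```python
import numpy as np

zz = np.array([[0, 1], [1, 1], [2, 2]])
rowsum_dims = zz.sum(-1, keepdims=True)  # shape (3, 1)

# zz[rowsum_dims <= 2] would raise IndexError: mask shape (3, 1) != (3, 2)
rows = (rowsum_dims <= 2).ravel()  # back to shape (3,)
print(zz[rows, :])                 # [[0 1] [1 1]]
```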

xy = np.arange(12).reshape(4, 3)

rows = (xy.sum(-1) % 2) == 0 # is the row sum divisible by 2?

print("rows:\n", rows, '\n')

cols = [0, 2]

print("xy[np.ix_(rows, cols)]:\n", xy[np.ix_(rows, cols)], '\n')

rows1 = rows.nonzero()[0]

print("rows1:\n", rows1, '\n')

print("xy[rows1[:, np.newaxis], cols]:\n", xy[rows1[:, np.newaxis], cols], '\n')

Output:

a:

[[ 1 2 3 4]

[ 5 6 7 8]

[ 9 10 11 12]]

b:
[[2 3]
 [6 7]]

a_aft_changeb:

[[ 1 100 3 4]

[ 5 6 7 8]

[ 9 10 11 12]]

c_1:
[5 6 7 8]

c_2:

[[5 6 7 8]]

twod_index:
[3 8]

threed_index
[2 6 1]

d:

[[11 2 3]

[ 4 15 6]

[ 7 8 19]

[10 11 12]]

bool_idx:
[[False False]
 [ True  True]
 [ True  True]]

[3 4 5 6]

[3 3 4]

x[1:7:2]:
[1 3 5]

x[-2:10]:

[8 9]

x[-3:3:-1]:

[7 6 5 4]

y[1:2]:
[[[4]
  [5]
  [6]]]

y[1]:
[[4]
 [5]
 [6]]

y[..., 0]:
[[1 2 3]
 [4 5 6]]

y[:,np.newaxis,:,:]:
[[[[1]
   [2]
   [3]]]

 [[[4]
   [5]
   [6]]]]

z[rows_new, cols]:

[[ 0 2]

[ 9 11]]

z[np.ix_(rows, cols)]:

[[ 0 2]

[ 9 11]]

z[1:2, 1:3]:

[[4 5]]

z[1:2, [1, 2]]:

[[4 5]]

xx[~np.isnan(xx)]:
[1. 2. 3.]

yy:
[[ 1. 99.]
 [98.  3.]]

zz[rowsum <= 2, :]:

[[0 1]

[1 1]]

rows:
[False  True False  True]

xy[np.ix_(rows, cols)]:

[[ 3 5]

[ 9 11]]

rows1:
[1 3]

xy[rows1[:, np.newaxis], cols]:

[[ 3 5]

[ 9 11]]
