How to implement a crawler with requests and lxml

2022-10-04 21:09:30 · 3,760 characters · 7,317 reads

As shown below:

# The requests module fetches the page
# The lxml html module builds a selector from the response
# from lxml import html
# import requests
# response = requests.get(url).content
# selector = html.fromstring(response)
# hrefs = selector.xpath("/html/body//div[@class='feed-item _j_feed_item']/a/@href")
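The pattern above can be checked without hitting a live site by feeding `html.fromstring` an inline HTML fragment; this is a minimal sketch, and the hrefs in it are made up for illustration. Note the quoting: double quotes around the XPath string so the class predicate can use single quotes inside.

```python
from lxml import html

# An inline fragment standing in for a real HTTP response body
fragment = """
<div class="feed-item _j_feed_item"><a href="/travel-scenic-spot/1.html">Day 1</a></div>
<div class="feed-item _j_feed_item"><a href="/travel-scenic-spot/2.html">Day 2</a></div>
"""
selector = html.fromstring(fragment)
# xpath() returns a plain Python list of attribute values
hrefs = selector.xpath("//div[@class='feed-item _j_feed_item']/a/@href")
# hrefs → ['/travel-scenic-spot/1.html', '/travel-scenic-spot/2.html']
```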

# Example with url = ''
# python 2.7

import requests
from lxml import html
import os

# Get the URLs of the child pages linked from the front page
def get_page_urls(url):
    response = requests.get(url).content
    # Build the selector with lxml's html module
    selector = html.fromstring(response)
    urls = []
    for i in selector.xpath("/html/body//div[@class='feed-item _j_feed_item']/a/@href"):
        urls.append(i)
    return urls

# Get the title from a child page's html (div[@class='title'])
def get_page_a_title(url):
    '''url is ziyouxing's a@href'''
    response = requests.get(url).content
    selector = html.fromstring(response)
    # xpath found with Chrome's dev tools --> /html/body//div[@class='title']/text()
    a_title = selector.xpath("/html/body//div[@class='title']/text()")
    return a_title

# Build a page selector (via lxml's html module)
def get_selector(url):
    response = requests.get(url).content
    selector = html.fromstring(response)
    return selector

# Analysing the page with Chrome's dev tools shows that the text we need
# lives mainly in div[@class='l-topic'] and div[@class='p-section']

# Get the required text content
def get_page_content(selector):
    # /html/body/div[2]/div[2]/div[1]/div[@class='l-topic']/p/text()
    page_title = selector.xpath("//div[@class='l-topic']/p/text()")
    # /html/body/div[2]/div[2]/div[1]/div[2]/div[15]/div[@class='p-section']/text()
    page_content = selector.xpath("//div[@class='p-section']/text()")
    return page_title, page_content
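One pitfall worth noting: `xpath('.../text()')` returns a *list* of strings, not a single string, so the pieces need joining before they can be concatenated or written to a file. A minimal helper sketch (the name `render_page` and the sample strings are mine):

```python
def render_page(page_title, page_content):
    # xpath('.../text()') yields lists of strings; join the title
    # fragments and the paragraph fragments before concatenating
    return ''.join(page_title) + '\n' + '\n'.join(page_content) + '\n\n'

# render_page(['A Free Trip'], ['para one', 'para two'])
# → 'A Free Trip\npara one\npara two\n\n'
```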

# Get the image URLs on the page
def get_image_urls(selector):
    imagesrcs = selector.xpath("//img[@class='_j_lazyload']/@src")
    return imagesrcs

# Get the image titles
def get_image_title(selector, num):
    # num starts from 2
    url = "/html/body/div[2]/div[2]/div[1]/div[2]/div[" + str(num) + "]/span[@class='img-an']/text()"
    if selector.xpath(url):
        image_title = selector.xpath(url)
    else:
        image_title = "map" + str(num)  # make a name up if there is none
    return image_title
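A detail that bites here: `xpath()` never returns `None` on a miss, it returns an empty list, so the fallback must test truthiness rather than compare with `None`. The check can be isolated in a tiny pure-Python sketch (the helper name `pick_image_title` is mine):

```python
def pick_image_title(xpath_results, num):
    # xpath() returns a list; an empty list (never None) means no match
    if xpath_results:
        return xpath_results[0].strip()
    return "map" + str(num)  # fall back to a generated name

# pick_image_title(['  Sunrise  '], 2) → 'Sunrise'
# pick_image_title([], 3)             → 'map3'
```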

# Download the images
def downloadimages(selector, number):
    '''number is used for counting'''
    print "downloaded %s images" % number

# Entry point: start the crawl and save the scraped data to files
if __name__ == '__main__':
    url = ''
    urls = get_page_urls(url)
    # turn to get response from html
    number = 1
    for i in urls:
        selector = get_selector(i)
        # download images
        downloadimages(selector, number)
        # get text and write into a file
        page_title, page_content = get_page_content(selector)
        result = ''.join(page_title) + '\n' + '\n'.join(page_content) + '\n\n'
        path = "/home/workspace/tour/words/result" + str(number) + "/"
        if not os.path.exists(path):
            os.makedirs(path)
        filename = path + str(number) + ".txt"
        with open(filename, 'wb') as f:
            f.write(result)
        print result
        number += 1

That completes the crawler. Always analyse the HTML structure carefully before scraping a page; some pages are generated by JavaScript. This page was fairly simple and needed no JS handling, which I'll cover in later posts.

