
Scraping Ultraman Images with Python: A Worked Example

2022-02-11 16:00:21

Target URL: http://www.ultramanclub.com/allultraman/

Tools: PyCharm, requests

Open the page in your browser.

Open the developer tools.

Click the Network tab.

Refresh the page to capture the request information.

The Request URL shown there is the address we will scrape.

Scroll to the bottom of the request headers, find User-Agent, and copy it.

Send a request to the server.

A status code of 200 means the request succeeded.

Use response.text to get the page content as text.

Some of the text comes back garbled (mojibake).

Fix it with an encode/decode round trip:

import requests

url = 'http://www.ultramanclub.com/allultraman/'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36'
}

response = requests.get(url=url, headers=headers)
html = response.text
# requests guessed ISO-8859-1, but the page is GBK: re-decode to fix the mojibake
html = html.encode('iso-8859-1').decode('gbk')
print(html)
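The round trip works because requests mis-decoded the page's GBK bytes as ISO-8859-1; encoding back to bytes and decoding as GBK recovers the original text. (Setting response.encoding = 'gbk' before reading response.text is an equivalent fix.) A minimal, offline sketch of the same round trip:

```python
# Simulate the mojibake: GBK bytes mis-decoded as ISO-8859-1
original = '奧特曼'
garbled = original.encode('gbk').decode('iso-8859-1')   # unreadable characters
fixed = garbled.encode('iso-8859-1').decode('gbk')      # round trip restores the text

print(garbled)
print(fixed)
```

This only works because every byte is a valid ISO-8859-1 character, so the mis-decoding loses no information.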

Next, extract the data we need.

Use XPath to get the page links.

To use XPath here, first install and import the parsel package.

import requests
import parsel

def get_response(html_url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36'
    }
    response = requests.get(url=html_url, headers=headers)
    return response

url = 'http://www.ultramanclub.com/allultraman/'
response = get_response(url)
html = response.text.encode('iso-8859-1').decode('gbk')
selector = parsel.Selector(html)

period_hrefs = selector.xpath('//div[@class="btn"]/a/@href')  # links to the three era pages

for period_href in period_hrefs:
    print(period_href.get())
 

The printed links are relative, so we prepend the base URL manually:

period_href = 'http://www.ultramanclub.com/allultraman/' + period_href.get()
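A more robust alternative to manual concatenation is the stdlib urllib.parse.urljoin, which resolves relative hrefs against the page they came from (the './heisei/' path below is a hypothetical example, not taken from the site):

```python
from urllib.parse import urljoin

base = 'http://www.ultramanclub.com/allultraman/'

# urljoin resolves relative hrefs, with or without a leading './'
print(urljoin(base, './heisei/'))
print(urljoin(base, 'heisei/'))
```

Both calls produce the same absolute URL, so the scraper does not need to special-case each link format.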

Enter one of the era pages.

As before, use XPath to get each Ultraman's page link.

for period_href in period_hrefs:
    period_href = 'http://www.ultramanclub.com/allultraman/' + period_href.get()
    # print(period_href)
    period_response = get_response(period_href).text
    period_html = parsel.Selector(period_response)
    lis = period_html.xpath('//div[@class="ultraheros-Contents_Generations"]/div/ul/li/a/@href')
    for li in lis:
        print(li.get())

Running this shows that these links are incomplete as well, so we clean them up the same way:

li = 'http://www.ultramanclub.com/allultraman/' + li.get().replace('./','')

With each Ultraman's page URL in hand, repeat the same request-and-parse steps one level deeper to reach the image URL:

png_url = 'http://www.ultramanclub.com/allultraman/' + li_selector.xpath('//div[@class="left"]/figure/img/@src').get().replace('../','')
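The replace('../', '') plus base-URL concatenation gives the same result as resolving the relative src against the detail page with urljoin. A sketch with hypothetical src and detail-page values (the real paths may differ):

```python
from urllib.parse import urljoin

base = 'http://www.ultramanclub.com/allultraman/'
src = '../img/ultraman.png'                      # hypothetical img/@src value

manual = base + src.replace('../', '')           # the approach used in this article
detail_page = base + 'heisei/ultraman.html'      # hypothetical detail-page URL
resolved = urljoin(detail_page, src)             # stdlib resolution of '../'

print(manual)
print(resolved)
print(manual == resolved)
```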

Full code:

import requests
import parsel
import os

dirname = "奧特曼"
if not os.path.exists(dirname):     # create the output folder if it does not exist
    os.mkdir(dirname)


def get_response(html_url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36'
    }
    response = requests.get(url=html_url, headers=headers)
    return response

url = 'http://www.ultramanclub.com/allultraman/'
response = get_response(url)
html = response.text.encode('iso-8859-1').decode('gbk')
selector = parsel.Selector(html)

period_hrefs = selector.xpath('//div[@class="btn"]/a/@href')  # links to the three era pages

for period_href in period_hrefs:
    period_href = 'http://www.ultramanclub.com/allultraman/' + period_href.get()

    period_html = get_response(period_href).text
    period_selector = parsel.Selector(period_html)
    lis = period_selector.xpath('//div[@class="ultraheros-Contents_Generations"]/div/ul/li/a/@href')
    for li in lis:
        li = 'http://www.ultramanclub.com/allultraman/' + li.get().replace('./', '')     # each Ultraman's page URL
        li_html = get_response(li).text
        li_selector = parsel.Selector(li_html)
        url = li_selector.xpath('//div[@class="left"]/figure/img/@src').get()

        if url:     # skip pages where the image XPath finds nothing (see note below)
            png_url = 'http://www.ultramanclub.com/allultraman/' + url.replace('../', '')
            png_title = li_selector.xpath('//ul[@class="lists"]/li[3]/text()').get()
            png_title = png_title.encode('iso-8859-1').decode('gbk')
            png_content = get_response(png_url).content
            with open(os.path.join(dirname, f'{png_title}.png'), 'wb') as f:
                f.write(png_content)
            print(png_title, 'image downloaded')
 

When the scraper reaches Ultraman Nexus (奈克斯特奧特曼), this XPath returns None. I debugged it for a long time without finding the cause, so the if url: check simply skips him. If anyone knows why, please share:

url = li_selector.xpath('//div[@class="left"]/figure/img/@src').get()
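One plausible cause is that the Nexus page places its image under a differently structured element, so the div.left selector matches nothing. A defensive fallback can be sketched as follows; this uses the stdlib ElementTree instead of parsel so it runs standalone, and the HTML fragment is made up, so the actual page structure is an assumption:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: the image sits under a differently named div
html = '<root><div class="right"><figure><img src="../img/next.png"/></figure></div></root>'
root = ET.fromstring(html)

img = root.find('.//div[@class="left"]/figure/img')   # the article's selector: no match here
if img is None:
    img = root.find('.//figure/img')                  # fallback: any image inside a <figure>
src = img.get('src') if img is not None else None
print(src)
```

Printing li_html for the failing page and inspecting where the image actually lives would confirm or rule this out.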

That concludes this example of scraping Ultraman images with Python. For more on the topic, search it145.com's earlier articles or browse the related articles below, and please keep supporting it145.com!

