The site http://www.hbc333.com/ is a wallpaper site offering image downloads in various resolutions, so I wanted to write a crawler script to batch-download these images.
On inspection, the listing pages for 2560*1600 wallpapers have URLs of the form http://www.hbc333.com/size/2560x1600/n/ (n is the page number),
each preview image has a path like /data/out/253/46659416-watch-dogs-wallpaper.jpg,
and the corresponding full-size image lives at http://www.hbc333.com/data/out/253/46659416-watch-dogs-wallpaper.jpg
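So mapping a preview path to its full-resolution URL is just a prefix join, and a page URL is built from the page number. A minimal sketch of those two rules (function names are mine; the sample path is the one above):

```python
# Build a listing-page URL for a page number, and turn a preview
# path into the full-resolution image URL by prefixing the site root.
BASE = "http://www.hbc333.com"

def page_url(n):
    # http://www.hbc333.com/size/2560x1600/n/  (n is the page number)
    return "%s/size/2560x1600/%d/" % (BASE, n)

def full_image_url(preview_path):
    # /data/out/... -> http://www.hbc333.com/data/out/...
    return BASE + preview_path

print(page_url(2))
print(full_image_url("/data/out/253/46659416-watch-dogs-wallpaper.jpg"))
```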
#coding=utf-8
from urllib import request
import urllib
import re

url1 = "http://www.hbc333.com/size/2560x1600/"  # start page of the wallpaper site

def getMainPage(url):
    page = request.urlopen(url)
    html = page.read()
    return html

# Each page's URL is http://www.hbc333.com/size/2560x1600/n/ where n is the page number
count = 2
oriUrl = 'http://www.hbc333.com'
while count < 3:
    newUrl = url1 + str(count) + '/'
    count = count + 1
    print(newUrl)
    html = getMainPage(newUrl)
    html = html.decode('utf-8')  # Python 3: urlopen returns bytes, decode to str
    # \. matches a literal dot; (?:-[a-z]+)+ allows any number of hyphenated words
    dir = re.findall(r'/data/out/[0-9]+/[0-9]+(?:-[a-z]+)+\.jpg', html)
    #dir = re.findall(r'/data/out/[0-9a-zA-Z]+', html)
    x = 1
    for u in dir:
        #urllib.request.urlretrieve(oriUrl + u, '%s.jpg' % x)
        print(oriUrl + u)
        x = x + 1
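The extraction step can be checked offline, without hitting the site. A small sketch against hypothetical HTML (the markup is invented; only the /data/out/ paths matter), using a pattern that escapes the dot and allows any number of hyphen-separated words so it actually matches the three-word sample filename above:

```python
import re

# Hypothetical snippet of a listing page containing one preview image.
sample_html = (
    '<div class="wall"><a href="/data/out/253/46659416-watch-dogs-wallpaper.jpg">'
    '<img src="/data/out/253/46659416-watch-dogs-wallpaper.jpg"></a></div>'
)

# (?:-[a-z]+)+ matches one or more "-word" segments; \. is a literal dot.
pattern = r'/data/out/[0-9]+/[0-9]+(?:-[a-z]+)+\.jpg'

# The same path appears in href and src, so deduplicate before downloading.
paths = sorted(set(re.findall(pattern, sample_html)))
print(paths)
```

Deduplicating matters in practice: each thumbnail usually appears in both an href and a src attribute, so a plain findall would download every image twice.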
Original post: http://www.cnblogs.com/zjlyyq/p/6445088.html