In recent years, as web applications have expanded in scope and depth, efficiently acquiring data from the web has become a goal pursued by countless companies and individuals. In the era of big data, whoever holds more data stands to gain more value, and the web crawler is the most common means of collecting that data from the web.
"Web Spider" is a vivid name for a web crawler. If the Internet is a spider web, then the Spider is a spider crawling across it. A web spider locates pages through their link addresses: starting from some page of a site (usually the home page), it reads the page's content, extracts the other links it contains, follows those links to the next pages, and loops this way until every page of the site has been fetched. If the entire Internet is treated as one site, a web spider can in principle crawl every page on the Internet by this procedure, as the sketch below illustrates.
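To make the principle concrete, here is a minimal sketch of a breadth-first spider. It is an illustration under assumptions rather than production code: the start URL, the page limit, and the regex-based link extraction are all stand-ins for what a real crawler would do (an HTML parser, politeness delays, robots.txt handling, and so on).

import re
from collections import deque
import requests

def crawl(start_url, max_pages=20):
    """Breadth-first crawl: fetch a page, harvest its links, then follow them."""
    seen = {start_url}
    queue = deque([start_url])
    pages = []
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        pages.append(url)
        # Naive extraction of absolute links; a real spider would use an HTML parser
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages

if __name__ == "__main__":
    print(crawl("https://www.example.com"))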
The most valuable thing on the Internet is data: product listings on Tmall, rental listings on Lianjia, securities information on Xueqiu, and so on. This data represents the real money of each industry; whoever holds first-hand industry data effectively leads that industry. If the data of the whole Internet is a treasure trove, then this crawler course teaches you how to mine it efficiently. Once you master crawling, you become, in a sense, the behind-the-scenes boss of every Internet information company: all of them are supplying you with valuable data for free.
Request methods supported by the requests module:
import requests

requests.get("http://httpbin.org/get")
requests.post("http://httpbin.org/post")
requests.put("http://httpbin.org/put")
requests.delete("http://httpbin.org/delete")
requests.head("http://httpbin.org/get")
requests.options("http://httpbin.org/get")
1 Basic requests
import requests

response = requests.get('https://www.jd.com/')
with open("jd.html", "wb") as f:
    f.write(response.content)
2 Requests with parameters
import requests

# Query parameters can be written directly into the URL...
response = requests.get('https://s.taobao.com/search?q=手机')
# ...or passed separately via params (requests handles the URL encoding)
response = requests.get('https://s.taobao.com/search', params={"q": "美女"})
3 Requests with headers
Example 1: setting the User-Agent via the headers parameter
import requests

response = requests.get('https://dig.chouti.com/',
                        headers={
                            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36',
                        })
Example 2:
import requests

res = requests.post('https://www.lagou.com/jobs/positionAjax.json',
                    headers={
                        'Referer': "https://www.lagou.com/jobs/list_python",
                        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
                    },
                    data={  # form data sent in the request body
                        'first': True,
                        'pn': 2,
                        'kd': 'java高级开发'
                    },
                    params={  # query-string parameters
                        'gj': '3年及以下',
                        'px': 'default',
                        'yx': '25k-50k',
                        'city': '北京',
                        'needAddtionalResult': False,
                        'isSchoolJob': 0
                    })
companies_list = res.json()
print(companies_list)
4 Requests with cookies:
import uuid
import requests

url = 'http://httpbin.org/cookies'
cookies = dict(sbid=str(uuid.uuid4()))
res = requests.get(url, cookies=cookies)
print(res.json())
5 requests.session()
import requests

# res = requests.get("https://www.zhihu.com/explore")
# print(res.cookies.get_dict())

# A session object automatically persists cookies across requests
session = requests.session()
res1 = session.get("https://www.zhihu.com/explore")
print(session.cookies.get_dict())
res2 = session.get("https://www.zhihu.com/question/30565354/answer/463324517",
                   cookies={"abs": "123"})
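To see the cookie persistence concretely, here is a small sketch against httpbin.org; the /cookies/set and /cookies endpoints are httpbin's, and this demo is an addition to the original post. The cookie set by the first request is automatically sent back on the second.

import requests

session = requests.session()
# httpbin's /cookies/set endpoint tells the client to store a cookie,
# then redirects to /cookies
session.get("http://httpbin.org/cookies/set?track=1")
# The session re-sends the stored cookie; httpbin echoes it back as JSON
print(session.get("http://httpbin.org/cookies").json())  # {'cookies': {'track': '1'}}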
1 The data parameter
requests.post() is used exactly like requests.get(); the difference is that requests.post() takes an extra data parameter, which holds the request-body data.
response = requests.post("http://httpbin.org/post", params={"a": "10"}, data={"name": "yuan"})
2 Sending JSON data
import requests

# No Content-Type specified; data is sent with the default request header:
# application/x-www-form-urlencoded
res1 = requests.post(url='http://httpbin.org/post', data={'name': 'yuan'})
print(res1.json())

# The json parameter sends the body with the header application/json
res2 = requests.post(url='http://httpbin.org/post', json={'age': "22"})
print(res2.json())
import requests

response = requests.get('https://sh.lianjia.com/ershoufang/')

# Response attributes
print(response.text)                # body decoded as text
print(response.content)             # raw body bytes
print(response.status_code)         # HTTP status code
print(response.headers)             # response headers
print(response.cookies)             # cookie jar
print(response.cookies.get_dict())  # cookies as a dict
print(response.cookies.items())     # cookies as (name, value) pairs
print(response.url)                 # final URL (after redirects)
print(response.history)             # list of redirect responses
print(response.encoding)            # encoding used to decode .text
import requests

response = requests.get('http://www.autohome.com/news')
# Autohome serves gb2312-encoded pages, while requests falls back to
# ISO-8859-1 here; without setting gbk the Chinese text is garbled
response.encoding = 'gbk'
with open("res.html", "w") as f:
    f.write(response.text)
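As a hedged alternative to hard-coding 'gbk', requests also exposes apparent_encoding, which guesses the charset from the response body itself; this snippet is an addition to the original post, and whether the guess is acceptable for a given site is something you would have to verify.

import requests

response = requests.get('http://www.autohome.com/news')
# apparent_encoding is detected from the body content, which is usually more
# reliable than the ISO-8859-1 fallback taken from the headers
response.encoding = response.apparent_encoding
with open("res.html", "w", encoding=response.encoding) as f:
    f.write(response.text)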
import requests

response = requests.get(
    'http://bangimg1.dahe.cn/forum/201612/10/200447p36yk96im76vatyk.jpg',
    stream=True,  # don't pull the whole body into memory at once
)
with open("res.png", "wb") as f:
    # f.write(response.content)
    # For a large download (say a 100 GB video), writing response.content in
    # one shot is unreasonable; iterate over the body in chunks instead
    for chunk in response.iter_content(chunk_size=1024):
        f.write(chunk)
import requests
import json

response = requests.get('http://httpbin.org/get')

res1 = json.loads(response.text)  # the roundabout way
res2 = response.json()            # get the JSON data directly
print(res1 == res2)
By default, requests automatically handles all redirects except for HEAD requests. You can trace the redirects with the response object's history attribute: Response.history is a list of the Response objects that were created in order to complete the request, sorted from the oldest to the most recent.
>>> r = requests.get('http://github.com')
>>> r.url
'https://github.com/'
>>> r.status_code
200
>>> r.history
[<Response [301]>]
In addition, you can disable redirect handling with the allow_redirects parameter:
>>> r = requests.get('http://github.com', allow_redirects=False)
>>> r.status_code
301
>>> r.history
[]
import requests
import re

# Simple application: simulating a GitHub login

# Request 1: fetch the login page
r1 = requests.get('https://github.com/login')
r1_cookie = r1.cookies.get_dict()  # grab the initial (not yet authorized) cookie
# Pull the CSRF token out of the page
authenticity_token = re.findall(r'name="authenticity_token".*?value="(.*?)"', r1.text)[0]
print("authenticity_token", authenticity_token)

# Request 2: POST to the login endpoint with the initial cookie, the CSRF
# token, and the account credentials
data = {
    'commit': 'Sign in',
    'utf8': '✓',  # the checkmark GitHub's login form submits
    'authenticity_token': authenticity_token,
    'login': 'yuanchenqi0316@163.com',
    'password': 'yuanchenqi0316'
}
r2 = requests.post('https://github.com/session',
                   data=data,
                   cookies=r1_cookie,
                   # allow_redirects=False
                   )
print(r2.status_code)      # 200
print(r2.url)              # the post-redirect page: https://github.com/
print(r2.history)          # the pre-redirect response: [<Response [302]>]
print(r2.history[0].text)  # the pre-redirect response body

with open("result.html", "wb") as f:
    f.write(r2.content)
import requests
import re

def get_html(url):
    """Fetch the page source.
    :param url:
    :return:
    """
    response = requests.get(url)
    return response.text

def parser_html(res_html):
    """Parse the fetched page and extract the data.
    :param res_html:
    :return:
    """
    ret = re.findall(
        '<div class="item">.*?<a href="(.*?)">.*?<span class="rating_num".*?>(.*?)</span>',
        res_html,
        re.S
    )
    return ret

def store(data):
    """Persist the parsed data.
    :param data:
    :return:
    """
    with open("douban.txt", "a", encoding="utf8") as f:
        for item in data:
            line = " ".join(list(item)) + "\n"
            print(line)
            f.write(line)

if __name__ == '__main__':
    # Crawl the Douban Top 250 movie list, 25 entries per page
    index = 0
    for i in range(10):
        url = "https://movie.douban.com/top250?start=%s&filter=" % index
        # Fetch the page
        res_html = get_html(url)
        # Parse it
        data = parser_html(res_html)
        # Persist the results
        store(data)
        index += 25
import requests

# Simulating Lagou's position search: visit the site and the listing page
# first so the session picks up the cookies the AJAX endpoint expects,
# then POST the search request with those cookies attached
session = requests.session()
session.get("https://www.lagou.com/")
session.get("https://www.lagou.com/jobs/list_python")
res = session.post(
    "https://www.lagou.com/jobs/positionAjax.json?px=default&city=%E5%8C%97%E4%BA%AC&needAddtionalResult=false",
    headers={
        "Referer": "https://www.lagou.com/jobs/list_python?labelWords=&fromSearch=true&suginput=",
        "X-Anit-Forge-Code": "0",
        "X-Anit-Forge-Token": None,
        "X-Requested-With": "XMLHttpRequest",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
    },
    data={
        "first": "true",
        "pn": 1,
        "kd": "python",
    }
)
print(res.text)
Original source: https://www.cnblogs.com/xiaohei001/p/9715691.html