
Crawler: Logging into the WeChat Official Account Platform



Part 1: Analyzing the login flow

Step 1: Open https://mp.weixin.qq.com/ to reach the login page.

Step 2: Enter the account name and password and click "Log in".

Step 3: After the page redirects, complete verification by scanning a QR code.

Step 4: Land on the main admin page.

A quick look at Chrome's Network tab shows that the password has been encrypted before it is sent.

Part 2: Relevant background

The requests module

Create a new .py file, import the requests module, and Ctrl+click into its source:

Requests is an HTTP library, written in Python, for human beings. Basic GET
usage:

   >>> import requests
   >>> r = requests.get('https://www.python.org')
   >>> r.status_code
   200
   >>> b'Python is a programming language' in r.content
   True

... or POST:

   >>> payload = dict(key1='value1', key2='value2')
   >>> r = requests.post('http://httpbin.org/post', data=payload)
   >>> print(r.text)
   {
     ...
     "form": {
       "key2": "value2",
       "key1": "value1"
     },
     ...
   }

That is the basic usage of requests.

There are other request helpers as well; looking at the source shows that they all end up calling the request() method, whose docstring explains the many available parameters.

:param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``name: file-like-objects`` (or ``{name: file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``(filename, fileobj)``, 3-tuple ``(filename, fileobj, content_type)``
        or a 4-tuple ``(filename, fileobj, content_type, custom_headers)``, where ``content-type`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
        to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, (cert, key) pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
Parameters of the request() method
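
As a quick illustration, here is a hedged sketch of how a few of these parameters are typically passed (the URL and values are placeholders, not part of the original post):

import requests

# placeholder endpoint; params go into the query string, headers into the request headers
r = requests.request(
    method="get",
    url="http://httpbin.org/get",
    params={"q": "weixin"},
    headers={"User-Agent": "demo"},
    timeout=(3, 10),           # (connect timeout, read timeout)
    allow_redirects=True,      # follow 3xx redirects (the default)
    verify=True,               # verify the SSL certificate (the default)
)
print(r.status_code, r.url)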

BeautifulSoup

BeautifulSoup is a module that accepts an HTML or XML string, parses it into a tree, and then provides methods for quickly locating specific elements, which makes searching for elements inside HTML or XML much simpler.

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
asdf
    <div class="title">
        <b>The Dormouse's story总共</b>
        <h1>f</h1>
    </div>
<div class="story">Once upon a time there were three little sisters; and their names were
    <a  class="sister0" id="link1">Els<span>f</span>ie</a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</div>
ad<br/>sf
<p class="story">...</p>
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")
# find the first <a> tag
tag1 = soup.find(name='a')
# find all <a> tags
tag2 = soup.find_all(name='a')
# find the tag with id=link2
tag3 = soup.select('#link2')

Installation:

pip3 install beautifulsoup4

Usage example:

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
    ...
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="lxml")

1. name, the tag's name

# tag = soup.find('a')
# name = tag.name    # get
# print(name)
# tag.name = 'span'  # set
# print(soup)

2. attrs, the tag's attributes

# tag = soup.find('a')
# attrs = tag.attrs           # get
# print(attrs)
# tag.attrs = {'ik': 123}     # set (replace all attributes)
# tag.attrs['id'] = 'iiiii'   # set a single attribute
# print(soup)

3. children, all direct child nodes

# body = soup.find('body')
# v = body.children

4. descendants, all descendant nodes (children, grandchildren, and so on)

# body = soup.find('body')
# v = body.descendants

5. clear, empty out all of a tag's children (the tag itself is kept)

# tag = soup.find('body')
# tag.clear()
# print(soup)

6. decompose, recursively remove the tag and everything inside it

# body = soup.find('body')
# body.decompose()
# print(soup)

7. extract, recursively remove the tag and return the removed tag

# body = soup.find('body')
# v = body.extract()
# print(soup)

8. decode, serialize to a string (including the current tag); decode_contents (excluding the current tag)

# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)

9. encode, serialize to bytes (including the current tag); encode_contents (excluding the current tag)

# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)

10. find, get the first matching tag

# tag = soup.find('a')
# print(tag)
# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)

11. find_all, get all matching tags

# tags = soup.find_all('a')
# print(tags)

# tags = soup.find_all('a', limit=1)
# print(tags)

# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# # tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)


# ####### lists #######
# v = soup.find_all(name=['a', 'div'])
# print(v)

# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)

# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))


# v = soup.find_all(id=['link1', 'link2'])
# print(v)

# v = soup.find_all(href=['link1', 'link2'])
# print(v)

# ####### regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)

# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)

# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)

# ####### filter by function #######
# def func(tag):
#     return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)


# ## get, read a single tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)

12. has_attr, check whether a tag has a given attribute

# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)

13. get_text, get the text inside a tag

# tag = soup.find('a')
# v = tag.get_text()
# print(v)

14. index, find a tag's index position inside another tag

# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)

# tag = soup.find('body')
# for i, v in enumerate(tag):
#     print(i, v)

15. is_empty_element, whether the tag is an empty (void) or self-closing element,

     i.e. one of: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'

# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)

16. Related tags of the current tag

# soup.next
# soup.next_element
# soup.next_elements
# soup.next_sibling
# soup.next_siblings
 
#
# tag.previous
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling
# tag.previous_siblings
 
#
# tag.parent
# tag.parents

17. Searching among a tag's related tags

# tag.find_next(...)
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)
 
# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)
 
# tag.find_parent(...)
# tag.find_parents(...)
 
# same parameters as find_all

18. select, select_one: CSS selectors

soup.select("title")
 
soup.select("p:nth-of-type(3)")
 
soup.select("body a")
 
soup.select("html head title")
 
tag = soup.select("span,a")
 
soup.select("head > title")
 
soup.select("p > a")
 
soup.select("p > a:nth-of-type(2)")
 
soup.select("p > #link1")
 
soup.select("body > a")
 
soup.select("#link1 ~ .sister")
 
soup.select("#link1 + .sister")
 
soup.select(".sister")
 
soup.select("[class~=sister]")
 
soup.select("#link1")
 
soup.select("a#link2")
 
soup.select('a[href]')

soup.select('a[href="http://example.com/elsie"]')

soup.select('a[href^="http://example.com/"]')

soup.select('a[href$="tillie"]')

soup.select('a[href*=".com/el"]')
 
 
from bs4.element import Tag
 
def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child
 
tags = soup.find(‘body‘).select("a", _candidate_generator=default_candidate_generator)
print(type(tags), tags)
 
from bs4.element import Tag
def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child
 
tags = soup.find(‘body‘).select("a", _candidate_generator=default_candidate_generator, limit=1)
print(type(tags), tags)

19. Tag content

# tag = soup.find('span')
# print(tag.string)          # get
# tag.string = 'new content' # set
# print(soup)

# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)

# tag = soup.find('body')
# v = tag.stripped_strings  # generator over the text of all inner tags
# print(v)

20. append, append a tag inside the current tag

# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)
#
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)

21. insert, insert a tag at a given position inside the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)

22. insert_after, insert_before: insert after or before the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)

23. replace_with, replace the current tag with the given tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)

24. Creating relationships between tags

# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)

25. wrap, wrap the current tag inside the given tag

# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = '我是一个新来的'
#
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)

# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)

26. unwrap, remove the current tag but keep everything it wraps

# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)

Part 3: Designing the requests

1. Visit the home page with requests.Session()

The advantage of going through a session is that we no longer have to analyze and carry cookies back and forth by hand.

r0 = se.get(url="https://mp.weixin.qq.com")

The response body is the full HTML of the login page.
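
A minimal sketch of this first step (the Session object carries the cookies through every later request; the User-Agent value is just an example):

import requests

se = requests.Session()  # keeps cookies between requests automatically
se.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/50.0.2661.87 Safari/537.36",
})
r0 = se.get("https://mp.weixin.qq.com")
print(r0.status_code)  # r0.text is the full HTML of the login page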

2. POST the credentials back

Analyzing the login in the browser shows that the login URL is:

https://mp.weixin.qq.com/cgi-bin/bizlogin?action=startlogin

and the form data sent with the request looks like this:


{
    "username": '********@qq.com',
    "pwd": pwd,
    "imgcode": None,
    "f": "json",
}

Looking at the returned content:

{"base_resp":{"err_msg":"ok","ret":0},"redirect_url":"/cgi-bin/readtemplate?t=user/validate_wx_tmpl&lang=zh_CN&account=1972124257%40qq.com&appticket=18193cc664f191a1a93e&bindalias=cg2***16&mobile=132******69&wx_protect=1&grey=1"}

The response carries a redirect_url. Joined with "https://mp.weixin.qq.com", it becomes the URL for the third request.

Note two things here. (1) Inspecting the login event shows that the password goes through a simple MD5 step before being sent. The JS source reads:

pwd: $.md5(o.substr(0,16)) -- that is, the first 16 characters of the password are MD5-hashed. (2) Every subsequent request must carry a Referer header set to the previously visited URL.
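
A small sketch of reproducing that hashing step in Python (the function name and plaintext value below are mine, for illustration only):

import hashlib

def weixin_pwd_hash(plain):
    # mirrors the page's JS: $.md5(o.substr(0, 16)) -- MD5 of the first 16 characters
    return hashlib.md5(plain[:16].encode("utf-8")).hexdigest()

pwd = weixin_pwd_hash("my-plaintext-password")  # placeholder password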

3. GET the URL assembled in step 2

Looking at the response:

It is a complete HTML page: the QR-code verification page.


Watching this page's network traffic shows that its JS keeps sending a request to one URL about once per second; evidently it is asking the server whether the QR code has been scanned yet.

Keep in mind that the HTML obtained through requests contains nothing rendered by JS, so the QR code element cannot be pulled out by parsing the page with soup:

<a class="qrcode js_qrcode" src=""></a>

So to get this QR code we have to simulate it another way: here the URL is built by hand from the pattern of the QR code's URL. (QR-code login works on the same principle as a captcha: we first request an image tied to a key, the server records that key in our cookies and waits for the phone to scan; the phone decodes the key from the image and sends it to the server, and once the server has verified it, the polling request starts returning status 1. This shows up in the later steps.)

So: the src of the QR code, as extracted in the browser, has the following format:

/cgi-bin/loginqrcode?action=getqrcode&param=4300&rd=72

Each new image corresponds to a different random rd value, so GET-ing any URL of this shape is enough (the backend presumably takes the path plus the trailing rd value and binds the QR key to our cookie while it waits for verification):

r3 = se.get(
    "https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=getqrcode&param=4300&rd=154"
)
re_r3 = r3.content
with open("reqqq.jpg","wb") as f:
    f.write(re_r3)

The code above saves the QR code image locally so it can be scanned and confirmed with my own phone (that part involves Android, so it is done by hand here), and an input() call blocks the script so the session is not lost.
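
Instead of typing into input() once the phone has confirmed, the script could poll the same "ask" URL that the page's JS hits every second and wait for status to flip to 1. A rough sketch, reusing the se session and redirect URL from the surrounding code (the response fields are assumed from what the browser shows):

import time

ask_url = ("https://mp.weixin.qq.com/cgi-bin/loginqrcode"
           "?action=ask&token=&lang=zh_CN&f=json&ajax=1")

while True:
    resp = se.get(ask_url, headers={"Referer": redirect})
    if resp.json().get("status") == 1:  # assumption: 1 means the scan was confirmed
        break
    time.sleep(1)                       # poll roughly once per second, like the page's JS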

4. How a normal browser login proceeds automatically after the scan

There are two ways to see what actually happens here:

1. Read the page source: the request the JS sends once per second keeps returning status 0, and only after the scan is confirmed does it return 1. The page then sends one more POST to:

https://mp.weixin.qq.com/cgi-bin/bizlogin?action=login&token=&lang=zh_CN

2. Use Chrome's Preserve log option to catch that request and inspect its URL.

That POST is essentially telling the server that everything is ready except the token. The server replies with the URL of the main page (inside the redirect field):

https://mp.weixin.qq.com/cgi-bin/home?t=home/index&lang=zh_CN&token=54546879
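
A short sketch of pulling the token out and building that URL, much as the full script below does (r5 here stands for the response to the action=login POST):

import re

token = re.findall(r"token=(\d+)", r5.text)[0]
home_url = ("https://mp.weixin.qq.com/cgi-bin/home"
            "?t=home/index&lang=zh_CN&token=%s" % token)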

5. One final GET of that URL lands on the main page

r6 = se.get(
        url="https://mp.weixin.qq.com/cgi-bin/home?t=home/index&lang=zh_CN&token=%s"%(token),
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1",
        },
    )


Login done. Next up: parsing pages with BeautifulSoup and sending messages...

Source code

# _*_ coding:utf-8 _*_
# _author:khal_Cgg
# _date:2017/2/10
import hashlib
def create_md5(need_date):
    m = hashlib.md5()
    m.update(bytes(str(need_date), encoding="utf-8"))
    return m.hexdigest()

pwd = create_md5("*******")

import requests
se = requests.Session()
r0 = se.get(
    url="https://mp.weixin.qq.com"
)
print("===================>", r0.text)
r1 = se.post(url="https://mp.weixin.qq.com/cgi-bin/bizlogin?action=startlogin",
             data={
                "username": "*******@qq.com",
                "pwd": pwd,
                "imgcode": None,
                "f": "json",
             },
             headers={
                "Referer": "https://mp.weixin.qq.com",
                "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.87 Safari/537.36"
             },
             )

print("===================>", r1.text)
redirect = r1.json()["redirect_url"]
redirect = "https://mp.weixin.qq.com" + redirect
r2 = se.request(
    method="get",
    url=redirect,
)
print("===================>", r2.text)
r3 = se.get(
    "https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=getqrcode&param=4300&rd=154"
)
re_r3 = r3.content
with open("reqqq.jpg", "wb") as f:
    f.write(re_r3)
from bs4 import BeautifulSoup
import re
# soup = BeautifulSoup(erweima, "html.parser")
# tag_erweima = soup.find_all(name="img", attrs={"class": "qrcode js_qrcode"})
# # print(tag_erweima)
# # print(r2.text)
print(redirect)
# aim_text = r1.text
# print(aim_text)
# soup = BeautifulSoup(aim_text, "html.parser")
# tete = soup.find_all(name="div", attrs={"class": "user_info"})
# print(tete)
# token = re.findall(r".*token=(\d+)", aim_text)
# token = "1528871467"

# print(token)
# user_send_form = {
#     "token": token,
#     "lang": "zh_cn",
#     "f": "json",
#     "ajax": "1",
#     "random": "0.7277543939038833",
#     "user_opnid": 'oDm6kwV1TS913EeqE7gxMyTLrBcU'
# }
# r3 = se.post(
#     url="https://mp.weixin.qq.com/cgi-bin/user_tag?action=get_fans_info",
#     data=user_send_form,
#     headers={
#         "Referer": "https://mp.weixin.qq.com",
#     }
# )
# print(
#     r3.text,
# )
yanzheng = input("===>")
if yanzheng == "1":
    r4 = se.get(
        url="https://mp.weixin.qq.com/cgi-bin/loginqrcode?action=ask&token=&lang=zh_CN&token=&lang=zh_CN&f=json&ajax=1&random=0.28636331791065484",
        headers={
                  "Referer": redirect,
                  "Upgrade-Insecure-Requests": "1"
              },
    )
    # print(r4.text)
    # print(r4.cookies)
    r5 = se.post(
        url="https://mp.weixin.qq.com/cgi-bin/bizlogin?action=login&token=&lang=zh_CN",
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1"
        },
    )
    # print(r4.text)
    end = r5.text
    token = re.findall(r".*token=(\d+)", end)[0]
    print(token, type(token))
    r6 = se.get(
        url="https://mp.weixin.qq.com/cgi-bin/home?t=home/index&lang=zh_CN&token=%s" % (token),
        headers={
            "Referer": redirect,
            "Upgrade-Insecure-Requests": "1",
        },
    )
    print(r6.text)

Shortcomings

  • Why doesn't allow_redirects work here? Even with it set, the redirect is not followed; my understanding of the requests module is not deep enough.
  • A proper analysis of the cookies: which ones actually matter and which do not.

 


Original article: http://www.cnblogs.com/khal-Cgg/p/6388893.html
