
Using BeautifulSoup in Python 3 to grab tags with id='xiaodeng' whose href matches the regex 'elsie'

Posted: 2016-11-13 22:07:48


# -*- coding:utf-8 -*-
#Python 3
#XiaoDeng
#http://tieba.baidu.com/p/2460150866
#Multiple named keyword arguments can be used to filter on several tag attributes at once


from bs4 import BeautifulSoup
import urllib.request
import re


#If the source is a URL instead, the page can be fetched like this:
#html_doc = "http://tieba.baidu.com/p/2460150866"
#req = urllib.request.Request(html_doc)  
#webpage = urllib.request.urlopen(req)  
#html = webpage.read()
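#A runnable version of that fetch step might look like the sketch below (kept
#commented out so this script continues to use the inline HTML string; the URL
#is the one from the comments above and may no longer resolve):
#with urllib.request.urlopen("http://tieba.baidu.com/p/2460150866") as webpage:
#    html = webpage.read().decode("utf-8", errors="ignore")  #read() returns bytes; decode to str
#soup = BeautifulSoup(html, 'html.parser')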


html="""
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="xiaodeng"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
<a href="http://example.com/lacie" class="sister" id="xiaodeng">Lacie</a>
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, 'html.parser')


#Find the tags whose id is 'xiaodeng' and whose href matches the regex 'elsie'
content = soup.find_all(href=re.compile("elsie"), id='xiaodeng')
#print(content)  #[<a class="sister" href="http://example.com/elsie" id="xiaodeng"><!-- Elsie --></a>]

for k in content:
    print(k.get('href'))  #print the href attribute of each matched tag
    #http://example.com/elsie
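
For reference, the same two-attribute filter can also be written with find_all's attrs dictionary, or with a CSS selector via soup.select (newer bs4 versions delegate this to soupsieve); a small sketch reusing the soup object built above:

#Equivalent ways to express the same filter
via_attrs = soup.find_all(attrs={"id": "xiaodeng", "href": re.compile("elsie")})
via_css = soup.select('a#xiaodeng[href*="elsie"]')  #CSS attribute-substring selector
print(via_attrs)  #[<a class="sister" href="http://example.com/elsie" id="xiaodeng"><!-- Elsie --></a>]
print(via_css)    #same single tag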

 


Original article: http://www.cnblogs.com/dengyg200891/p/6059952.html
