
A simple crawler, case 2

Date: 2018-12-01 23:46:10


The goal is to download the slides (PDF/PPT files) from https://web.stanford.edu/~jurafsky/slp3/:

import os
import socket
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

socket.setdefaulttimeout(20)
url = 'https://web.stanford.edu/~jurafsky/slp3/'
root = './crawl'
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
r.close()
count = 0
for link in soup.find_all("a"):
    name = link.get('href')
    if not name:  # an <a> tag without href would crash .split()
        continue
    if name.split('.')[-1].lower() in ['pdf', 'ppt', 'pptx']:
        file_url = urljoin(url, name)  # handles relative and absolute hrefs
        print(file_url)
        os.makedirs(root, exist_ok=True)
        path = os.path.join(root, name.split('/')[-1])
        if not os.path.exists(path):  # skip files already downloaded
            resp = requests.get(file_url)
            with open(path, "wb") as f:  # "with" closes the file; no f.close() needed
                f.write(resp.content)
            resp.close()
            time.sleep(1)  # be polite to the server
        count += 1
        print(count)
print("finished")
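The extension filter and URL joining inside the loop can be factored into a small helper and checked without touching the network. A minimal sketch (`pick_downloads` is a hypothetical name, not part of the original script):

```python
from urllib.parse import urljoin

def pick_downloads(base, hrefs, exts=("pdf", "ppt", "pptx")):
    """Return absolute URLs for hrefs whose file extension is in exts."""
    out = []
    for name in hrefs:
        # skip missing hrefs; compare extensions case-insensitively
        if name and name.rsplit(".", 1)[-1].lower() in exts:
            out.append(urljoin(base, name))
    return out

base = "https://web.stanford.edu/~jurafsky/slp3/"
print(pick_downloads(base, ["2.pdf", "slides/3.pptx", "index.html", None]))
# → ['https://web.stanford.edu/~jurafsky/slp3/2.pdf',
#    'https://web.stanford.edu/~jurafsky/slp3/slides/3.pptx']
```

Testing the filter this way catches edge cases (missing `href`, uppercase extensions, relative paths) before any real requests are sent.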


Original post: https://www.cnblogs.com/bernieloveslife/p/10051273.html
