
Web Scraping: Lesson 1

Posted: 2018-03-26 16:53:22


1. Downloading a Novel

The novel is hosted at: http://www.biqukan.com

import requests
from bs4 import BeautifulSoup

class downloader(object):

    def __init__(self):
        self.url = 'http://www.biqukan.com/1_1408/'   # chapter index page of the novel
        self.server = 'http://www.biqukan.com'        # site root, used to build absolute links
        self.page_url = []    # chapter links
        self.page_name = []   # chapter titles

    # Collect each chapter's link and title from the index page
    def get_page_url(self):
        html = requests.get(self.url)
        soup = BeautifulSoup(html.text, 'lxml')
        url_list = soup.find_all('div', class_='listmain')
        url_list = BeautifulSoup(str(url_list[0]), 'lxml')
        a = url_list.find_all('a')
        for each in a[12:]:   # the first 12 <a> tags are not chapter links, so skip them
            self.page_url.append(self.server + each.get('href'))
            self.page_name.append(each.string)
  
    # Fetch the text content of one chapter page
    def get_html(self, url):
        html = requests.get(url)
        soup = BeautifulSoup(html.text, 'lxml')
        content = soup.find_all('div', class_='showtxt')
        content = content[0].text
        content = content.replace('<br/><br/>', '\n\n')
        return content
    
    # Append one chapter to a txt file
    def writer(self, path, name, text):
        with open(path, 'a', encoding='utf-8') as f:
            f.write(name + '\n')
            f.write(text)
            f.write('\n\n')
        
        
if __name__ == '__main__':
    dl = downloader()     # instantiate the class
    dl.get_page_url()     # fetch the chapter titles and URLs
    name = dl.page_name   # chapter titles collected above
    url = dl.page_url     # chapter URLs collected above
    for i in range(len(name)):
        dl.writer('小说.txt', name[i], dl.get_html(url[i]))
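The heart of get_page_url is pulling links out of the chapter list with BeautifulSoup. A minimal offline sketch of the same technique, run against a hand-written HTML snippet (the snippet, chapter names, and paths below are illustrative stand-ins, not real data from biqukan.com; html.parser is used so no lxml install is needed):

```python
from bs4 import BeautifulSoup

# A hand-written stand-in for the site's chapter list (illustrative only).
html = """
<div class="listmain">
  <a href="/1_1408/100.html">Chapter 1</a>
  <a href="/1_1408/101.html">Chapter 2</a>
</div>
"""

server = 'http://www.biqukan.com'
soup = BeautifulSoup(html, 'html.parser')
listmain = soup.find('div', class_='listmain')

# Same loop as in get_page_url: absolute link + chapter title per <a> tag
page_url, page_name = [], []
for each in listmain.find_all('a'):
    page_url.append(server + each.get('href'))
    page_name.append(each.string)

print(page_name)     # ['Chapter 1', 'Chapter 2']
print(page_url[0])   # http://www.biqukan.com/1_1408/100.html
```

Working against a fixed snippet like this is a handy way to debug the selector logic before pointing the scraper at the live site.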

 


Original article: https://www.cnblogs.com/slowlyslowly/p/8651082.html
