
Crawling All the Campus News



Task:

1. Get the news details from a news URL: a dict, anews

2. Get the news URLs from a list-page URL: list.append(dict), alist

3. Generate the URLs of all the list pages and fetch all the news: list.extend(list), allnews

*Each student crawls the 10 list pages starting from the last digits of their student ID

4. Set a reasonable crawl interval

import time

import random

time.sleep(random.random()*3)

5. Do simple data processing with pandas and save the result

Save to a csv or excel file

newsdf.to_csv(r'F:\duym\爬虫\gzccnews.csv')
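For the excel option mentioned in step 5, pandas' to_excel works the same way (it needs the openpyxl package installed; the path here is just illustrative):

newsdf.to_excel(r'F:\duym\爬虫\gzccnews.xlsx')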

Save to a database

import sqlite3
with sqlite3.connect('gzccnewsdb.sqlite') as db:
    newsdf.to_sql('gzccnewsdb', db)


import requests
from bs4 import BeautifulSoup
from datetime import datetime
import re
import pandas as pd
import time
import random
import sqlite3


def click(url):  # fetch the click count of a news item from the count API
    newsId = re.findall(r'(\d{1,5})', url)[-1]  # the news id is the last number in the URL
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    resClick = requests.get(clickUrl)
    newsClick = int(resClick.text.split('.html')[-1].lstrip("('").rstrip("');"))
    return newsClick
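The count API returns a small piece of JavaScript, and the lstrip/rstrip chain above just peels the number out of it. A minimal sketch of that string handling, on a made-up sample response:

sample = "$('#hits').html('156');"  # hypothetical response text
print(int(sample.split('.html')[-1].lstrip("('").rstrip("');")))  # 156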


def newsdt(showinfo):  # parse the publication datetime out of the show-info text
    newsDate = showinfo.split()[0].split(':')[1]  # date part, after the "发布时间:" label
    newsTime = showinfo.split()[1]  # time part
    newsDT = newsDate + ' ' + newsTime
    dt = datetime.strptime(newsDT, '%Y-%m-%d %H:%M:%S')
    return dt
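For example, on a hypothetical show-info string (the real page text carries more fields after the timestamp):

print(newsdt('发布时间:2019-04-01 11:57:00 作者:xxx'))  # 2019-04-01 11:57:00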


def anews(url):  # 1. Get the news details from a news URL: a dict, anews
    newsDetail = {}
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsDetail['newsTitle'] = soup.select('.show-title')[0].text
    showinfo = soup.select('.show-info')[0].text
    newsDetail['newsDT'] = newsdt(showinfo)
    newsDetail['newsClick'] = click(url)  # use the parameter; the original called click(newsUrl), a global
    return newsDetail


def alist(url):  # 2. Get the news URLs from a list-page URL: list.append(dict), alist
    res = requests.get(url)  # use the parameter; the original fetched the global listUrl
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsList = []
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:  # only li items that are real news entries
            newsUrl = news.select('a')[0]['href']
            newsDesc = news.select('.news-list-description')[0].text
            newsDict = anews(newsUrl)
            newsDict['description'] = newsDesc
            newsList.append(newsDict)
    return newsList


newsUrl = 'http://news.gzcc.cn/html/2005/xiaoyuanxinwen_0710/4.html'
listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
alist(listUrl)
anews(newsUrl)  # was alist(newsUrl): a single news URL goes to anews, not alist
res = requests.get('http://news.gzcc.cn/html/xiaoyuanxinwen/')
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

for news in soup.select('li'):
    if len(news.select('.news-list-title')) > 0:
        newsUrl = news.select('a')[0]['href']
        print(anews(newsUrl))

allnews = []
for i in range(97, 107):  # 3. crawl the 10 list pages starting from the last digits of my student ID
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))

print("allnewsLength={}".format(len(allnews)))
print(allnews)
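Note that the random sleep from step 4 belongs between requests rather than after the crawl has finished; a sketch of the same page loop with the interval applied where it matters:

allnews = []
for i in range(97, 107):
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))
    time.sleep(random.random() * 3)  # pause 0-3 s between list pages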


s1 = pd.Series([100, 23, 'bugingcode'])  # a quick pandas Series demo
print(s1)
pd.Series(anews(newsUrl))  # one news dict as a Series; was pd.Series(anews), which passed the function itself
newsdf = pd.DataFrame(allnews)
for i in range(5):
    print(i)
    time.sleep(random.random() * 3)  # 4. set the crawl interval
    print(newsdf)
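Step 5 asks for some simple data processing; sorting by click count or filtering by date, for instance, works directly on the columns built above (a sketch):

print(newsdf.sort_values('newsClick', ascending=False).head())  # most-clicked news first
print(newsdf[newsdf['newsDT'] > '2019-04-01'].head())  # news published after a given date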

newsdf.to_csv(r'D:\Download\gzcc.csv', encoding='utf_8_sig')  # 5. save as csv; use utf_8_sig encoding to avoid garbled characters

with sqlite3.connect(r'D:\Download\gzccnewsdb2.sqlite') as db:  # save to an SQLite database
    newsdf.to_sql('gzccnewsdb2', db)
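To check the save, the table can be read straight back with pandas (a sketch against the same file):

with sqlite3.connect(r'D:\Download\gzccnewsdb2.sqlite') as db:
    df2 = pd.read_sql_query('SELECT * FROM gzccnewsdb2', db)
print(df2.head())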

Result

[screenshot: crawler output]

The saved files

[screenshot: the saved csv and sqlite files]

Result of opening the csv

[screenshot: the csv file opened]


Original post: https://www.cnblogs.com/hesz/p/10693323.html
