
Scraping web pages with the shell command curl: downloading xls files

Posted: 2015-04-15 19:42:27

Tags: shell, curl, spider, crawler

Key points for scraping web content with the curl command:

1. curl has to mimic a browser, ideally behind a proxy as well, because many sites run some kind of anti-crawler protection (a minimal sketch follows the command list below).

2. My requirement is simple: download the exchange rates of various currencies against the US dollar from the State Administration of Foreign Exchange (SAFE) website.

http://www.safe.gov.cn/wps/portal/sy/tjsj_dmzsl

3. Main commands: curl, grep, awk, xls2txt, mysql (LOAD DATA).

curl: fetches the URLs

xls2txt: a command-line tool for reading xls files


xls2txt-0.14.tar.gz

Download: http://wizard.ae.krakow.pl/~jb/xls2txt/


mysql (LOAD DATA): imports the converted text into a MySQL database (a sketch follows the script below)
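As a minimal illustration of point 1, the sketch below shows the relevant curl flags: -A sets the User-Agent so the request looks like it comes from a browser, -e sets the Referer, and -x routes the request through a proxy. The proxy address is a placeholder, not part of the original setup.

# pretend to be Firefox, supply a referer, and (optionally) go through a proxy
# http://127.0.0.1:8118 is a placeholder proxy address
curl -s \
  -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" \
  -e "http://www.safe.gov.cn/" \
  -x "http://127.0.0.1:8118" \
  "http://www.safe.gov.cn/wps/portal/sy/tjsj_dmzsl"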


Note: one problem I have not solved is that the downloaded files keep their percent-encoded UTF-8 names instead of the decoded file names.
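One possible workaround, not in the original post: the saved names are plain percent-encoded UTF-8, so bash can decode them by turning every % into \x and letting printf '%b' expand the escapes. A sketch, assuming the name contains no literal backslashes and no percent signs that are not escapes:

# decode a percent-encoded UTF-8 file name; the sample name is made up
encoded="%E6%B1%87%E7%8E%87.xls"
decoded=$(printf '%b' "${encoded//\%/\\x}")
mv -f "$encoded" "$decoded"   # rename the download to its readable name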
Enough talk; here is a rough version, from scraping all the way to processing the xls files:


[root@DevelopServer http_curl]# cat shellspider.sh 

#!/bin/bash

THE_DATE=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$THE_DATE]:Begin  $1 $2  ......"

base_url="http://www.safe.gov.cn"
list_url="http://www.safe.gov.cn/wps/portal/sy/tjsj_dmzsl"
curl -S -e "$list_url" -w '%{http_code}\n' \
  -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" \
  -H "Host:www.safe.gov.cn" -D header1.txt \
  -b "JSESSIONID=x3nfVnDP0cGNnNf2d6GPWZ7NnGFVJCJ3pdPLl0pDjjMC31XB0YZ3\!504534437" \
  -c servercookie1.txt "$list_url" \
  | grep 'href="/wps/portal/' | grep hbdm_store | awk -F '"' '{print $4}' > list.lst
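# the pipeline above keeps only the hbdm_store links on the list page and
# writes the href value (the fourth double-quote-delimited field on these
# lines) into list.lst, one download-page path per line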

echo "http_code=$http_code"
THE_DATE=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$THE_DATE]:get list_url  ......"
sleep 3

# continue numbering after any xls files already present (0 if there are none)
index=`ls *.xls 2>/dev/null | wc -l`
 for i in `cat list.lst `
 do 
  # index=$((${index}+1))
  # xls_file="$index.xls"   

   tmp_url="$base_url$i"
   download_page_url=${tmp_url//\!/\\!}
   #echo "$download_page_url"
   THE_DATE=`date +"%Y-%m-%d %H:%M:%S"`
   echo "[$THE_DATE]:get download_page_url=$download_page_url  ......"

   tmp_url=`curl -S -e "$list_url" -w '%{http_code}' \
     -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" \
     -H "Host:www.safe.gov.cn" -D header2.txt \
     -b "JSESSIONID=x3nfVnDP0cGNnNf2d6GPWZ7NnGFVJCJ3pdPLl0pDjjMC31XB0YZ3\!504534436" \
     -c servercookie2.txt "$download_page_url" \
     | grep urlArr | grep '/wps/wcm/connect/' | grep '.xls' | awk -F "'" '{print $2}'`
   echo "http_code=$http_code"

   download_url=${tmp_url//\!/\\!}
   download_url="$base_url$download_url"
    #echo "$download_url"
   THE_DATE=`date +"%Y-%m-%d %H:%M:%S"`
   echo "[$THE_DATE]:get download_url=$download_url  ......"

    sleep 1
    curl -S -e "$list_url" -w '%{http_code}' \
      -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0" \
      -H "Host:www.safe.gov.cn" -D header3.txt \
      -b "JSESSIONID=x3nfVnDP0cGNnNf2d6GPWZ7NnGFVJCJ3pdPLl0pDjjMC31XB0YZ3\!504534435" \
      -c servercookie3.txt -O "$download_url" > ok.code
    cat ok.code

    echo "http_code=$http_code"
   THE_DATE=`date +"%Y-%m-%d %H:%M:%S"`
   echo "[$THE_DATE]:end download_url=$download_url  ......"

   sleep 3
 done

  # rename the percent-encoded downloads to 1.xls, 2.xls, ... and peek at each one
  for i in *%*
  do
   index=$((index+1))
   xls_file="$index.xls"
      ls -lt "$i"
      mv -f "$i" "$xls_file"
      xls2txt -n 0 "$xls_file" | head
  done

THE_DATE=`date +"%Y-%m-%d %H:%M:%S"`
echo "[$THE_DATE]:End  $1 $2  ......"

[root@DevelopServer http_curl]# 
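The mysql (LOAD DATA) step mentioned above is not part of the script. A minimal sketch of how the converted sheets might be loaded follows; the database name, the "rates" table and its columns, the tab separator, and the one-line header are all assumptions about the sheet layout, so adjust them to the real data.

# convert one downloaded sheet to text first (separator/layout are assumptions)
xls2txt -n 0 1.xls > 1.txt

# then bulk-load it; exchange_db and rates are hypothetical names
mysql -u root -p exchange_db <<'EOF'
LOAD DATA LOCAL INFILE '1.txt'
INTO TABLE rates
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(currency, rate_date, usd_rate);
EOF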



Original article: http://blog.csdn.net/rookie_ceo/article/details/45061779
