A Toutiao (今日头条) Crawler

Web Crawlers

2019-8-26


Lately I have been studying Python's Scrapy framework and writing quite a few small examples. As the saying goes, to do good work one must first sharpen one's tools. Today's exercise, crawling the technology section of Toutiao news, is a chance to put that tool to work.
This tutorial depends on the scrapy and pymongo modules; install those environment dependencies first.
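The two dependencies can be installed with pip beforehand:

```shell
# Install the two modules the tutorial relies on
pip install scrapy pymongo
```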

  • 1. Analyze the Toutiao news API endpoint
      {
    "has_more": false,
    "message": "success",
    "data": [
      {
        "chinese_tag": "财经",
        "media_avatar_url": "//p3.pstatp.com/large/1233000741099c9f4a59",
        "is_feed_ad": false,
        "tag_url": "news_finance",
        "title": "【特写】数字货币的信徒们",
        "single_mode": true,
        "middle_mode": true,
        "abstract": "在九月初在中国发文整治ICO后,硅谷的区块链项目创业者林吓洪把筹集的资金全部还给了中国投资者们。在那次整治中,监管部门宣布,首次代币发行(Initial Coin Offering,简称ICO)属于非法行为,所有平台必须返还筹集的资金。",
        "tag": "news_finance",
        "label": [
          "数字货币",
          "风投",
          "比特币",
          "投资",
          "经济"
        ],
        "behot_time": 1506326903,
        "source_url": "/group/6469550301866803469/",
        "source": "界面新闻",
        "more_mode": false,
        "article_genre": "article",
        "image_url": "//p1.pstatp.com/list/190x124/317200041ea1cf451f52",
        "has_gallery": false,
        "group_source": 1,
        "comments_count": 10,
        "group_id": "6469550301866803469",
        "media_url": "/c/user/52857496566/"
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/31770009f2c887fdb867",
        "single_mode": true,
        "abstract": "早,来看看今天的新闻。小米就校招风波道歉@DoNews【小米就校招风波道歉 对涉事员工通报批评】近日,一名自称在河南郑州大学日语专业学习的大学生表示,她与同学在一次校园招聘宣讲会上无故被来自小米公司的主管人员讽刺。导致自己和本专业的同学愤然离开。",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_tech",
        "label": [
          "小米科技",
          "亚马逊公司",
          "Uber",
          "美国",
          "乐视"
        ],
        "tag_url": "news_tech",
        "title": "小米就校招风波道歉;ofo正寻求新一轮融资",
        "chinese_tag": "科技",
        "source": "虎嗅APP",
        "group_source": 1,
        "has_gallery": false,
        "media_url": "/c/user/3358265611/",
        "media_avatar_url": "//p2.pstatp.com/large/18a50010126f235bf938",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/31770009f2c887fdb867"
          },
          {
            "url": "//p1.pstatp.com/list/317b00061c410d6d0352"
          },
          {
            "url": "//p3.pstatp.com/list/3172000337e0332b337f"
          }
        ],
        "source_url": "/group/6469472579270672654/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506326303,
        "comments_count": 114,
        "group_id": "6469472579270672654"
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/3c64000074857b07c81d",
        "single_mode": true,
        "abstract": "蓝燕,经常关注香港电影的人应该不陌生,在2011年靠着香港三级影片《3D肉蒲团之极乐宝鉴》走红,并逐渐出现人们的视线中。被称为新一代的“艳星”。可走红后的她并没有获得很好的资源,所接拍的影片大多数是一些不知名的配角。",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_entertainment",
        "label": [
          "蓝燕 ",
          "肉蒲团",
          "投资",
          "娱乐"
        ],
        "tag_url": "news_entertainment",
        "title": "艳星蓝燕美照曝光 靠着《3D肉蒲团》走红",
        "chinese_tag": "娱乐",
        "source": "陪你乐不停",
        "group_source": 2,
        "has_gallery": false,
        "media_url": "/c/user/61497461135/",
        "media_avatar_url": "//p3.pstatp.com/large/382f000f5dd459d0eb74",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3c64000074857b07c81d"
          },
          {
            "url": "//p3.pstatp.com/list/3c6000022fcec3f4ca48"
          },
          {
            "url": "//p3.pstatp.com/list/3c60000230155491a84d"
          }
        ],
        "source_url": "/group/6469578595697164813/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506325703,
        "comments_count": 2,
        "group_id": "6469578595697164813"
      },
      {
        "log_extra": "{\"ad_price\":\"Wci5d__iJRJZyLl3_-IlEuQYjwGdUeJEIl99Ew\",\"convert_id\":0,\"external_action\":0,\"req_id\":\"201709251608231720180471641841E3\",\"rit\":1}",
        "image_url": "//p3.pstatp.com/large/26c00009898dbc9c5a52",
        "read_count": 12196,
        "ban_comment": 1,
        "single_mode": true,
        "abstract": "",
        "image_list": [],
        "has_video": false,
        "article_type": 1,
        "tag": "ad",
        "display_info": "股市迎来重磅利好消息,这些股或将上涨翻倍,微信领取",
        "has_m3u8_video": 0,
        "label": "广告",
        "user_verified": 0,
        "aggr_type": 1,
        "expire_seconds": 314754930,
        "cell_type": 0,
        "article_sub_type": 0,
        "group_flags": 4096,
        "bury_count": 0,
        "title": "股市迎来重磅利好消息,这些股或将上涨翻倍,微信领取",
        "ignore_web_transform": 1,
        "source_icon_style": 3,
        "tip": 0,
        "hot": 0,
        "share_url": "http://m.toutiao.com/group/6465452273144168717/?iid=0&app=news_article",
        "has_mp4_video": 0,
        "source": "联讯证券",
        "comment_count": 0,
        "article_url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "filter_words": [
          {
            "id": "1:74",
            "name": "股票",
            "is_selected": false
          },
          {
            "id": "1:6",
            "name": "金融保险",
            "is_selected": false
          },
          {
            "id": "2:0",
            "name": "来源:联讯证券",
            "is_selected": false
          },
          {
            "id": "4:2",
            "name": "看过了",
            "is_selected": false
          }
        ],
        "has_gallery": false,
        "publish_time": 1505355414,
        "ad_id": 69048936405,
        "action_list": [
          {
            "action": 1,
            "extra": {},
            "desc": ""
          },
          {
            "action": 3,
            "extra": {},
            "desc": ""
          },
          {
            "action": 7,
            "extra": {},
            "desc": ""
          },
          {
            "action": 9,
            "extra": {},
            "desc": ""
          }
        ],
        "has_image": false,
        "cell_layout_style": 1,
        "tag_id": 6465452273144168717,
        "source_url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "video_style": 0,
        "verified_content": "",
        "is_feed_ad": true,
        "large_image_list": [],
        "item_id": 6465452273144168717,
        "natant_level": 2,
        "tag_url": "search/?keyword=None",
        "article_genre": "ad",
        "level": 0,
        "cell_flag": 10,
        "source_open_url": "sslocal://search?from=channel_source&keyword=%E8%81%94%E8%AE%AF%E8%AF%81%E5%88%B8",
        "display_url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "digg_count": 0,
        "behot_time": 1506325103,
        "article_alt_url": "http://m.toutiao.com/group/article/6465452273144168717/",
        "cursor": 1506325103999,
        "url": "http://cq3.ilyae.cn/toutiao2/index.html",
        "preload_web": 0,
        "ad_label": "广告",
        "user_repin": 0,
        "label_style": 3,
        "item_version": 0,
        "group_id": "6465452273144168717",
        "middle_image": {
          "url": "http://p3.pstatp.com/large/26c00009898dbc9c5a52",
          "width": 456,
          "url_list": [
            {
              "url": "http://p3.pstatp.com/large/26c00009898dbc9c5a52"
            },
            {
              "url": "http://pb9.pstatp.com/large/26c00009898dbc9c5a52"
            },
            {
              "url": "http://pb1.pstatp.com/large/26c00009898dbc9c5a52"
            }
          ],
          "uri": "large/26c00009898dbc9c5a52",
          "height": 256
        }
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/3b050002710aff2b3422",
        "single_mode": true,
        "abstract": "如今2017年微信的月活跃用户达9亿,微信成了中国最大用户群体的手机APP,它集通讯、娱乐、支付等于一体。很多朋友习惯每天打开微信收发信息、查看朋友圈动态。",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_tech",
        "label": [
          "移动互联网",
          "微信",
          "泽西岛",
          "美女",
          "欧洲"
        ],
        "tag_url": "news_tech",
        "title": "为什么微信中那么多美女来自安道尔或泽西岛?这是一种暗语吗",
        "chinese_tag": "科技",
        "source": "狮子夜光杯",
        "group_source": 2,
        "has_gallery": false,
        "media_url": "/c/user/53397416061/",
        "media_avatar_url": "//p3.pstatp.com/large/12330013573aaa4c18b1",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3b050002710aff2b3422"
          },
          {
            "url": "//p3.pstatp.com/list/3b05000271096e15298e"
          },
          {
            "url": "//p9.pstatp.com/list/3b080000bdf469bf7330"
          }
        ],
        "source_url": "/group/6467319367565574670/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506324503,
        "comments_count": 46,
        "group_id": "6467319367565574670"
      },
      {
        "image_url": "//p3.pstatp.com/list/190x124/3b0f0003c132eb485453",
        "single_mode": true,
        "abstract": "最近几周,各大互联网科技公司都开始秋季招聘了这些是正经的公司的招聘笔试题:关于c++的inline关键字,以下说法正确的是()对N个数进行排序,在各自最优条件下以下算法复杂度最低的是()为百度设计一款新产品,可以结合百度现有的优势和资源,专注解决大学生用户的某个需求痛点,请给出主",
        "middle_mode": false,
        "more_mode": true,
        "tag": "news_design",
        "label": [
          "电子商务",
          "京东",
          "面试",
          "刘强东",
          "计算复杂性理论"
        ],
        "tag_url": "search/?keyword=%E8%AE%BE%E8%AE%A1",
        "title": "京东校招笔试题“如何用0.01元买到一瓶可乐”?竟被苏宁秀了一脸",
        "chinese_tag": "设计",
        "source": "小禾科技",
        "group_source": 2,
        "has_gallery": false,
        "media_url": "/c/user/59954335187/",
        "media_avatar_url": "//p9.pstatp.com/large/39b10003f6cddd5128fa",
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3b0f0003c132eb485453"
          },
          {
            "url": "//p3.pstatp.com/list/3b110000ab4c79a56483"
          },
          {
            "url": "//p9.pstatp.com/list/3b1600007cde1cf9bdd0"
          }
        ],
        "source_url": "/group/6468140283245625870/",
        "article_genre": "article",
        "is_feed_ad": false,
        "behot_time": 1506323903,
        "comments_count": 87,
        "group_id": "6468140283245625870"
      },
      {
        "chinese_tag": "科技",
        "media_avatar_url": "//p9.pstatp.com/large/2c6600049c7144303824",
        "is_feed_ad": false,
        "tag_url": "news_tech",
        "title": "为什么家里的WIFI时快时慢?竟然是因为……",
        "single_mode": true,
        "middle_mode": false,
        "abstract": "现在还是个信息的时代,不仅手机、电脑非常普遍,而且现在的人们都喜欢用无线网络之WiFi,因为这样更加便捷。在家使用手机的时候,不用打开手机的数据流量,只要使用WiFi就可以了,无限的流量使用,太方便了。但是很多用户都会有这样的体验,WiFi速度时快时慢的,很是烦恼。",
        "group_source": 2,
        "image_list": [
          {
            "url": "//p3.pstatp.com/list/3b1600009ba8a7500c7e"
          },
          {
            "url": "//p1.pstatp.com/list/3b1600009bb32db8a78a"
          },
          {
            "url": "//p3.pstatp.com/list/3b120000c5dac40ae0fe"
          }
        ],
        "label": [
          "Wi-Fi",
          "科技"
        ],
        "behot_time": 1506323303,
        "source_url": "/group/6468146583144759822/",
        "source": "水电小知识",
        "more_mode": true,
        "article_genre": "article",
        "image_url": "//p3.pstatp.com/list/190x124/3b1600009ba8a7500c7e",
        "tag": "news_tech",
        "has_gallery": false,
        "group_id": "6468146583144759822",
        "media_url": "/c/user/61795844218/"
      }
    ],
    "next": {
      "max_behot_time": 1506323303
      }
    }
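Before writing the spider, it helps to confirm which fields matter in this response. A quick standalone sketch, using a trimmed-down stand-in for the feed shown above, pulls out each real article's title and the pagination cursor (note that ad entries carry `is_feed_ad: true` and a different schema):

```python
import json

# A trimmed stand-in for the feed response shown above.
raw = '''
{
  "message": "success",
  "data": [
    {"title": "【特写】数字货币的信徒们", "chinese_tag": "财经", "is_feed_ad": false},
    {"title": "股市迎来重磅利好消息", "label": "广告", "is_feed_ad": true}
  ],
  "next": {"max_behot_time": 1506323303}
}
'''

feed = json.loads(raw)

# Skip ad entries; real articles have is_feed_ad == False.
titles = [row["title"] for row in feed["data"] if not row["is_feed_ad"]]
cursor = feed["next"]["max_behot_time"]   # drives the next request

print(titles)  # ['【特写】数字货币的信徒们']
print(cursor)  # 1506323303
```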
    
  • 2. Analyze the request parameters and the pagination loop:
    • The tech-news data endpoint is a GET request carrying the following query parameters:
      category:news_tech
      utm_source:toutiao
      widen:1
      max_behot_time:0
      max_behot_time_tmp:0
      tadrequire:true
      as:A155493CA8EBB0F
      cp:59C84BEB601F7E1
    
    • Scroll the page so another asynchronous request fires, then compare the parameters: only a few of them change. The previous response's next->max_behot_time field is exactly the new value of both max_behot_time and max_behot_time_tmp. The as and cp parameters have little effect on the GET request, so you can simply reuse values captured from any one request. As for what max_behot_time means, I believe it is the current timestamp; since the data already hands it to us there is no need to guess. Packet-capture analysis is often exactly this process of guessing what API parameters mean, and you can verify it yourself:
      max_behot_time:1506326351
      max_behot_time_tmp:1506326351
      as:A115996C383BD3C
      cp:59C82BAD839CBE1
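Putting these observations together, the next request's URL can be built from the previous response's cursor. Below is a minimal sketch; `build_feed_url` is a hypothetical helper, and the default `as`/`cp` values are simply the ones captured in step 2:

```python
from urllib.parse import urlencode

BASE = "https://www.toutiao.com/api/pc/feed/"

def build_feed_url(max_behot_time, as_val="A155493CA8EBB0F", cp_val="59C84BEB601F7E1"):
    """Build the tech-news feed URL for the given pagination cursor."""
    params = {
        "category": "news_tech",
        "utm_source": "toutiao",
        "widen": 1,
        "max_behot_time": max_behot_time,       # cursor from next->max_behot_time
        "max_behot_time_tmp": max_behot_time,   # always mirrors max_behot_time
        "tadrequire": "true",
        "as": as_val,
        "cp": cp_val,
    }
    return BASE + "?" + urlencode(params)

print(build_feed_url(0))
```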
    
  • 3. Build the request URLs:
    • The scrapy project's directory structure is shown below:
      (project structure diagram)
    • settings.py source:
  # -*- coding: utf-8 -*-
# Scrapy settings for todayNews project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'todayNews'

SPIDER_MODULES = ['todayNews.spiders']
NEWSPIDER_MODULE = 'todayNews.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'todayNews (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept':'text/javascript, text/html, application/xml, text/xml, */*',
    'Accept-Encoding':'gzip, deflate, sdch, br',
    'Accept-Language':'zh-CN,zh;q=0.8',
    'Cache-Control':'no-cache',
    'Connection':'keep-alive',
    'Content-Type':'application/x-www-form-urlencoded',
    'Cookie':'uuid="w:3db0708ea2c549fab1a5371c56f16176"; UM_distinctid=15c7147fecd8d-0a4277451-4349052c-100200-15c7147fecf6f; csrftoken=af9a5a0d4cd30794e6c04511ca9f31eb; _ga=GA1.2.312467779.1496549163; __guid=32687416.738502311042654200.1505560389379.9048; tt_track_id=c7baa73a99ec9787ead7a2f6b01ff56b; _ba=BA0.2-20170923-51d9e-ErxmsyZIIoxNOzZgf6Us; tt_webid=6427627096743282178; WEATHER_CITY=%E5%8C%97%E4%BA%AC; CNZZDATA1259612802=610804389-1496543540-null%7C1506261975; __tasessionId=0vta7k1uc1506263833592; tt_webid=6427627096743282178',
    'Host':'www.toutiao.com',
    'Pragma':'no-cache',
    'Referer':'https://www.toutiao.com/ch/news_tech/',
    'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    'X-Requested-With':'XMLHttpRequest'
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'todayNews.middlewares.TodaynewsSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'todayNews.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {  
   'todayNews.pipelines.MongoPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
DOWNLOAD_DELAY = 1   
MONGO_URI="localhost"
MONGO_DATABASE="toutiao"
MONGO_USER="username"
MONGO_PASS="password"
  • pipelines.py source:
  # -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


import pymongo

class MongoPipeline(object):
    collection_name = "science"

    def __init__(self, mongo_uri, mongo_db, mongo_user, mongo_pass):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db
        self.mongo_user = mongo_user
        self.mongo_pass = mongo_pass

    @classmethod
    def from_crawler(cls, crawler):
        # Pull the MongoDB connection settings from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE'),
            mongo_user=crawler.settings.get('MONGO_USER'),
            mongo_pass=crawler.settings.get('MONGO_PASS'))

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]
        self.db.authenticate(self.mongo_user, self.mongo_pass)

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # To deduplicate instead, upsert on a unique key:
        # self.db[self.collection_name].update({'url_token': item['url_token']}, {'$set': dict(item)}, True)
        self.db[self.collection_name].insert_one(dict(item))
        return item
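The commented-out update() call hints at deduplication. With pymongo 3+, the same idea is expressed with update_one(..., upsert=True), keyed here on the feed's group_id field; upsert_item is a hypothetical helper, not part of the original pipeline:

```python
def upsert_item(collection, item):
    """Insert the item, or overwrite the stored document with the same group_id.

    `collection` is any pymongo Collection; repeated crawls of the same
    article then update one document instead of inserting duplicates.
    """
    doc = dict(item)
    collection.update_one({"group_id": doc["group_id"]},  # match on the unique key
                          {"$set": doc},
                          upsert=True)                    # insert if no match exists
    return doc
```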
  • toutiao.py source:
  # -*- coding: utf-8 -*-
from scrapy import Spider, Request
import json
import logging
from todayNews.items import TodaynewsItem

class ToutiaoSpider(Spider):
    name = "toutiao"
    allowed_domains = ["www.toutiao.com"]
    start_urls = ['https://www.toutiao.com/api/pc/feed/?min_behot_time=0&category=__all__&utm_source=toutiao&widen=1&tadrequire=true&as=A1D5394CB72C38F&cp=59C71C03883F0E1']
    url = 'https://www.toutiao.com/api/pc/feed/?category=news_tech&utm_source=toutiao&widen=1&max_behot_time={behot_time}&max_behot_time_tmp={behot_time_tmp}&tadrequire=true&as=A165E92C97CC487&cp=59C74CC4E8F7BE1'

    def parse(self, response):
        jsonData = json.loads(response.body.decode("utf-8"))
        if jsonData["message"] == 'success':
            # Yield each raw feed entry straight to the pipeline
            for rowData in jsonData["data"]:
                yield rowData
            # The cursor for the next page comes from this response
            nextTime = jsonData["next"]["max_behot_time"]
            yield Request(url=self.url.format(behot_time=nextTime, behot_time_tmp=nextTime),
                          callback=self.parse)
        else:
            logging.info("The Data is null")
      
  • items defines the structured extraction, but because the JSON returned by Toutiao is not uniform (see the sample data above), no item fields are defined for extraction here; the raw items are passed straight to the pipeline and stored in MongoDB.
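If you do want a uniform schema despite the mixed feed, one option is to whitelist the fields that regular (non-ad) articles reliably share before storage. normalize_row is a hypothetical helper, not part of the project above:

```python
# Fields that regular (non-ad) articles in the feed reliably carry.
WANTED = ("group_id", "title", "abstract", "source", "tag",
          "chinese_tag", "behot_time", "comments_count", "source_url")

def normalize_row(row):
    """Return a copy of row restricted to the WANTED keys that are present."""
    return {k: row[k] for k in WANTED if k in row}
```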
  • 4. Start the spider and inspect the crawled data

    (screenshot of the stored data)

Done.

Author: Evtion