Link Extraction with LinkExtractor and CrawlSpider, the Site-Wide Crawling Workhorse
LinkExtractor
For link extraction, we saw earlier that a Selector can do the job, but Selectors are best suited to cases where the links to extract are few and follow a simple, fixed pattern. Scrapy provides a dedicated link-extraction tool, scrapy.linkextractors.LinkExtractor, which is better suited to extracting links across an entire site and only needs to be declared once to be reused many times. First, let's look at the constructor parameters of LinkExtractor:
LinkExtractor(allow=(), deny=(), allow_domains=(), deny_domains=(), deny_extensions=None, restrict_xpaths=(), restrict_css=(), tags=('a', 'area'), attrs=('href', ), canonicalize=False, unique=True, process_value=None, strip=True)
Let's walk through the parameters one by one with examples. First, build a sample response to experiment with:
import scrapy
from scrapy.linkextractors import LinkExtractor

body = """
<!DOCTYPE html>
<html>
 <head></head>
 <body>
  <div class='scrapyweb'>
   <p>Scrapy main site</p>
   <a href="https://scrapy.org/download/">Download</a>
   <a href="https://scrapy.org/doc/">Doc</a>
   <a href="https://scrapy.org/resources/">Resources</a>
  </div>
  <div class='scrapydoc'>
   <p>Scrapy documentation</p>
   <a href="https://docs.scrapy.org/en/latest/intro/overview.html">Scrapy at a glance</a>
   <a href="https://docs.scrapy.org/en/latest/intro/install.html">Installation guide</a>
   <a href="https://docs.scrapy.org/en/latest/intro/tutorial.html">Scrapy Tutorial</a>
   <a href="https://github.com/scrapy/scrapy/blob/1.5/docs/topics/media-pipeline.rst">Docs in github</a>
  </div>
 </body>
</html>
""".encode('utf8')
response = scrapy.http.HtmlResponse(url='', body=body)
allow: a regular expression or a list of regular expressions; only links whose URL matches one of them are extracted. If it is not given, all links are extracted.
>>> pattern = r'/intro/\w+'
>>> link_extractor = LinkExtractor(allow=pattern)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://docs.scrapy.org/en/latest/intro/overview.html', text='Scrapy at a glance', fragment='', nofollow=False),
 Link(url='https://docs.scrapy.org/en/latest/intro/install.html', text='Installation guide', fragment='', nofollow=False),
 Link(url='https://docs.scrapy.org/en/latest/intro/tutorial.html', text='Scrapy Tutorial', fragment='', nofollow=False)]
deny: a regular expression or a list of regular expressions, the opposite of allow; links whose URL matches one of them are not extracted.
>>> pattern = r'/intro/\w+'
>>> link_extractor = LinkExtractor(deny=pattern)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False),
 Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False),
 Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
allow_domains: a domain or a list of domains; only links under these domains are extracted.
>>> allow_domain = 'docs.scrapy.org'
>>> link_extractor = LinkExtractor(allow_domains=allow_domain)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://docs.scrapy.org/en/latest/intro/overview.html', text='Scrapy at a glance', fragment='', nofollow=False),
 Link(url='https://docs.scrapy.org/en/latest/intro/install.html', text='Installation guide', fragment='', nofollow=False),
 Link(url='https://docs.scrapy.org/en/latest/intro/tutorial.html', text='Scrapy Tutorial', fragment='', nofollow=False)]
deny_domains: a domain or a list of domains; links under these domains are not extracted.
>>> deny_domain = 'docs.scrapy.org'
>>> link_extractor = LinkExtractor(deny_domains=deny_domain)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False),
 Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False),
 Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
deny_extensions: a string or a list of strings; links whose file extension is in the list are not extracted. If it is not given, Scrapy's default extension list is used (shown after the example below).
>>> deny_extensions = 'html'
>>> link_extractor = LinkExtractor(deny_extensions=deny_extensions)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False),
 Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False),
 Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False),
 Link(url='https://github.com/scrapy/scrapy/blob/1.5/docs/topics/media-pipeline.rst', text='Docs in github', fragment='', nofollow=False)]
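The default extension blacklist ships with Scrapy as scrapy.linkextractors.IGNORED_EXTENSIONS, so you can check in the shell which suffixes are filtered out of the box; for instance, image files are ignored by default while .html pages are not:

>>> from scrapy.linkextractors import IGNORED_EXTENSIONS
>>> 'jpg' in IGNORED_EXTENSIONS, 'html' in IGNORED_EXTENSIONS
(True, False)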
restrict_xpaths: an XPath expression or a list of XPath expressions; only links found inside the regions selected by these expressions are extracted.
>>> xpath = '//div[@class="scrapyweb"]'
>>> link_extractor = LinkExtractor(restrict_xpaths=xpath)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False),
 Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False),
 Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
restrict_css: the same as restrict_xpaths, but using CSS selectors.
>>> css = 'div.scrapyweb'
>>> link_extractor = LinkExtractor(restrict_css=css)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='https://scrapy.org/download/', text='Download', fragment='', nofollow=False),
 Link(url='https://scrapy.org/doc/', text='Doc', fragment='', nofollow=False),
 Link(url='https://scrapy.org/resources/', text='Resources', fragment='', nofollow=False)]
tags: a tag or a list of tags; links are extracted only from these tags. Defaults to ('a', 'area').
attrs: an attribute or a list of attributes; links are extracted only from these attributes. Defaults to ('href',). The example below exercises tags and attrs together:
>>> body = b"""<img src="http://p0.so.qhmsg.com/bdr/326__/t010ebf2ec5ab7eed55.jpg"/>"""
>>> response = scrapy.http.HtmlResponse(url='', body=body)
>>> tag = 'img'
>>> attr = 'src'
>>> link_extractor = LinkExtractor(tags=tag, attrs=attr, deny_extensions='')  # .jpg is filtered by the default deny_extensions, so clear it here
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='http://p0.so.qhmsg.com/bdr/326__/t010ebf2ec5ab7eed55.jpg', text='', fragment='', nofollow=False)]
process_value: a callable applied to every extracted value; it must return either a processed link or None to ignore that value. The default is lambda x: x.
>>> from urllib.parse import urljoin
>>> def process(href):
...     return urljoin('http://example.com', href)
...
>>> body = b"""<a href="example.html"/>"""
>>> response = scrapy.http.HtmlResponse(url='', body=body)
>>> link_extractor = LinkExtractor(process_value=process)
>>> links = link_extractor.extract_links(response)
>>> links
[Link(url='http://example.com/example.html', text='', fragment='', nofollow=False)]
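A typical use of process_value (it is also the example given in the Scrapy documentation) is digging real URLs out of javascript: hrefs. The sketch below is only an illustration, with a made-up page and regular expression; it should yield a single link pointing at http://example.com/other/page.html:

>>> import re
>>> def get_js_url(value):
...     m = re.search(r"javascript:goToPage\('(.*?)'", value)
...     if m:
...         return m.group(1)  # hand back only the URL inside the JS call
...     return value           # leave every other value untouched
...
>>> body = b"""<a href="javascript:goToPage('../other/page.html'); return false">Other page</a>"""
>>> response = scrapy.http.HtmlResponse(url='http://example.com/a/index.html', body=body)
>>> link_extractor = LinkExtractor(process_value=get_js_url)
>>> links = link_extractor.extract_links(response)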
Now let's apply a LinkExtractor to the spider from the second post in this series, "How to Write a Spider", using it to extract the next-page link, so we can see how LinkExtractor is used inside a real Scrapy project. The modified quotes spider looks like this:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor

from ..items import QuoteItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']
    link_extractor = LinkExtractor(allow=r'/page/\d+/', restrict_css='li.next')  # declare one LinkExtractor and reuse it on every page

    # def start_requests(self):
    #     url = "http://quotes.toscrape.com/"
    #     yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        quote_selector_list = response.css('body > div > div:nth-child(2) > div.col-md-8 div.quote')
        for quote_selector in quote_selector_list:
            quote = quote_selector.css('span.text::text').extract_first()
            author = quote_selector.css('span small.author::text').extract_first()
            tags = quote_selector.css('div.tags a.tag::text').extract()
            yield QuoteItem({'quote': quote, 'author': author, 'tags': tags})

        links = self.link_extractor.extract_links(response)  # extract the next-page link
        if links:
            yield scrapy.Request(links[0].url, callback=self.parse)
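Assuming the project layout from that earlier post (a QuoteItem declared in items.py), the spider is run exactly as before, for example:

scrapy crawl quotes -o quotes.json

The only change is that parse now feeds the extracted next-page link back into itself, so the crawl keeps going until there is no li.next element left to extract.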
LinkExtractor is most often used together with CrawlSpider; we will see how once we get to CrawlSpider below.
CrawlSpider
Besides the basic Spider class, Scrapy provides a more powerful one, CrawlSpider. CrawlSpider is built on top of Spider and is designed for whole-site crawling, which makes it a good fit for regularly structured sites such as JD.com or Zhihu. CrawlSpider uses LinkExtractor to define rules for following URLs, so if you intend to pull URLs out of the pages you fetch and keep crawling them, CrawlSpider is an excellent choice.
Let's first look at scrapy.spiders.CrawlSpider. CrawlSpider adds two new attributes:
- rules: a list of Rule objects that define how links are extracted from crawled pages and how they are processed;
- parse_start_url(response): handles the responses of the start URLs; it returns an empty list by default, and subclasses can override it to return Item objects, Request objects, or an iterable of either (a minimal sketch follows this list).
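As a minimal sketch of parse_start_url (the spider name and the yielded field are invented for illustration), overriding it lets a CrawlSpider also scrape something from the start page itself, since the start page's response is never handled by a Rule callback:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class StartPageSpider(CrawlSpider):
    name = 'start_page_demo'  # hypothetical spider
    start_urls = ['http://books.toscrape.com/']

    rules = (
        Rule(LinkExtractor(allow=r'page-\d+\.html')),  # just follow pagination
    )

    def parse_start_url(self, response):
        # The start page goes through this method instead of a Rule callback,
        # so any data on it has to be extracted here.
        yield {'start_page_title': response.css('title::text').extract_first()}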
Now let's look at Rule:
scrapy.spiders.Rule(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None)
- link_extractor: a LinkExtractor object;
- callback: the callback for responses of the pages reached through the extracted links; it receives a Response and returns a list of Item and/or Request objects (or an iterable of them). Do not use parse as the callback, because CrawlSpider implements its own logic in parse;
- cb_kwargs: a dict passed to the callback as **kwargs;
- follow: whether to follow links found in those responses; if callback is None, follow defaults to True, otherwise it defaults to False;
- process_links: a callable invoked on each list of links extracted by link_extractor, typically used to filter or preprocess the links (a sketch showing cb_kwargs and process_links in use follows this list);
- process_request: a callable invoked for every Request generated from an extracted link; it must return a Request or None.
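To make cb_kwargs and process_links concrete, here is a small sketch; the spider, the helper method, and the extra currency argument are all invented for illustration, and the selectors simply reuse those from the full books.toscrape.com example that follows:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class FilteredBookSpider(CrawlSpider):  # hypothetical spider
    name = 'filtered_books'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    rules = (
        # Detail pages: call parse_item with an extra keyword argument
        Rule(LinkExtractor(allow=r'catalogue/[\w\-\d]+/index.html'),
             callback='parse_item', cb_kwargs={'currency': 'GBP'}, follow=False),
        # Pagination: filter the extracted links before they are followed
        Rule(LinkExtractor(allow=r'page-\d+\.html'), process_links='skip_some_pages'),
    )

    def skip_some_pages(self, links):
        # process_links receives the full list of links a LinkExtractor found
        # and must return the (possibly filtered) list that will be followed.
        return [link for link in links if 'page-13' not in link.url]

    def parse_item(self, response, currency):
        yield {
            'title': response.css('div.product_main h1::text').extract_first(),
            'price': response.css('div.product_main p.price_color::text').extract_first(),
            'currency': currency,  # comes from cb_kwargs
        }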
Now let's crawl http://books.toscrape.com/ and collect each book's title and price, to see CrawlSpider in action. The code is as follows:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class BookToscrapeSpider(CrawlSpider):
    name = 'book_toscrape'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    rules = (
        # Detail pages: parse them, but don't follow links from them
        Rule(LinkExtractor(allow=r'catalogue/[\w\-\d]+/index.html'), callback='parse_item', follow=False),
        # Next-page links: no callback, so follow defaults to True
        Rule(LinkExtractor(allow=r'page-\d+.html')),
    )

    def parse_item(self, response):
        title = response.css('div.product_main h1::text').extract_first()
        price = response.css('div.product_main p.price_color::text').extract_first()
        # Simply return a dict
        yield {
            'title': title,
            'price': price,
        }
Run scrapy crawl book_toscrape -o sell.csv and you will get the title and price of every book on the site.
Summary
This post gave a quick introduction to LinkExtractor, which extracts links that follow a fixed pattern, and then to CrawlSpider, the spider class it is most often paired with; CrawlSpider is well suited to crawling sites with a regular structure. In the next post we will look at how to use Scrapy middleware.
Original article (Chinese): https://www.jianshu.com/p/0775a4df1fe4