Scraping the Maoyan Top 100 movies: putting requests and BeautifulSoup to work
This is the third post in my hands-on scraping series: requests for fetching, BeautifulSoup for parsing, and MySQL for storage.
If you are learning web scraping, this is a good exercise to follow along with. I suggest keeping the Maoyan Top 100 page open while you read and picking out the tags yourself; I skip the detailed page analysis here, but the general approach is covered in my earlier posts.
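To make the skipped analysis concrete: every movie on the board sits inside a <dd> block under <dl class="board-wrapper">. The snippet below runs the same selectors used in the full script against a made-up, simplified fragment (the real Maoyan markup has more nesting and attributes), just to show what each field extraction returns:

from bs4 import BeautifulSoup

# Hypothetical, simplified <dd> fragment; only the class names match the
# selectors used in the full script below.
dd_html = '''
<dd>
  <i class="board-index">1</i>
  <p class="name">Example Movie</p>
  <p class="star">主演:Actor A,Actor B</p>
  <p class="releasetime">上映时间:1993-01-01</p>
  <p class="score">9.5</p>
</dd>
'''

dd = BeautifulSoup(dd_html, 'lxml')
board_index = dd.find_all('i')[0].text
title = dd.find_all('p', class_='name')[0].text
star = dd.find_all('p', class_='star')[0].text.strip()[3:]                # slice off the "主演:" prefix
releasetime = dd.find_all('p', class_='releasetime')[0].text.strip()[5:]  # slice off the "上映时间:" prefix
score = dd.find_all('p', class_='score')[0].text
print(board_index, title, star, releasetime, score)  # -> 1 Example Movie Actor A,Actor B 1993-01-01 9.5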
This Maoyan Top 100 exercise is mainly a review of requests and BeautifulSoup, plus creating a table and choosing column types in SQL. Here is the full source code:
import requests
from bs4 import BeautifulSoup
from requests.exceptions import RequestException

headers = {
    'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36"}


def get_page_index(url):
    """Fetch one page of the board and return its HTML, or None on failure."""
    try:
        response = requests.get(url, headers=headers)  # plain GET request
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print("Request failed")
        return None


def parse_one_page(html):
    """Print rank, title, cast, release date and score for every movie on the page."""
    soup = BeautifulSoup(html, 'lxml')
    board_wrapper = soup.find_all('dl', class_='board-wrapper')[0]
    for dd in board_wrapper.find_all('dd'):
        title = dd.find_all('p', class_='name')[0].text
        board_index = dd.find_all('i')[0].text
        star = dd.find_all('p', class_='star')[0].text.strip()[3:]                # drop the "主演:" prefix
        releasetime = dd.find_all('p', class_='releasetime')[0].text.strip()[5:]  # drop the "上映时间:" prefix
        score = dd.find_all('p', class_='score')[0].text
        print(board_index, title, star, releasetime, score)


def main():
    # The board is paginated, ten movies per page, selected via the offset parameter.
    offsets = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
    urls = ['https://maoyan.com/board/4?offset={}'.format(i) for i in offsets]
    for url in urls:
        html = get_page_index(url)
        if html is None:
            continue
        parse_one_page(html)


if __name__ == "__main__":
    main()
Running this prints the rank, title, cast, release date, and score of all 100 movies to the console.
Saving to the database
Step 01: create the table

CREATE TABLE maoyantop100 (
    board_index INT NOT NULL,
    title VARCHAR(20) NOT NULL,
    star VARCHAR(50) NOT NULL,
    releasetime VARCHAR(50) NOT NULL,
    score FLOAT NOT NULL
);

-- Convert the table to utf8 so that Chinese titles and cast names store correctly.
ALTER TABLE maoyantop100 CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
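If you prefer to create the table from Python rather than the MySQL shell, here is a minimal sketch with pymysql; the connection details (localhost, root, the 58yuesao database) mirror the script below and should be replaced with your own:

import pymysql

# Assumed connection parameters; adjust the masked password and database name.
conn = pymysql.connect(host="localhost", user="root", password="******",
                       db="58yuesao", port=3306, charset="utf8")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS maoyantop100 (
        board_index INT NOT NULL,
        title VARCHAR(20) NOT NULL,
        star VARCHAR(50) NOT NULL,
        releasetime VARCHAR(50) NOT NULL,
        score FLOAT NOT NULL
    )
""")
# Same charset conversion as above, so Chinese text stores correctly.
cur.execute("ALTER TABLE maoyantop100 CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci")
conn.commit()
conn.close()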
Step 02: scrape the board and insert each movie

import pymysql
import requests
from bs4 import BeautifulSoup
from requests.exceptions import RequestException

headers = {
    'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36"}


def get_page_index(url):
    """Fetch one page of the board and return its HTML, or None on failure."""
    try:
        response = requests.get(url, headers=headers)  # plain GET request
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print("Request failed")
        return None


def parse_one_page(html):
    """Yield (rank, title, cast, release date, score) for every movie on the page."""
    soup = BeautifulSoup(html, 'lxml')
    board_wrapper = soup.find_all('dl', class_='board-wrapper')[0]
    for dd in board_wrapper.find_all('dd'):
        title = dd.find_all('p', class_='name')[0].text
        board_index = dd.find_all('i')[0].text
        star = dd.find_all('p', class_='star')[0].text.strip()[3:]
        releasetime = dd.find_all('p', class_='releasetime')[0].text.strip()[5:]
        score = dd.find_all('p', class_='score')[0].text
        yield board_index, title, star, releasetime, score


def save_to_mysql(board_index, title, star, releasetime, score):
    """Insert one movie into maoyantop100."""
    cur = conn.cursor()  # the cursor is what actually executes SQL statements
    insert_data = ("INSERT INTO maoyantop100(board_index, title, star, releasetime, score) "
                   "VALUES(%s, %s, %s, %s, %s)")
    cur.execute(insert_data, (board_index, title, star, releasetime, score))
    conn.commit()


conn = pymysql.connect(host="localhost", user="root", password='180****3940',
                       db='58yuesao', port=3306, charset="utf8")


def main():
    offsets = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
    urls = ['https://maoyan.com/board/4?offset={}'.format(i) for i in offsets]
    for url in urls:
        html = get_page_index(url)
        if html is None:
            continue
        for movie in parse_one_page(html):
            save_to_mysql(*movie)


if __name__ == "__main__":
    main()
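After a full run it is worth sanity-checking the table. A small sketch, again assuming the same connection details, counts the stored rows and prints the first few:

import pymysql

conn = pymysql.connect(host="localhost", user="root", password="******",
                       db="58yuesao", port=3306, charset="utf8")
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM maoyantop100")
print("rows stored:", cur.fetchone()[0])  # should be 100 after a complete run
cur.execute("SELECT board_index, title, score FROM maoyantop100 ORDER BY board_index LIMIT 5")
for row in cur.fetchall():
    print(row)
conn.close()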
One pitfall worth flagging: if only the first movie of each page ends up in the database even though the plain print version shows all ten, the usual cause is a return statement placed inside the parsing loop, which makes parse_one_page exit after the first <dd>. Yielding each movie (or collecting them into a list) instead, as the script above does, stores all 100 rows.
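The difference boils down to return versus yield inside a loop; a tiny self-contained illustration:

def first_only(items):
    for x in items:
        return x        # exits the function on the first iteration

def all_of_them(items):
    for x in items:
        yield x         # produces every item, one at a time

print(first_only([1, 2, 3]))         # 1
print(list(all_of_them([1, 2, 3])))  # [1, 2, 3]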
Original article: https://www.jianshu.com/p/aa19130543fa