Python crawler development: an example of scraping the Maoyan movie TOP100 with the requests library and multiple threads
Author: real向往  Date: 2021-02-25 11:13:42
The plan for scraping the Maoyan movie TOP100 with the requests library and multiple threads:
View the page source
Scrape a single page
Extract the information with regular expressions
Write all of the Maoyan TOP100 information to a file
Scrape with multiple threads
Platform: Windows
Python version: Python 3.7
IDE: Sublime Text
Browser: Chrome
1. View the Maoyan TOP100 page source
Press F12 to inspect the page source; each movie's information sits inside a "<dd></dd>" tag.
Expanding one of these tags shows the movie's details:
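The original screenshot is not reproduced here, so the following is a simplified, hand-written approximation of what one such tag looks like (illustrative only, not the exact Maoyan markup):

<dd>
  <i class="board-index board-index-1">1</i>
  <img data-src="https://example.com/poster.jpg" alt="霸王别姬"/>
  <p class="name"><a href="#">霸王别姬</a></p>
  <p class="star">主演:张国荣,张丰毅,巩俐</p>
  <p class="releasetime">上映时间:1993-01-01</p>
  <p class="score"><i class="integer">9.</i><i class="fraction">5</i></p>
</dd>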
2. Scrape a single page
Open the Maoyan movie site in a browser, click "榜单" (Rankings), then "TOP100榜" (TOP100 list) (screenshot omitted).
The following code fetches the page source:
# -*- coding: utf-8 -*-
import requests
from requests.exceptions import RequestException

# Maoyan has anti-scraping measures; setting these headers lets the crawl through
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

def main():
    url = "https://maoyan.com/board/4"
    html = get_one_page(url, headers)
    print(html)

if __name__ == '__main__':
    main()
Running it prints the page's raw HTML (screenshot of the output omitted).
3. Extract the information with regular expressions
The marked fields — ranking index, poster image URL, title, actors, release time, and score — are what we want to extract. The code:
# -*- coding: utf-8 -*-
import requests
import re
from requests.exceptions import RequestException

# Maoyan has anti-scraping measures; setting these headers lets the crawl through
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

# Extract the information with a regular expression
def parse_one_page(html):
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the "主演:" prefix
            'time': item[4].strip()[5:],    # drop the "上映时间:" prefix
            'score': item[5] + item[6]
        }

def main():
    url = "https://maoyan.com/board/4"
    html = get_one_page(url, headers)
    for item in parse_one_page(html):
        print(item)

if __name__ == '__main__':
    main()
Each movie is now printed as a dictionary (screenshot of the output omitted).
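To see exactly what the regular expression captures, here is a minimal sketch that runs parse_one_page on a hand-written <dd> fragment (the fragment is an assumption modeled on the page structure, not actual Maoyan markup):

# sample is a hypothetical, condensed <dd> block mimicking the board page
sample = ('<dd><i class="board-index">1</i><img data-src="https://example.com/p.jpg"/>'
          '<p class="name"><a href="#">霸王别姬</a></p><p class="star">主演:张国荣,张丰毅,巩俐</p>'
          '<p class="releasetime">上映时间:1993-01-01</p>'
          '<p class="score"><i class="integer">9.</i><i class="fraction">5</i></p></dd>')
for item in parse_one_page(sample):
    print(item)
# {'index': '1', 'image': 'https://example.com/p.jpg', 'title': '霸王别姬',
#  'actor': '张国荣,张丰毅,巩俐', 'time': '1993-01-01', 'score': '9.5'}

Note how the .strip()[3:] and .strip()[5:] slices drop the "主演:" and "上映时间:" prefixes from the captured actor and release-time strings.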
4. Write all of the Maoyan TOP100 information to a file
The code above scrapes only a single page. To crawl all 100 movies, watch how the URL changes as you page through the list: "?offset=0" is appended to the original URL, and the offset value steps through 0, 10, 20, and so on, as sketched below.
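The ten page URLs can therefore be generated directly from the offsets, for example:

base = "https://maoyan.com/board/4?offset="
urls = [base + str(i * 10) for i in range(10)]
# ['https://maoyan.com/board/4?offset=0', 'https://maoyan.com/board/4?offset=10',
#  ..., 'https://maoyan.com/board/4?offset=90']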
The full implementation:
# -*- coding: utf-8 -*-
import requests
import re
import json
import os
from requests.exceptions import RequestException

# Maoyan has anti-scraping measures; setting these headers lets the crawl through
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

# Extract the information with a regular expression
def parse_one_page(html):
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the "主演:" prefix
            'time': item[4].strip()[5:],    # drop the "上映时间:" prefix
            'score': item[5] + item[6]
        }

# Append each movie's record to the output file
def write_to_file(content):
    # encoding='utf-8' and ensure_ascii=False keep the Chinese text readable in the file
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')

# Download the movie poster
def save_image_file(url, path):
    jd = requests.get(url)
    if jd.status_code == 200:
        with open(path, 'wb') as f:
            f.write(jd.content)

def main(offset):
    url = "https://maoyan.com/board/4?offset=" + str(offset)
    html = get_one_page(url, headers)
    if not os.path.exists('covers'):
        os.mkdir('covers')
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)
        save_image_file(item['image'], 'covers/' + item['title'] + '.jpg')

if __name__ == '__main__':
    # Crawl every page
    for i in range(10):
        main(i * 10)
The crawl writes one JSON object per line to result.txt and saves each poster under covers/ (screenshot of the results omitted).
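Because each line of result.txt is a standalone JSON object, the results can be read back with a short sketch like this:

import json

# rebuild the list of movie dictionaries from the line-delimited JSON file
with open('result.txt', encoding='utf-8') as f:
    movies = [json.loads(line) for line in f]
print(len(movies), movies[0]['title'])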
5. Multi-threaded scraping
Comparing the two approaches, the pooled run finishes noticeably faster than the serial run (timing screenshots omitted). A simple timing harness is sketched below.
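The original post demonstrated the speed-up with screenshots only; a minimal timing sketch, assuming the main(offset) function from section 4, could look like this:

import time
from multiprocessing import Pool

if __name__ == '__main__':
    start = time.time()
    for i in range(10):                      # serial run, one page after another
        main(i * 10)
    print('serial: %.1fs' % (time.time() - start))

    start = time.time()
    with Pool() as pool:                     # pooled run, pages in parallel
        pool.map(main, [i * 10 for i in range(10)])
    print('pooled: %.1fs' % (time.time() - start))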
The complete code:
# -*- coding: utf-8 -*-
import requests
import re
import json
import os
from requests.exceptions import RequestException
from multiprocessing import Pool

# Maoyan has anti-scraping measures; setting these headers lets the crawl through
headers = {
    'Content-Type': 'text/plain; charset=UTF-8',
    'Origin': 'https://maoyan.com',
    'Referer': 'https://maoyan.com/board/4',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}

# Fetch the page source
def get_one_page(url, headers):
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

# Extract the information with a regular expression
def parse_one_page(html):
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the "主演:" prefix
            'time': item[4].strip()[5:],    # drop the "上映时间:" prefix
            'score': item[5] + item[6]
        }

# Append each movie's record to the output file
def write_to_file(content):
    # encoding='utf-8' and ensure_ascii=False keep the Chinese text readable in the file
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')

# Download the movie poster
def save_image_file(url, path):
    jd = requests.get(url)
    if jd.status_code == 200:
        with open(path, 'wb') as f:
            f.write(jd.content)

def main(offset):
    url = "https://maoyan.com/board/4?offset=" + str(offset)
    html = get_one_page(url, headers)
    if not os.path.exists('covers'):
        os.mkdir('covers')
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)
        save_image_file(item['image'], 'covers/' + item['title'] + '.jpg')

if __name__ == '__main__':
    # Crawl all pages in parallel
    pool = Pool()
    pool.map(main, [i * 10 for i in range(10)])
    pool.close()
    pool.join()
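One caveat: multiprocessing.Pool actually spawns worker processes, not threads. For an I/O-bound crawl like this, the thread-backed multiprocessing.dummy.Pool exposes the same API and is closer to true multi-threading; a sketch of the swap:

from multiprocessing.dummy import Pool as ThreadPool  # same API, backed by threads

if __name__ == '__main__':
    pool = ThreadPool(4)   # 4 worker threads; the count is an arbitrary choice
    pool.map(main, [i * 10 for i in range(10)])
    pool.close()
    pool.join()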
This article walked through an example of scraping Maoyan movie TOP100 data with the requests library and multiple threads.
Source: https://blog.csdn.net/yuanfangPOET/article/details/81006521
Tags: python, crawler, requests