What to Do When requests Cannot Retrieve a Page's Source in Python

Author: henanlion  Date: 2023-04-24 07:38:04

While scraping http://skell.sketchengine.eu recently, I found that requests could not retrieve the full page content, so I use selenium to open the page in a simulated browser first, then grab the rendered page source and parse the example sentences out of it with BeautifulSoup. To keep the loop working across queries, the loop body calls refresh(), so that after the browser receives each new URL the page content is re-rendered; note that the script waits 2 seconds after the refresh, which lowers the chance of grabbing the page before it has loaded. To reduce the risk of being blocked, a Chrome User-Agent is also set. Here is the code:
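Part of the reason requests cannot see the sentences is visible in the URL itself: everything after the `#` is a URL fragment, which a browser interprets client-side and never sends to the server, so the server returns the same application shell regardless of the query word. A quick standard-library check:

```python
from urllib.parse import urlsplit

url = "https://skell.sketchengine.eu/#result?lang=en&query=happy&f=concordance"
parts = urlsplit(url)
# The fragment (everything after '#') is handled by the page's JavaScript;
# it is not part of the HTTP request, so requests cannot get word-specific HTML.
print(parts.fragment)  # result?lang=en&query=happy&f=concordance
```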

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from bs4 import BeautifulSoup
import time,re

# Path to the chromedriver executable
path = Service("D:\\MyDrivers\\chromedriver.exe")
# Run Chrome headless (no visible browser window)
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36')

# Create the Chrome instance

driver = webdriver.Chrome(service=path,options=chrome_options)
lst=["happy","help","evening","great","think","adapt"]

for word in lst:
   url="https://skell.sketchengine.eu/#result?lang=en&query="+word+"&f=concordance"
   driver.get(url)
   # refresh so the browser re-renders the page for the new URL
   driver.refresh()
   time.sleep(2)
   # page_source gives the rendered page source
   resp=driver.page_source
   # parse the rendered source
   soup=BeautifulSoup(resp,"html.parser")
   table = soup.find_all("td")
   with open("eps.txt",'a+',encoding='utf-8') as f:
       f.write(f"\nExamples for {word}\n")
   # the first six <td> cells hold the first three numbered examples
   for i in table[0:6]:
       text=i.text
       # collapse runs of whitespace into single spaces
       new=re.sub(r"\s+"," ",text)
       # append to the txt file; each numbered example starts on a new line
       with open("eps.txt",'a+',encoding='utf-8') as f:
           f.write(re.sub(r"^(\d+\.)",r"\n\1",new))
driver.quit()  # quit() also shuts down the chromedriver process

1. To speed up access, the browser window is not shown; this is configured through chrome_options (headless mode).

2. Regular expressions (re) are used to clean up the formatting of the extracted text.

3. table[0:6] selects the first six <td> cells, which correspond to the first three example sentences. The final output looks like this:
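The two re.sub calls in the loop can be tried on a standalone string to see what each one does (a minimal sketch with made-up input):

```python
import re

raw = "1.   This happy\n  mood lasted roughly until last autumn."
# collapse every run of whitespace (including the newline) into one space
collapsed = re.sub(r"\s+", " ", raw)
# if the cell starts with a number like "1.", push it onto a fresh line
numbered = re.sub(r"^(\d+\.)", r"\n\1", collapsed)
print(repr(collapsed))  # '1. This happy mood lasted roughly until last autumn.'
print(repr(numbered))   # '\n1. This happy mood lasted roughly until last autumn.'
```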

Examples for happy
1. This happy mood lasted roughly until last autumn. 
2. The lodging was neither convenient nor happy . 
3. One big happy family "fighting communism". 
Examples for help
1. Applying hot moist towels may help relieve discomfort. 
2. The intense light helps reproduce colors more effectively. 
3. My survival route are self help books. 
Examples for evening
1. The evening feast costs another $10. 
2. My evening hunt was pretty flat overall. 
3. The area nightclubs were active during evenings . 
Examples for great
1. The three countries represented here are three great democracies. 
2. Our three different tour guides were great . 
3. Your receptionist "crew" is great ! 
Examples for think
1. I said yes immediately without thinking everything through. 
2. This book was shocking yet thought provoking. 
3. He thought "disgusting" was more appropriate. 
Examples for adapt
1. The novel has been adapted several times. 
2. There are many ways plants can adapt . 
3. They must adapt quickly to changing deadlines. 

Addendum: after some optimization, scraping the example sentences is even quicker. The optimized code:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from bs4 import BeautifulSoup
import time,re
import os

# Path to the chromedriver executable
path = Service("D:\\MyDrivers\\chromedriver.exe")
# Run Chrome headless (no visible browser window)
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36')

# The Chrome instance is created inside main() below

def get_wordlist():
   wordlist=[]
   with open("wordlist.txt",'r',encoding='utf-8') as f:
       lines=f.readlines()
       for line in lines:
           word=line.strip()
           wordlist.append(word)
   return wordlist

def main(lst):
   driver = webdriver.Chrome(service=path,options=chrome_options)
   for word in lst:
       url="https://skell.sketchengine.eu/#result?lang=en&query="+word+"&f=concordance"
       driver.get(url)
       driver.refresh()
       time.sleep(2)
       # page_source gives the rendered page source
       resp=driver.page_source
       # parse the rendered source
       soup=BeautifulSoup(resp,"html.parser")
       table = soup.find_all("td")
       with open("examples.txt",'a+',encoding='utf-8') as f:
           f.write(f"\nExamples for {word}\n")
       for i in table[0:6]:
           text=i.text
           new=re.sub(r"\s+"," ",text)
           # write to examples.txt (the same file as the headers,
           # so os.startfile below opens the complete output)
           with open("examples.txt",'a+',encoding='utf-8') as f:
               f.write(new)
   driver.quit()

if __name__=="__main__":
   lst=get_wordlist()
   main(lst)
   os.startfile("examples.txt")
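The wordlist.txt file itself is not shown in the post; get_wordlist() assumes a plain-text file with one word per line. A self-contained sketch of that round trip (using a temporary file so nothing in the working directory is touched):

```python
import os
import tempfile

# Write a sample wordlist in the one-word-per-line format the script expects.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write("happy\nhelp\nevening\n")

# Read it back the same way get_wordlist() does: strip each line.
with open(path, "r", encoding="utf-8") as f:
    words = [line.strip() for line in f.readlines()]
os.remove(path)
print(words)  # ['happy', 'help', 'evening']
```

Note that os.startfile() is Windows-only; on other platforms the output file would need to be opened another way.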

Source: https://blog.csdn.net/henanlion/article/details/122757040
