Sample code for multithreaded scraping of Xici Proxy with Python
Author: 王瑞  Date: 2021-05-27 04:32:57
Xici Proxy (西刺代理) was a Chinese IP proxy site. Since the site has shut down, I am publishing my original code here for anyone who wants to study it.
Mirror address: https://www.blib.cn/url/xcdl.html
First, find all tr tags, together with the tags whose class is "odd", and extract them.
Then, within each tr tag, find all of its td tags and keep only the ones at positions [1, 2, 5, 9] (which correspond to the IP, port, protocol, and verification-time columns); the rest are discarded.
With that, we can write the code to scrape a single page and save the extracted results to a file.
```python
import sys, re, threading
import requests, lxml
from queue import Queue
import argparse
from bs4 import BeautifulSoup

head = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36"}

if __name__ == "__main__":
    ip_list = []
    fp = open("SpiderAddr.json", "a+", encoding="utf-8")
    url = "https://www.blib.cn/url/xcdl.html"
    request = requests.get(url=url, headers=head)
    soup = BeautifulSoup(request.content, "lxml")
    data = soup.find_all(name="tr", attrs={"class": re.compile("|[^odd]")})
    for item in data:
        soup_proxy = BeautifulSoup(str(item), "lxml")
        proxy_list = soup_proxy.find_all(name="td")
        # Keep only the IP, port, protocol, and verification-time columns
        for i in [1, 2, 5, 9]:
            ip_list.append(proxy_list[i].string)
        print("[+] Crawled record: {} saved".format(ip_list))
        fp.write(str(ip_list) + '\n')
        ip_list.clear()
    fp.close()
```
After scraping, the results are saved to SpiderAddr.json.
Finally, a second script converts each saved record into a format that an SSR proxy tool can use directly, e.g. {'http': 'http://119.101.112.31:9999'}.
```python
import sys, re, threading
import requests, lxml
from queue import Queue
import argparse
from bs4 import BeautifulSoup

if __name__ == "__main__":
    result = []
    fp = open("SpiderAddr.json", "r")
    data = fp.readlines()
    fp.close()
    for item in data:
        dic = {}
        # Each saved line is a list literal: [IP, port, protocol, verify time]
        read_line = eval(item.replace("\n", ""))
        Protocol = read_line[2].lower()
        if Protocol == "http":
            dic[Protocol] = "http://" + read_line[0] + ":" + read_line[1]
        else:
            dic[Protocol] = "https://" + read_line[0] + ":" + read_line[1]
        result.append(dic)
    print(result)
```
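As a safer variant of the eval()-based parsing above, the same conversion step can be sketched with `ast.literal_eval`, which only accepts Python literals and so cannot execute arbitrary code from the file. This is an alternative, not the original author's approach, and the sample line below is hypothetical:

```python
import ast

def to_proxy_dict(line):
    # Parse one saved record: [IP, port, protocol, verify time]
    ip, port, protocol, _checked = ast.literal_eval(line.strip())
    scheme = protocol.lower()
    return {scheme: "{}://{}:{}".format(scheme, ip, port)}

print(to_proxy_dict("['119.101.112.31', '9999', 'HTTP', '20-05-27']\n"))
# → {'http': 'http://119.101.112.31:9999'}
```

Each resulting dict can be passed directly to the `proxies` parameter of `requests.get()`.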
The complete multithreaded version is shown below.
```python
import sys, re, threading
import requests, lxml
from queue import Queue
import argparse
from bs4 import BeautifulSoup

head = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.100 Safari/537.36"}

class AgentSpider(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self._queue = queue

    def run(self):
        ip_list = []
        fp = open("SpiderAddr.json", "a+", encoding="utf-8")
        while not self._queue.empty():
            url = self._queue.get()
            try:
                request = requests.get(url=url, headers=head)
                soup = BeautifulSoup(request.content, "lxml")
                data = soup.find_all(name="tr", attrs={"class": re.compile("|[^odd]")})
                for item in data:
                    soup_proxy = BeautifulSoup(str(item), "lxml")
                    proxy_list = soup_proxy.find_all(name="td")
                    # Keep only the IP, port, protocol, and verification-time columns
                    for i in [1, 2, 5, 9]:
                        ip_list.append(proxy_list[i].string)
                    print("[+] Crawled record: {} saved".format(ip_list))
                    fp.write(str(ip_list) + '\n')
                    ip_list.clear()
            except Exception:
                pass
        fp.close()

def StartThread(count):
    queue = Queue()
    threads = []
    for item in range(1, int(count) + 1):
        url = "https://www.xicidaili.com/nn/{}".format(item)
        queue.put(url)
        print("[+] Generated crawl URL {}".format(url))
    for item in range(count):
        threads.append(AgentSpider(queue))
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Conversion function
def ConversionAgentIP(FileName):
    result = []
    fp = open(FileName, "r")
    data = fp.readlines()
    fp.close()
    for item in data:
        dic = {}
        read_line = eval(item.replace("\n", ""))
        Protocol = read_line[2].lower()
        if Protocol == "http":
            dic[Protocol] = "http://" + read_line[0] + ":" + read_line[1]
        else:
            dic[Protocol] = "https://" + read_line[0] + ":" + read_line[1]
        result.append(dic)
    return result

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--page", dest="page", help="number of pages to crawl")
    parser.add_argument("-f", "--file", dest="file", help="convert scraped results (e.g. SpiderAddr.json) to proxy format")
    args = parser.parse_args()
    if args.page:
        StartThread(int(args.page))
    elif args.file:
        dic = ConversionAgentIP(args.file)
        for item in dic:
            print(item)
    else:
        parser.print_help()
```
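The StartThread() logic above is a standard queue-draining worker pattern: fill a shared Queue with URLs, then start N threads that each pull work until the queue is empty. Here is a minimal, self-contained sketch of the same technique, with squaring numbers as a stand-in for the page requests. One caveat: checking `empty()` and then calling a blocking `get()` can race between threads, so this sketch uses `get_nowait()` with an `Empty` handler instead:

```python
import threading
from queue import Queue, Empty

results = []
lock = threading.Lock()

def worker(q):
    # Drain the shared queue; get_nowait() avoids the race between an
    # empty() check and a subsequent blocking get().
    while True:
        try:
            n = q.get_nowait()
        except Empty:
            break
        with lock:
            results.append(n * n)

q = Queue()
for i in range(10):
    q.put(i)

threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```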
Source: https://www.cnblogs.com/LyShark/p/13850457.html
Tags: python, multithreading, web scraper, proxy