A small Python crawler that starts from Baidu and keeps searching outward
Posted: 2022-05-20 10:46:18

The code uses the BeautifulSoup library to parse the HTML documents. Since I only extract keywords from the page title, a regular expression could replace it. It also uses jieba, a Chinese word-segmentation library, and chardet, which guesses a byte string's character encoding. I originally intended to make the crawler multithreaded, but I got myself confused and gave up on that.
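As noted above, a plain regex can stand in for BeautifulSoup when all you need is the title. A minimal sketch of that alternative (written in Python 3 for brevity; `title_keywords` is an illustrative name, not part of the crawler):

```python
import re

def title_keywords(html):
    # Grab the contents of the <title> tag, the only field the
    # crawler actually feeds to jieba.
    m = re.search(r'<title[^>]*>(.*?)</title>', html,
                  re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else None

print(title_keywords('<html><head><title>百度一下</title></head></html>'))
```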
```python
#coding:utf-8
import re
import urllib
import urllib2
import time
import Queue
import jieba
import chardet
from BeautifulSoup import BeautifulSoup as BS

DEEP = 1000                 # maximum number of pages to crawl
PATH = "c:\\test\\"
urlQueue = Queue.Queue()

def pachong():
    # Seed URL: the crawl starts from Baidu's front page.
    url = 'http://www.baidu.com'
    return url

def getPageUrl(html):
    # Pull every href out of <a ...> tags, case-insensitively.
    reUrl = re.compile(r'<\s*[Aa]\s+[^>]*?[Hh][Rr][Ee][Ff]\s*=\s*["\']?([^>"\']+)["\']?[^>]*>')
    urls = reUrl.findall(html)
    for url in urls:
        # Skip very short links and javascript: pseudo-links.
        if len(url) > 10 and url.find('javascript') == -1:
            urlQueue.put(url)

def getContents(url):
    try:
        # Drop any fragment, then percent-encode unsafe characters.
        url = urllib.quote(url.split('#')[0].encode('utf-8'),
                           safe="%/:=&?~#+!$,;'@()*[]")
        req = urllib2.urlopen(url)
        res = req.read()
        # Guess the page's encoding, then normalise to gb2312.
        code = chardet.detect(res)['encoding']
        res = res.decode(str(code), 'ignore')
        res = res.encode('gb2312', 'ignore')
        return res
    except urllib2.HTTPError, e:
        print e.code
        return None
    except urllib2.URLError, e:
        print str(e)
        return None

def writeToFile(html, url):
    # Save the raw page under a timestamped file name.
    fp = file(PATH + str(time.time()) + '.html', 'w')
    fp.write(html)
    fp.close()

def getKeyWords(html):
    code = chardet.detect(html)['encoding']
    if code == 'ISO-8859-2':
        # chardet sometimes misreports gbk pages; re-encode and re-detect.
        html = html.decode('gbk', 'ignore').encode('gb2312', 'ignore')
        code = chardet.detect(html)['encoding']
    soup = BS(html, fromEncoding="gb2312")
    titleTag = soup.title
    if titleTag is None or not titleTag.contents:
        return
    titleKeyWords = titleTag.contents[0]
    cutWords(titleKeyWords)

def cutWords(contents):
    print contents
    # Segment the title with jieba's search-engine mode and log the words.
    res = jieba.cut_for_search(contents)
    res = '\n'.join(res)
    print res
    res = res.encode('gb2312', 'ignore')
    keyWords = file(PATH + 'cutKeyWords.txt', 'a')
    keyWords.write(res)
    keyWords.close()

def start():
    crawled = 0
    while not urlQueue.empty() and crawled < DEEP:
        url = urlQueue.get()
        html = getContents(url)
        if html is None:
            continue
        getPageUrl(html)
        getKeyWords(html)
        #writeToFile(html, url)
        crawled += 1

if __name__ == '__main__':
    startUrl = pachong()
    urlQueue.put(startUrl)
    start()
```
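The link-harvesting step is easy to exercise in isolation. Here is a sketch of the same regex and filters in Python 3 (`enqueue_links` and the sample HTML are illustrative, not from the original script):

```python
import re
from queue import Queue

# Case-insensitive href extractor, equivalent to the crawler's pattern.
RE_URL = re.compile(r'<\s*a\s+[^>]*?href\s*=\s*["\']?([^>"\']+)["\']?',
                    re.IGNORECASE)

def enqueue_links(html, url_queue):
    # Same filters as the crawler: skip short links and javascript: links.
    for url in RE_URL.findall(html):
        if len(url) > 10 and 'javascript' not in url:
            url_queue.put(url)

q = Queue()
enqueue_links('<a href="http://www.example.com/page">x</a>'
              '<a href="javascript:void(0)">y</a>'
              '<a href="/a">z</a>', q)
print(q.qsize())  # only the first link survives both filters
```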