Python Big Data: A Detailed Guide to Scraping Data from Web Pages

Author: xuehyunyu  Date: 2023-10-06 22:10:53

This article demonstrates, with a working example, how to scrape data from a web page with Python and Scrapy. It is shared for your reference; the details follow below.

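The files below belong to a standard Scrapy project. Assuming the project was created with scrapy startproject jredu (the project name used throughout this article), the layout would look roughly like this:

jredu/
    scrapy.cfg
    jredu/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            myspider.py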

myspider.py:


#!/usr/bin/python
# -*- coding:utf-8 -*-
from scrapy.spiders import Spider
from lxml import etree
from jredu.items import JreduItem

class JreduSpider(Spider):
    name = 'tt'  # the spider's name; required and must be unique
    allowed_domains = ['sohu.com']
    start_urls = [
        'http://www.sohu.com'
    ]

    def parse(self, response):
        content = response.body.decode('utf-8')
        dom = etree.HTML(content)
        for ul in dom.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
            lis = ul.xpath("./li")
            for li in lis:
                item = JreduItem()  # create an item instance
                if ul.index(li) == 0:
                    # the first <li> wraps its headline in a <strong> tag
                    strong = li.xpath("./a/strong/text()")
                    item['title'] = strong[0]
                    item['href'] = li.xpath("./a/@href")[0]
                else:
                    la = li.xpath("./a[last()]/text()")
                    item['title'] = la[0]
                    item['href'] = li.xpath("./a[last()]/@href")[0]
                yield item
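For reference, the same extraction can be done without decoding the body and calling lxml by hand, using Scrapy's built-in XPath selectors on the response object. This is only a sketch of that alternative, not part of the original project (enumerate replaces the ul.index call):

# A sketch of the same parse logic using Scrapy's own selectors
def parse(self, response):
    for ul in response.xpath("//div[@class='focus-news-box']/div[@class='list16']/ul"):
        for index, li in enumerate(ul.xpath("./li")):
            item = JreduItem()
            if index == 0:
                item['title'] = li.xpath("./a/strong/text()").extract_first()
                item['href'] = li.xpath("./a/@href").extract_first()
            else:
                item['title'] = li.xpath("./a[last()]/text()").extract_first()
                item['href'] = li.xpath("./a[last()]/@href").extract_first()
            yield item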

items.py:


# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class JreduItem(scrapy.Item):  # comparable to an entity class in Java
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()  # each field is a Field object
    href = scrapy.Field()
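A scrapy.Item supports dict-style access, so fields can be set and read like dictionary keys, and dict(item) converts an item for serialization, which is exactly what the pipeline below relies on. A small illustration (the values are made up):

# Items behave much like dictionaries
item = JreduItem()
item['title'] = u'Example headline'
item['href'] = 'http://www.sohu.com/some-article'
print(dict(item))  # {'title': u'Example headline', 'href': 'http://www.sohu.com/some-article'}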

middlewares.py (the default template generated by scrapy startproject, left unmodified here):


# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

class JreduSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

pipelines.py:


# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import codecs
import json

class JreduPipeline(object):
    def __init__(self):
        # open the output file once, when the pipeline is created
        self.file = codecs.open("data.txt", mode="w", encoding="utf-8")

    def process_item(self, item, spider):
        # serialize each item as one JSON object per line;
        # ensure_ascii=False keeps Chinese text readable instead of \uXXXX escapes
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):
        self.file.close()
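With this pipeline, data.txt ends up containing one JSON object per line (the JSON Lines format), so the results are easy to consume afterwards. As a quick sketch, the file can be read back like this:

# Read the scraped results back, one JSON object per line
import codecs
import json

with codecs.open("data.txt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record['title'], record['href'])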

settings.py:


# -*- coding: utf-8 -*-
# Scrapy settings for jredu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#   http://doc.scrapy.org/en/latest/topics/settings.html
#   http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#   http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'jredu'
SPIDER_MODULES = ['jredu.spiders']
NEWSPIDER_MODULE = 'jredu.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jredu (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#  'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#  'jredu.middlewares.JreduSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#  'jredu.middlewares.MyCustomDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#  'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'jredu.pipelines.JreduPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
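The only non-default change above is the ITEM_PIPELINES entry, which registers JreduPipeline with the crawl; the number 300 is a priority on a 0-1000 scale, and lower values run earlier. If the target site throttles requests, optional politeness settings such as the following can also be enabled (they are not part of the original configuration):

# Optional politeness settings (not enabled in the original project)
DOWNLOAD_DELAY = 1
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'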

Finally, we need an entry point to run the crawler:

main.py:


#!/usr/bin/python
# -*- coding:utf-8 -*-
# entry point for running the spider
from scrapy import cmdline

cmdline.execute("scrapy crawl tt".split())
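Alternatively, the spider can be launched from the project root with the command scrapy crawl tt, or run in-process with Scrapy's CrawlerProcess API. The following is only a sketch of the latter approach:

# A sketch of an alternative entry point using CrawlerProcess
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('tt')  # refer to the spider by its name
process.start()      # blocks until the crawl finishes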

I hope this article is helpful to readers working on Python programming.

Source: https://blog.csdn.net/xuehyunyu/article/details/74562406
