
Fixing ValueError('Missing scheme in request url: %s' % self._url)

sjfgod
Published 2017/09/01 14:38

Copyright notice: original article; learning and discussion welcome!

While crawling images with Scrapy's ImagesPipeline, the run fails with:

Traceback (most recent call last):
  File "/home/lcy/.local/lib/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/pipelines/media.py", line 62, in process_item
    requests = arg_to_iter(self.get_media_requests(item, info))
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/pipelines/images.py", line 147, in get_media_requests
    return [Request(x) for x in item.get(self.images_urls_field, [])]
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 57, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: h


Checking the documentation made the cause clear: the field named by IMAGES_URLS_FIELD must hold a list of URLs. ImagesPipeline iterates over that field and builds one Request per element, so when the field is a single string, iteration yields individual characters; the first Request is then built from 'h' and fails with the missing-scheme error. The fix is simply to pass the URL(s) as a list.
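A minimal sketch (with a made-up image URL) shows why the error message ends in a single 'h': iterating a string yields its characters, and the pipeline builds one Request per element of the field.

```python
# Sketch of what ImagesPipeline's get_media_requests effectively does:
# it iterates the images-URLs field and builds one Request per element.
img = 'http://pic.example.com/demo.jpg'  # hypothetical URL, stored as a plain string

elements = [x for x in img]       # iterating a string yields characters
assert elements[0] == 'h'         # Request('h') -> "Missing scheme in request url: h"

img_as_list = [img]               # the fix: store the URL inside a list
assert [x for x in img_as_list] == ['http://pic.example.com/demo.jpg']
```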

Source code below.

Before the fix:

# -*- coding: utf-8 -*-
import scrapy
from imgspider.items import QiubaiPicItem
import sys
reload(sys)
sys.setdefaultencoding( "utf-8" )
class QiubaipicSpider(scrapy.Spider):
    name = "qiubaiPic"
    allowed_domains = ["qiushibaike.com"]
    start_urls = ['http://qiushibaike.com/']

    def parse(self, response):
        # page_value=response.xpath('//*[@id="content-left"]/ul/li[8]/a/span/text()').extract()[0]
        # for page in range(1,int(page_value)):
        #     url='http://www.qiushibaike.com/pic/page/'+str(page)
        #     yield scrapy.Request(url,callback=self.parse_detail)

        url='http://www.qiushibaike.com/pic/page/3'
        yield scrapy.Request(url,callback=self.parse_detail)

    def parse_detail(self, response):
        item = []
        divs = response.xpath('//*[@id="content-left"]/div[@class="article block untagged mb15"]')
        for div in divs:
            QiubaiPic = QiubaiPicItem()
            src = div.xpath('div[@class="thumb"]/a/img/@src').extract()[0]
            img_path = 'http://' + src[2:]
            QiubaiPic['img'] = img_path  # BUG: a single string, not a list
            item.append(QiubaiPic)
        return item
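As a side note, the `'http://' + src[2:]` line exists because the page serves protocol-relative `src` values (`//host/path`), which have no scheme either. A quick sketch with a placeholder value:

```python
# Protocol-relative src values ("//host/path") carry no scheme,
# so the spider strips the leading "//" and prepends one explicitly.
src = '//pic.qiushibaike.com/system/pictures/demo.jpg'  # placeholder value
img_path = 'http://' + src[2:]
assert img_path == 'http://pic.qiushibaike.com/system/pictures/demo.jpg'
```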


After the fix:

# -*- coding: utf-8 -*-
import scrapy
from imgspider.items import QiubaiPicItem
import sys
reload(sys)
sys.setdefaultencoding( "utf-8" )
class QiubaipicSpider(scrapy.Spider):
    name = "qiubaiPic"
    allowed_domains = ["qiushibaike.com"]
    start_urls = ['http://qiushibaike.com/']

    def parse(self, response):
        # page_value=response.xpath('//*[@id="content-left"]/ul/li[8]/a/span/text()').extract()[0]
        # for page in range(1,int(page_value)):
        #     url='http://www.qiushibaike.com/pic/page/'+str(page)
        #     yield scrapy.Request(url,callback=self.parse_detail)

        url='http://www.qiushibaike.com/pic/page/3'
        yield scrapy.Request(url,callback=self.parse_detail)

    def parse_detail(self, response):
        img_paths = []
        divs = response.xpath('//*[@id="content-left"]/div[@class="article block untagged mb15"]')
        for div in divs:
            src = div.xpath('div[@class="thumb"]/a/img/@src').extract()[0]
            img_paths.append('http://' + src[2:])
        QiubaiPic = QiubaiPicItem()
        QiubaiPic['img'] = img_paths  # a list of URLs, as ImagesPipeline expects
        return [QiubaiPic]
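An alternative fix (a sketch, not the author's code) is to yield one item per image, each carrying a one-element list. Plain dicts stand in for QiubaiPicItem here so the snippet runs on its own:

```python
# Hypothetical variant of parse_detail: one item per image, each holding a
# one-element URL list -- still a list, as ImagesPipeline requires.
def build_items(srcs):
    for src in srcs:
        yield {'img': ['http://' + src[2:]]}

items = list(build_items(['//a.example/1.jpg', '//a.example/2.jpg']))
assert items[0]['img'] == ['http://a.example/1.jpg']
assert len(items) == 2
```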

 

settings.py:

# -*- coding: utf-8 -*-

import random

BOT_NAME = 'imgspider'

SPIDER_MODULES = ['imgspider.spiders']
NEWSPIDER_MODULE = 'imgspider.spiders'
# Browser User-Agent headers; these are required
USER_AGENT_LIST=[
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1" \
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6", \
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1", \
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5", \
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3", \
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24", \
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"

]
# Pick a random User-Agent once at startup
USER_AGENT = random.choice(USER_AGENT_LIST)

# Whether to obey robots.txt
ROBOTSTXT_OBEY = False
# Maximum concurrent requests
CONCURRENT_REQUESTS = 32
# Download delay, in seconds
DOWNLOAD_DELAY = 3
# Cookies switch; disabling is recommended for crawling
COOKIES_ENABLED = False

# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
# Item field that holds the list of image URLs
IMAGES_URLS_FIELD = 'img'
# Directory where downloaded images are stored
IMAGES_STORE = r'/home/lcy/pics'
LOG_FILE = "scrapy.log"
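The post never shows imgspider/items.py; a minimal sketch consistent with the spider and with IMAGES_URLS_FIELD = 'img' might look like this (only 'img' is confirmed by the post; the 'images' field is an assumption based on the pipeline's default result field):

```python
# items.py -- hypothetical reconstruction; only 'img' appears in the post.
import scrapy

class QiubaiPicItem(scrapy.Item):
    # IMAGES_URLS_FIELD = 'img' points ImagesPipeline at this field,
    # so it must hold a LIST of image URLs.
    img = scrapy.Field()
    # ImagesPipeline stores download results in the field named by
    # IMAGES_RESULT_FIELD ('images' by default).
    images = scrapy.Field()
```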