
Fixing ValueError('Missing scheme in request url: %s' % self._url)

sjfgod · Published 2017/09/01 14:38

Copyright notice: original article; comments and discussion are welcome!

While using Scrapy's ImagesPipeline to download images, the spider fails at runtime with the following error:

Traceback (most recent call last):
  File "/home/lcy/.local/lib/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/pipelines/media.py", line 62, in process_item
    requests = arg_to_iter(self.get_media_requests(item, info))
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/pipelines/images.py", line 147, in get_media_requests
    return [Request(x) for x in item.get(self.images_urls_field, [])]
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/home/lcy/.local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 57, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: h


After checking the docs and re-reading the traceback, the cause is clear: ImagesPipeline's get_media_requests() builds one Request for every element of the image-URL field, so that field must be a list of URL strings. I had stored a single string instead, so the pipeline iterated over it character by character and tried to build a request from "h" (the first character of "http://..."), which of course has no scheme. The fix is simply to pass the URLs in as a list.
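A minimal illustration of the difference (the field name img matches the item used below; the URL is just a placeholder):

# Wrong: a bare string. get_media_requests() iterates over it character by
# character and ends up calling Request('h'), which has no scheme.
QiubaiPic['img'] = 'http://pic.example.com/demo.jpg'

# Right: a list of URL strings, one Request per element.
QiubaiPic['img'] = ['http://pic.example.com/demo.jpg']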


Source code:

Before the fix:

# -*- coding: utf-8 -*-
import scrapy
from imgspider.items import QiubaiPicItem
import sys
reload(sys)
sys.setdefaultencoding( "utf-8" )
class QiubaipicSpider(scrapy.Spider):
    name = "qiubaiPic"
    allowed_domains = ["qiushibaike.com"]
    start_urls = ['http://qiushibaike.com/']

    def parse(self, response):
        # page_value=response.xpath('//*[@id="content-left"]/ul/li[8]/a/span/text()').extract()[0]
        # for page in range(1,int(page_value)):
        #     url='http://www.qiushibaike.com/pic/page/'+str(page)
        #     yield scrapy.Request(url,callback=self.parse_detail)

        url='http://www.qiushibaike.com/pic/page/3'
        yield scrapy.Request(url,callback=self.parse_detail)

    def parse_detail(self,response):
        item=[]  
        divs=response.xpath('//*[@id="content-left"]/div[@class="article block untagged mb15"]')
        for div in divs:
            QiubaiPic=QiubaiPicItem()
            src=div.xpath('div[@class="thumb"]/a/img/@src').extract()[0]
            img_path='http://'+src[2:]   # the src attribute is protocol-relative ("//..."), so prepend the scheme
            QiubaiPic['img']=img_path    # BUG: a single string, not a list -- this is what triggers "Missing scheme"
            item.append(QiubaiPic)
        return item



After the fix:

# -*- coding: utf-8 -*-
import scrapy
from imgspider.items import QiubaiPicItem
import sys
reload(sys)
sys.setdefaultencoding( "utf-8" )
class QiubaipicSpider(scrapy.Spider):
    name = "qiubaiPic"
    allowed_domains = ["qiushibaike.com"]
    start_urls = ['http://qiushibaike.com/']

    def parse(self, response):
        # page_value=response.xpath('//*[@id="content-left"]/ul/li[8]/a/span/text()').extract()[0]
        # for page in range(1,int(page_value)):
        #     url='http://www.qiushibaike.com/pic/page/'+str(page)
        #     yield scrapy.Request(url,callback=self.parse_detail)

        url='http://www.qiushibaike.com/pic/page/3'
        yield scrapy.Request(url,callback=self.parse_detail)

    def parse_detail(self,response):
        items=[]
        img_paths=[]
        divs=response.xpath('//*[@id="content-left"]/div[@class="article block untagged mb15"]')
        for div in divs:
            src=div.xpath('div[@class="thumb"]/a/img/@src').extract()[0]
            img_path='http://'+src[2:]   # the src attribute is protocol-relative ("//..."), so prepend the scheme
            img_paths.append(img_path)
        QiubaiPic=QiubaiPicItem()        # build the item once, after collecting all the URLs
        QiubaiPic['img']=img_paths       # a list of URLs, which is what ImagesPipeline expects
        items.append(QiubaiPic)
        return items
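The post never shows items.py; a minimal sketch that matches the imported QiubaiPicItem and the img field used above would look like this:

# items.py -- a minimal sketch; the original project file is not shown in this post
import scrapy

class QiubaiPicItem(scrapy.Item):
    img = scrapy.Field()  # ends up holding a list of image URLs for ImagesPipeline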

 

settings.py:

# -*- coding: utf-8 -*-

import random

BOT_NAME = 'imgspider'

SPIDER_MODULES = ['imgspider.spiders']
NEWSPIDER_MODULE = 'imgspider.spiders'
# Pool of browser User-Agent strings; a browser UA is required for this site
USER_AGENT_LIST=[
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1" \
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6", \
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1", \
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5", \
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3", \
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3", \
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24", \
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"

]
# Pick one User-Agent at random when the settings module is loaded
ua = random.choice(USER_AGENT_LIST)
if ua:
    USER_AGENT = ua
    print ua
else:
    USER_AGENT = "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"

# Whether to obey robots.txt
ROBOTSTXT_OBEY = False
# Maximum number of concurrent requests
CONCURRENT_REQUESTS = 32
# Download delay, in seconds
DOWNLOAD_DELAY = 3
# Cookies switch; disabling them is recommended
COOKIES_ENABLED = False

# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_URLS_FIELD = 'img'         # item field that holds the list of image URLs
IMAGES_STORE = r'/home/lcy/pics'  # directory where the downloaded images are saved
LOG_FILE = "scrapy.log"
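Note that IMAGES_URLS_FIELD is only needed because the item stores its URLs in a field named img. By default ImagesPipeline reads URLs from a field called image_urls and writes the download results to images, so an alternative sketch (assuming you are free to rename the item field) is:

# Alternative sketch using ImagesPipeline's default field names,
# which makes the IMAGES_URLS_FIELD setting unnecessary.
import scrapy

class QiubaiPicItem(scrapy.Item):
    image_urls = scrapy.Field()  # ImagesPipeline reads URLs from here by default
    images = scrapy.Field()      # ImagesPipeline stores download results here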
 
