
When debugging, execution never reaches the breakpoint inside parse_job(); the crawl simply runs to completion.

The console output ends with the stats below (before this there were more page logs like 2018-05-19 21:49:44 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://passport.lagou.com/login/login.html?msg=validation&uStatus=2&clientIp=113.99.220.141> from <GET https://www.lagou.com/zhaopin/CTO/>, including ones for jobs URLs):

2018-05-19 21:49:44 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 131331,
 'downloader/request_count': 329,
 'downloader/request_method_count/GET': 329,
 'downloader/response_bytes': 189515,
 'downloader/response_count': 329,
 'downloader/response_status_count/200': 4,
 'downloader/response_status_count/302': 325,
 'dupefilter/filtered': 323,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 5, 19, 13, 49, 44, 206308),
 'log_count/DEBUG': 331,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 4,
 'scheduler/dequeued': 327,
 'scheduler/dequeued/memory': 327,
 'scheduler/enqueued': 327,
 'scheduler/enqueued/memory': 327,
 'start_time': datetime.datetime(2018, 5, 19, 13, 49, 28, 666419)}
2018-05-19 21:49:44 [scrapy.core.engine] INFO: Spider closed (finished)


2 Answers

bobby 2018-05-21 14:32:39

You're getting 302s. You can use Selenium to simulate the login first, then re-crawl with the resulting cookies.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import time
import pickle
import sys
import io


class LagouSpider(CrawlSpider):
    name = 'lagou_sel'
    allowed_domains = ['www.lagou.com']
    start_urls = ['https://www.lagou.com/']
    headers = {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "zh-CN,zh;q=0.8",
        "Connection": "keep-alive",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36",
        "Referer": 'https://www.lagou.com',
        "HOST": "www.lagou.com"
    }
    custom_settings = {
        "COOKIES_ENABLED": True
    }
    rules = (
        Rule(LinkExtractor(allow=r'gongsi/j/\d+\.html'), follow=True),
        Rule(LinkExtractor(allow=r'zhaopin/.*'), follow=True),
        Rule(LinkExtractor(allow=r'jobs/\d+\.html'), callback='parse_job', follow=True),
    )

    def parse_job(self, response):
        # The rule above names this callback; the actual parsing logic lives
        # in the full course spider. A stub keeps this snippet runnable.
        pass

    def start_requests(self):
        from selenium import webdriver

        # Work around GBK encoding errors in the Windows console.
        sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

        # Disable image loading so pages come up faster.
        chrome_opt = webdriver.ChromeOptions()
        prefs = {"profile.managed_default_content_settings.images": 2}
        chrome_opt.add_experimental_option("prefs", prefs)
        browser = webdriver.Chrome(executable_path="E:/tmp/chromedriver.exe",
                                   chrome_options=chrome_opt)

        # Log in through the real login page.
        browser.get("https://passport.lagou.com/login/login.html?service=https%3a%2f%2fwww.lagou.com%2f")
        browser.find_elements_by_css_selector(".input.input_white")[0].send_keys("xxx")  # username
        browser.find_elements_by_css_selector(".input.input_white")[1].send_keys("xx")   # password
        browser.find_element_by_css_selector(".btn.btn_green.btn_active.btn_block.btn_lg").click()
        time.sleep(10)  # wait for the login redirect to complete

        # Persist each cookie to disk and collect name -> value pairs for Scrapy.
        cookies = browser.get_cookies()
        cookie_dict = {}
        for cookie in cookies:
            with open('H:/慕课网课程/python爬虫/课程源码最终版/ArticleSpider/cookies123'
                      + cookie['name'] + '.lagou', 'wb') as f:
                pickle.dump(cookie, f)
            cookie_dict[cookie['name']] = cookie['value']
        browser.close()

        # Hand the logged-in cookies to the first Scrapy request.
        return [scrapy.Request(url=self.start_urls[0], dont_filter=True, cookies=cookie_dict)]
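For subsequent runs you could skip the browser login entirely and rebuild the cookie dict from the pickle files the loop above writes. A minimal sketch under that assumption; load_saved_cookies and cookie_dir are illustrative names, not part of the course code:

import os
import pickle

def load_saved_cookies(cookie_dir):
    # Rebuild the name -> value dict from the per-cookie pickle files
    # written by start_requests() above (cookies123<name>.lagou).
    cookie_dict = {}
    for file_name in os.listdir(cookie_dir):
        if file_name.endswith('.lagou'):
            with open(os.path.join(cookie_dir, file_name), 'rb') as f:
                cookie = pickle.load(f)
            cookie_dict[cookie['name']] = cookie['value']
    return cookie_dict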


2 replies
  • Awesome, the instructor's reply really delivers.
    2018-05-21 20:55:43
  • OP 慕用5281994 #2
    Thanks a lot!
    2018-05-24 21:40:07
  • OP 慕用5281994 #3
    After pulling cookies out of the browser, is it generally enough to pass just the name and value fields? Is that the case for most sites? (See the sketch below.)
    2018-05-24 21:41:08
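For reference, Scrapy's Request accepts cookies either as a plain name -> value mapping or as a list of dicts that also carry domain and path, so name/value alone is enough for most sites. A minimal sketch (the URL and cookie values are placeholders):

import scrapy

# Plain mapping: name -> value covers the common case.
req = scrapy.Request("https://www.lagou.com/", cookies={"JSESSIONID": "abc123"})

# List-of-dicts form when domain/path scoping matters.
req = scrapy.Request(
    "https://www.lagou.com/",
    cookies=[{"name": "JSESSIONID", "value": "abc123",
              "domain": "www.lagou.com", "path": "/"}],
)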
qq_AGGRESSIVE_0 2018-05-20 15:06:59

I ran into this too. You can verify whether the method is actually entered by adding a print() inside parse_job(); see the sketch below.
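A minimal sketch of that check (the rest of parse_job is assumed):

    def parse_job(self, response):
        # If this never prints, the callback is not being reached --
        # here because the jobs requests were 302-redirected to the login page.
        print("parse_job entered:", response.url)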

0 replies