[Solved] How to crawl the url of url in scrapy?


I finally got this working. Follow the code below to crawl values from the URL of a URL, i.e. follow each link extracted from the first page and scrape the detail pages it points to.

from scrapy import Request

def parse(self, response):
    # ProductItem is the item class defined in the project's items.py
    item = ProductItem()
    # collect every detail-page href from the listing page
    url_list = response.xpath("//div[@class='listing']/div/a/@href").extract()
    item['product_DetailUrl'] = url_list
    for url in url_list:
        # follow each detail page and pass the item along via request.meta
        request = Request(response.urljoin(url), callback=self.page2_parse)
        request.meta['item'] = item
        yield request

def page2_parse(self, response):
    # retrieve the item that was attached to the request in parse()
    item = response.meta['item']
    item['product_ColorAvailability'] = response.xpath(
        "//div[@id='templateOption']//ul/li//img/@color").extract()
    yield item
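
For reference, here is a minimal self-contained sketch of how the two callbacks fit into a complete spider. The spider name, start URL, and the inline ProductItem definition are hypothetical placeholders (in a real project the item would live in items.py); only the parse/page2_parse logic comes from the answer above.

import scrapy
from scrapy import Request

class ProductItem(scrapy.Item):
    # hypothetical inline item definition for a runnable example
    product_DetailUrl = scrapy.Field()
    product_ColorAvailability = scrapy.Field()

class ProductSpider(scrapy.Spider):
    name = 'products'                              # hypothetical spider name
    start_urls = ['http://example.com/listing']    # hypothetical listing page

    def parse(self, response):
        item = ProductItem()
        item['product_DetailUrl'] = response.xpath(
            "//div[@class='listing']/div/a/@href").extract()
        for url in item['product_DetailUrl']:
            request = Request(response.urljoin(url), callback=self.page2_parse)
            request.meta['item'] = item   # carry the item to the next callback
            yield request

    def page2_parse(self, response):
        item = response.meta['item']
        item['product_ColorAvailability'] = response.xpath(
            "//div[@id='templateOption']//ul/li//img/@color").extract()
        yield item

Note that every detail request shares the same item instance here, so each callback overwrites the same fields; if you need one item per detail page, creating a fresh ProductItem inside the loop (or copying it) is a common alternative.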
