{"id":7790,"date":"2022-09-10T09:58:16","date_gmt":"2022-09-10T04:28:16","guid":{"rendered":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/"},"modified":"2022-09-10T09:58:16","modified_gmt":"2022-09-10T04:28:16","slug":"solved-scraping-project-euler-site-with-scrapy-closed","status":"publish","type":"post","link":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/","title":{"rendered":"[Solved] Scraping Project Euler site with scrapy [closed]"},"content":{"rendered":"<div id=\"answer-48814417\" class=\"answer js-answer accepted-answer js-accepted-answer\" data-answerid=\"48814417\" data-parentid=\"48771856\" data-score=\"0\" data-position-on-page=\"1\" data-highest-scored=\"1\" data-question-has-accepted-highest-score=\"1\" itemprop=\"acceptedAnswer\" itemscope itemtype=\"https:\/\/schema.org\/Answer\">\n<div class=\"post-layout\">\n<div class=\"votecell post-layout--left\"><\/div>\n<div class=\"answercell post-layout--right\">\n<div class=\"s-prose js-post-body\" itemprop=\"text\">\n<p>I think I have found the simplest yet fitting solution (at least for my purposes), compared to the existing code written to scrape Project Euler:<\/p>\n<pre><code># -*- coding: utf-8 -*-\nimport scrapy\nfrom eulerscraper.items import Problem\nfrom scrapy.loader import ItemLoader\n\n\nclass EulerSpider(scrapy.Spider):\n    name = 'euler'\n    allowed_domains = ['projecteuler.net']\n    start_urls = [\"https:\/\/projecteuler.net\/archives\"]\n\n    def parse(self, response):\n        numpag = response.css(\"div.pagination a[href]::text\").extract()\n        maxpag = int(numpag[-1])\n\n        for href in response.css(\"table#problems_table a::attr(href)\").extract():\n            next_page = \"https:\/\/projecteuler.net\/\" + href\n            yield response.follow(next_page, self.parse_problems)\n\n        for i in range(2, maxpag + 1):\n            next_page = 
\"https:\/\/projecteuler.net\/archives;page=\" + str(i)\n            yield response.follow(next_page, self.parse_next)\n\n    def parse_next(self, response):\n        for href in response.css(\"table#problems_table a::attr(href)\").extract():\n            next_page = \"https:\/\/projecteuler.net\/\" + href\n            yield response.follow(next_page, self.parse_problems)\n\n    def parse_problems(self, response):\n        l = ItemLoader(item=Problem(), response=response)\n        l.add_css(\"title\", \"h2\")\n        l.add_css(\"id\", \"#problem_info\")\n        l.add_css(\"content\", \".problem_content\")\n\n        yield l.load_item()\n<\/code><\/pre>\n<p>From the start page (the archives) I follow every single link to a problem, scraping the data I need with <code>parse_problems<\/code>. Then I launch the scraper on the other pages of the site, following the same procedure for every list of links.<br \/>\nThe Item definition, with its pre- and post-processors, is also very clean:<\/p>\n<pre><code>import re\n\nimport scrapy\nfrom scrapy.loader.processors import MapCompose, Compose\nfrom w3lib.html import remove_tags\n\n\ndef extract_first_number(text):\n    i = re.search(r'\d+', text)\n    return int(i.group())\n\n\ndef array_to_value(element):\n    return element[0]\n\n\nclass Problem(scrapy.Item):\n    id = scrapy.Field(\n        input_processor=MapCompose(remove_tags, extract_first_number),\n        output_processor=Compose(array_to_value)\n    )\n    title = scrapy.Field(input_processor=MapCompose(remove_tags))\n    content = scrapy.Field()\n<\/code><\/pre>\n<p>I launch this with the command <code>scrapy crawl euler -o euler.json<\/code> and it outputs an array of unordered JSON objects, each corresponding to a single problem: this is fine for me because I&#8217;m going to process it with JavaScript, even if I think solving the ordering problem via Scrapy can be 
very simple.<\/p>\n<p>EDIT: in fact it is simple, using this pipeline:<\/p>\n<pre><code>import json\n\nclass JsonWriterPipeline(object):\n\n    def open_spider(self, spider):\n        self.list_items = []\n        self.file = open('euler.json', 'w')\n\n    def close_spider(self, spider):\n        ordered_list = [None] * len(self.list_items)\n\n        for i in self.list_items:\n            ordered_list[int(i['id']) - 1] = json.dumps(dict(i))\n\n        # join with commas so there is no trailing comma,\n        # which would make the resulting JSON invalid\n        self.file.write(\"[\\n\")\n        self.file.write(\",\\n\".join(ordered_list))\n        self.file.write(\"\\n]\\n\")\n        self.file.close()\n\n    def process_item(self, item, spider):\n        self.list_items.append(item)\n        return item\n<\/code><\/pre>\n<p>though the best solution may be to create a custom exporter:<\/p>\n<pre><code>from scrapy.exporters import JsonItemExporter\nfrom scrapy.utils.python import to_bytes\n\n\nclass OrderedJsonItemExporter(JsonItemExporter):\n\n    def __init__(self, file, **kwargs):\n        # initialize via JsonItemExporter's constructor,\n        # forwarding any keyword arguments\n        super().__init__(file, **kwargs)\n        self.list_items = []\n\n    def export_item(self, item):\n        # buffer the items so they can be written in order at the end\n        self.list_items.append(item)\n\n    def finish_exporting(self):\n        ordered_list = [None] * len(self.list_items)\n\n        for i in self.list_items:\n            ordered_list[int(i['id']) - 1] = i\n\n        for i in ordered_list:\n            if self.first_item:\n                self.first_item = False\n            else:\n                self.file.write(b',')\n                self._beautify_newline()\n            itemdict = dict(self._get_serialized_fields(i))\n            data = self.encoder.encode(itemdict)\n            self.file.write(to_bytes(data, self.encoding))\n\n        self._beautify_newline()\n        self.file.write(b\"]\")\n<\/code><\/pre>\n<p>and configure it in the settings so it is used for JSON feeds:<\/p>\n<pre><code>FEED_EXPORTERS = {\n    'json': 
'eulerscraper.exporters.OrderedJsonItemExporter',\n}\n<\/code><\/pre>\n<\/p><\/div>\n<div class=\"mt24\"><\/div>\n<\/div>\n<p>            <span class=\"d-none\" itemprop=\"commentCount\"><\/span> <\/p><\/div>\n<\/div>\n<p>[ad_2]<\/p>\n<p>solved Scraping Project Euler site with scrapy [closed] <\/p>\n","protected":false},"excerpt":{"rendered":"<p>[ad_1] I think I have found a simplest yet fitting solution (at least for my purpose), in respect to existent code written to scrape projecteuler: # -*- coding: utf-8 -*- import scrapy from eulerscraper.items import Problem from scrapy.loader import ItemLoader class EulerSpider(scrapy.Spider): name = &#8220;euler&#8217; allowed_domains = [&#8216;projecteuler.net&#8217;] start_urls = [&#8220;https:\/\/projecteuler.net\/archives&#8221;] def parse(self, response): numpag &#8230; <a title=\"[Solved] Scraping Project Euler site with scrapy [closed]\" class=\"read-more\" href=\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\" aria-label=\"More on [Solved] Scraping Project Euler site with scrapy [closed]\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[320],"tags":[334,346,349,2182,760],"class_list":["post-7790","post","type-post","status-publish","format-standard","hentry","category-solved","tag-ajax","tag-html","tag-python","tag-scrapy","tag-web-scraping"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>[Solved] Scraping Project Euler site with scrapy [closed] - JassWeb<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\" \/>\n<meta property=\"og:locale\" 
content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"[Solved] Scraping Project Euler site with scrapy [closed] - JassWeb\" \/>\n<meta property=\"og:description\" content=\"[ad_1] I think I have found a simplest yet fitting solution (at least for my purpose), in respect to existent code written to scrape projecteuler: # -*- coding: utf-8 -*- import scrapy from eulerscraper.items import Problem from scrapy.loader import ItemLoader class EulerSpider(scrapy.Spider): name = &quot;euler&#039; allowed_domains = [&#039;projecteuler.net&#039;] start_urls = [&quot;https:\/\/projecteuler.net\/archives&quot;] def parse(self, response): numpag ... Read more\" \/>\n<meta property=\"og:url\" content=\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\" \/>\n<meta property=\"og:site_name\" content=\"JassWeb\" \/>\n<meta property=\"article:published_time\" content=\"2022-09-10T04:28:16+00:00\" \/>\n<meta name=\"author\" content=\"Kirat\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Kirat\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\"},\"author\":{\"name\":\"Kirat\",\"@id\":\"https:\/\/jassweb.com\/solved\/#\/schema\/person\/65c9c7b7958150c0dc8371fa35dd7c31\"},\"headline\":\"[Solved] Scraping Project Euler site with scrapy [closed]\",\"datePublished\":\"2022-09-10T04:28:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\"},\"wordCount\":174,\"publisher\":{\"@id\":\"https:\/\/jassweb.com\/solved\/#organization\"},\"keywords\":[\"ajax\",\"html\",\"python\",\"scrapy\",\"web-scraping\"],\"articleSection\":[\"Solved\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\",\"url\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\",\"name\":\"[Solved] Scraping Project Euler site with scrapy [closed] - 
JassWeb\",\"isPartOf\":{\"@id\":\"https:\/\/jassweb.com\/solved\/#website\"},\"datePublished\":\"2022-09-10T04:28:16+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/jassweb.com\/solved\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"[Solved] Scraping Project Euler site with scrapy [closed]\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/jassweb.com\/solved\/#website\",\"url\":\"https:\/\/jassweb.com\/solved\/\",\"name\":\"JassWeb\",\"description\":\"Build High-quality Websites\",\"publisher\":{\"@id\":\"https:\/\/jassweb.com\/solved\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/jassweb.com\/solved\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/jassweb.com\/solved\/#organization\",\"name\":\"Jass Web\",\"url\":\"https:\/\/jassweb.com\/solved\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jassweb.com\/solved\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/jassweb.com\/wp-content\/uploads\/2021\/02\/jass-website-logo-1.png\",\"contentUrl\":\"https:\/\/jassweb.com\/wp-content\/uploads\/2021\/02\/jass-website-logo-1.png\",\"width\":693,\"height\":132,\"caption\":\"Jass 
Web\"},\"image\":{\"@id\":\"https:\/\/jassweb.com\/solved\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/jassweb.com\/solved\/#\/schema\/person\/65c9c7b7958150c0dc8371fa35dd7c31\",\"name\":\"Kirat\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/jassweb.com\/solved\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/jassweb.com\/solved\/wp-content\/litespeed\/avatar\/1261af3c9451399fa1336d28b98ea3bb.jpg?ver=1775798750\",\"contentUrl\":\"https:\/\/jassweb.com\/solved\/wp-content\/litespeed\/avatar\/1261af3c9451399fa1336d28b98ea3bb.jpg?ver=1775798750\",\"caption\":\"Kirat\"},\"sameAs\":[\"http:\/\/jassweb.com\"],\"url\":\"https:\/\/jassweb.com\/solved\/author\/jaspritsinghghumangmail-com\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"[Solved] Scraping Project Euler site with scrapy [closed] - JassWeb","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/","og_locale":"en_US","og_type":"article","og_title":"[Solved] Scraping Project Euler site with scrapy [closed] - JassWeb","og_description":"[ad_1] I think I have found a simplest yet fitting solution (at least for my purpose), in respect to existent code written to scrape projecteuler: # -*- coding: utf-8 -*- import scrapy from eulerscraper.items import Problem from scrapy.loader import ItemLoader class EulerSpider(scrapy.Spider): name = \"euler' allowed_domains = ['projecteuler.net'] start_urls = [\"https:\/\/projecteuler.net\/archives\"] def parse(self, response): numpag ... 
Read more","og_url":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/","og_site_name":"JassWeb","article_published_time":"2022-09-10T04:28:16+00:00","author":"Kirat","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Kirat","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/#article","isPartOf":{"@id":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/"},"author":{"name":"Kirat","@id":"https:\/\/jassweb.com\/solved\/#\/schema\/person\/65c9c7b7958150c0dc8371fa35dd7c31"},"headline":"[Solved] Scraping Project Euler site with scrapy [closed]","datePublished":"2022-09-10T04:28:16+00:00","mainEntityOfPage":{"@id":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/"},"wordCount":174,"publisher":{"@id":"https:\/\/jassweb.com\/solved\/#organization"},"keywords":["ajax","html","python","scrapy","web-scraping"],"articleSection":["Solved"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/","url":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/","name":"[Solved] Scraping Project Euler site with scrapy [closed] - 
JassWeb","isPartOf":{"@id":"https:\/\/jassweb.com\/solved\/#website"},"datePublished":"2022-09-10T04:28:16+00:00","breadcrumb":{"@id":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/jassweb.com\/solved\/solved-scraping-project-euler-site-with-scrapy-closed\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/jassweb.com\/solved\/"},{"@type":"ListItem","position":2,"name":"[Solved] Scraping Project Euler site with scrapy [closed]"}]},{"@type":"WebSite","@id":"https:\/\/jassweb.com\/solved\/#website","url":"https:\/\/jassweb.com\/solved\/","name":"JassWeb","description":"Build High-quality Websites","publisher":{"@id":"https:\/\/jassweb.com\/solved\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/jassweb.com\/solved\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/jassweb.com\/solved\/#organization","name":"Jass Web","url":"https:\/\/jassweb.com\/solved\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/jassweb.com\/solved\/#\/schema\/logo\/image\/","url":"https:\/\/jassweb.com\/wp-content\/uploads\/2021\/02\/jass-website-logo-1.png","contentUrl":"https:\/\/jassweb.com\/wp-content\/uploads\/2021\/02\/jass-website-logo-1.png","width":693,"height":132,"caption":"Jass 
Web"},"image":{"@id":"https:\/\/jassweb.com\/solved\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/jassweb.com\/solved\/#\/schema\/person\/65c9c7b7958150c0dc8371fa35dd7c31","name":"Kirat","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/jassweb.com\/solved\/#\/schema\/person\/image\/","url":"https:\/\/jassweb.com\/solved\/wp-content\/litespeed\/avatar\/1261af3c9451399fa1336d28b98ea3bb.jpg?ver=1775798750","contentUrl":"https:\/\/jassweb.com\/solved\/wp-content\/litespeed\/avatar\/1261af3c9451399fa1336d28b98ea3bb.jpg?ver=1775798750","caption":"Kirat"},"sameAs":["http:\/\/jassweb.com"],"url":"https:\/\/jassweb.com\/solved\/author\/jaspritsinghghumangmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/posts\/7790","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/comments?post=7790"}],"version-history":[{"count":0,"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/posts\/7790\/revisions"}],"wp:attachment":[{"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/media?parent=7790"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/categories?post=7790"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jassweb.com\/solved\/wp-json\/wp\/v2\/tags?post=7790"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}