{"id":11728,"date":"2022-09-28T09:36:27","date_gmt":"2022-09-28T04:06:27","guid":{"rendered":"https:\/\/jassweb.com\/solved\/solved-web-scraping-beautifulsoup-next-page-parsing\/"},"modified":"2022-09-28T09:36:27","modified_gmt":"2022-09-28T04:06:27","slug":"solved-web-scraping-beautifulsoup-next-page-parsing","status":"publish","type":"post","link":"https:\/\/jassweb.com\/solved\/solved-web-scraping-beautifulsoup-next-page-parsing\/","title":{"rendered":"[Solved] Web Scraping &#038; BeautifulSoup &#8211; Next Page parsing"},"content":{"rendered":"<div id=\"answer-69980178\" class=\"answer js-answer accepted-answer js-accepted-answer\" data-answerid=\"69980178\" data-parentid=\"69979372\" data-score=\"2\" data-position-on-page=\"1\" data-highest-scored=\"1\" data-question-has-accepted-highest-score=\"1\" itemprop=\"acceptedAnswer\" itemscope itemtype=\"https:\/\/schema.org\/Answer\">\n<div class=\"post-layout\">\n<div class=\"votecell post-layout--left\"><\/div>\n<div class=\"answercell post-layout--right\">\n<div class=\"s-prose js-post-body\" itemprop=\"text\">\n<p>Try this:<br \/>\nIf you want a <code>CSV file<\/code>, add <code>df.to_csv(\"prod.csv\")<\/code> after the <code>print(df)<\/code> line; I have included it in the code below.<\/p>\n<pre><code>import requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\n\nheaders = {'User-Agent': 'Mozilla\/5.0'}\ntemp = []\nfor page in range(1, 20):\n    response = requests.get(\n        \"https:\/\/www.avbuyer.com\/aircraft\/private-jets\/page-{page}\".format(page=page),\n        headers=headers,\n    )\n    soup = BeautifulSoup(response.content, 'html.parser')\n    postings = soup.find_all('div', class_='grid-x list-content')\n    for post in postings:\n        plane = post.find('h2', class_='item-title').text\n        try:\n            price = post.find('div', class_='price').text\n        except AttributeError:  # some listings have no price element\n            price = \" \"\n        location = post.find('div', class_='list-item-location').text\n        for details in post.find_all('div', class_='list-other-dtl'):\n            # each &lt;li&gt; holds one detail: year, serial number, total time\n            data = [li.text for li in details.find_all('li')]\n            years, serial, total_time = data[0], data[1], data[2]\n            temp.append([plane, price, location, years, serial, total_time])\n\ndf = pd.DataFrame(temp, columns=[\"plane\", \"price\", \"location\", \"Years\", \"S\/N\", \"Totaltime\"])\nprint(df)\ndf.to_csv(\"prod.csv\", index=False)\n<\/code><\/pre>\n<p>output:<\/p>\n<pre><code>                      plane         price  ...             S\/N         Totaltime\n0            Gulfstream G280     Make offer  ...        S\/N 2007   Total Time 2528\n1    Dassault Falcon 2000LXS     Make offer  ...         S\/N 377     Total Time 33\n2       Cirrus Vision SF50 G1  Please call   ...        S\/N 0080    Total Time 615\n3              Gulfstream IV     Make offer  ...        S\/N 1148   Total Time 6425\n4            Gulfstream G280     Make offer  ...        S\/N 2072   Total Time 1918\n..                        ...           ...  ...             ...               ...\n342       Embraer Phenom 100       Now Sold  ...    S\/N 50000035   Total Time 3417\n343          Gulfstream G200       Now Sold  ...         S\/N 152   Total Time 7209\n344     Cessna Citation XLS+       Now Sold  ...           S\/N -      Total Time -\n345    Cessna Citation Ultra       Now Sold  ...    S\/N 560-0393  Total Time 12947\n346    Cessna Citation Excel       Now Sold  ...  
S\/N 560XL-5253   Total Time 4850\n<\/code><\/pre>\n<\/div>\n<div class=\"mt24\"><\/div>\n<\/div>\n<p><span class=\"d-none\" itemprop=\"commentCount\">10<\/span><\/p><\/div>\n<\/div>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[320],"tags":[622,349,760]}