Project Title: Scrape Posts Daily from Reddit Pages
Project Description:
I will provide a list of 8,000 Reddit page URLs.
First, you must open each page, sort by New, scroll down, and count all unique posts on each page, gathering the following data:
Reddit Post id
Posted by Name
Post Title
Date / time of Post
Date of Scrape
Number of Comments
Number of Lightbulbs
% Upvoted
Category
Then, for all 8,000 pages, you must open each page every 4 hours, sort by New, scroll down through the most recent 4 hours of posts, and collect the same post data as above.
You repeat this continuously so that we have complete, unique post details for every page.
We don’t want any promoted posts.
For similar work requirements, feel free to email us at info@logicwis.com.
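A minimal Python sketch of collecting these fields for one subreddit page, assuming access through the official Reddit API via PRAW rather than browser scrolling; the credentials are placeholders, and the mappings of "Number of Lightbulbs" to the post score and "Category" to the link flair are assumptions. Promoted posts do not appear in API listings, so no extra filtering is shown.

```python
import csv
from datetime import datetime, timezone

import praw  # pip install praw

# Placeholder credentials; a registered Reddit app is assumed.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="subreddit-post-collector/0.1",
)

FIELDS = [
    "post_id", "posted_by", "title", "posted_utc", "scraped_utc",
    "num_comments", "score", "percent_upvoted", "category",
]

def collect_new_posts(subreddit_name, writer):
    """Walk a subreddit sorted by New and write one row per unique post."""
    scraped_at = datetime.now(timezone.utc).isoformat()
    for post in reddit.subreddit(subreddit_name).new(limit=None):
        writer.writerow({
            "post_id": post.id,
            "posted_by": str(post.author) if post.author else "[deleted]",
            "title": post.title,
            "posted_utc": datetime.fromtimestamp(
                post.created_utc, tz=timezone.utc).isoformat(),
            "scraped_utc": scraped_at,
            "num_comments": post.num_comments,
            "score": post.score,  # assumed meaning of "Number of Lightbulbs"
            "percent_upvoted": round(post.upvote_ratio * 100),
            "category": post.link_flair_text or "",  # assumed "Category"
        })

if __name__ == "__main__":
    with open("posts.csv", "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        writer.writeheader()
        collect_new_posts("oddlysatisfying", writer)  # example subreddit
```

For the 4-hourly pass, the same loop could be scheduled externally (e.g. cron) and stopped once post.created_utc falls outside the last 4 hours; note that New listings are capped at roughly 1,000 posts per subreddit by the API.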
Hey there!
Our brand was on the front page of Reddit, and we would like to analyze the comment data to identify conversation trends.
We would just like the raw comment data in a .csv or .xls if possible.
Here is the thread we’d like to scrape: https://www.reddit.com/r/oddlysatisfying/comments/hbnyj1/soda_design_done_right/
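A short Python sketch of exporting the raw comments of that thread to CSV, assuming Reddit API access via PRAW; the credentials are placeholders, and the column set is a guess at what "raw comment data" should include.

```python
import csv
from datetime import datetime, timezone

import praw  # pip install praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="thread-comment-export/0.1",
)

URL = ("https://www.reddit.com/r/oddlysatisfying/comments/"
       "hbnyj1/soda_design_done_right/")

submission = reddit.submission(url=URL)
submission.comments.replace_more(limit=None)  # expand all "load more" stubs

with open("comments.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["comment_id", "parent_id", "author", "created_utc",
                     "score", "body"])
    for comment in submission.comments.list():  # flattened comment tree
        writer.writerow([
            comment.id,
            comment.parent_id,
            str(comment.author) if comment.author else "[deleted]",
            datetime.fromtimestamp(comment.created_utc,
                                   tz=timezone.utc).isoformat(),
            comment.score,
            comment.body,
        ])
```

The resulting .csv opens directly in Excel, which covers the .xls request without a separate conversion step.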
Hello there,
I’m running a stock analysis company.
I need a script on a Google Spreadsheet that automatically collects posts from a subreddit, along with their relevant details, every 4-6 hours.
Can you do this?
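A script "on" the spreadsheet would normally be Google Apps Script with a time-driven trigger; purely to stay in one language across these sketches, below is a hedged Python alternative that reads the subreddit with PRAW and appends rows to the sheet via gspread, intended to be run every 4-6 hours by cron. The spreadsheet name, service-account credentials file, subreddit, and column layout are all assumptions.

```python
from datetime import datetime, timezone

import gspread  # pip install gspread
import praw     # pip install praw

# Assumed credentials and names; adjust to the real spreadsheet and app.
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("Stock Subreddit Posts").sheet1

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="sheet-post-collector/0.1",
)

def append_recent_posts(subreddit_name, limit=100):
    """Append the newest posts of a subreddit as rows in the sheet."""
    scraped_at = datetime.now(timezone.utc).isoformat()
    rows = []
    for post in reddit.subreddit(subreddit_name).new(limit=limit):
        rows.append([
            post.id,
            str(post.author) if post.author else "[deleted]",
            post.title,
            datetime.fromtimestamp(post.created_utc,
                                   tz=timezone.utc).isoformat(),
            scraped_at,
            post.num_comments,
            post.score,
        ])
    if rows:
        worksheet.append_rows(rows, value_input_option="RAW")

if __name__ == "__main__":
    # Schedule this script every 4-6 hours (e.g. via cron) rather than looping.
    append_recent_posts("stocks")  # example subreddit
```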
I have a Wix website with a few blog posts, and I need someone to scrape each of these posts and add the information to a spreadsheet. I will provide a spreadsheet template for you to use; it will have columns for Title, Content, Tags, and Image URL.
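A hedged Python sketch of pulling such posts into the template's columns with requests and BeautifulSoup; the post URLs and CSS selectors are placeholders, since Wix blog markup varies by theme and would need to be inspected first.

```python
import csv

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Placeholder URLs; the real list would come from the client's site.
POST_URLS = [
    "https://www.example-wix-site.com/post/first-post",
    "https://www.example-wix-site.com/post/second-post",
]

with open("blog_posts.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Title", "Content", "Tags", "Image URL"])
    for url in POST_URLS:
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        heading = soup.find("h1")
        title = heading.get_text(strip=True) if heading else ""
        content = " ".join(p.get_text(strip=True) for p in soup.find_all("p"))
        # og:image is usually present and serves as a fallback for the image URL.
        image = soup.find("meta", property="og:image")
        image_url = image.get("content", "") if image else ""
        # Assumed tag-link pattern; adjust after inspecting the theme's markup.
        tags = ", ".join(a.get_text(strip=True)
                         for a in soup.select("a[href*='/blog/tags/']"))
        writer.writerow([title, content, tags, image_url])
```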
Hi,
I require all the blog posts that can be found on the Wayback Machine for the website http://www.ecospeed.co.uk/blog/.
Snapshots from any dates in 2018 and 2021 are fine to copy the data from.
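A minimal Python sketch of listing the archived blog URLs for those two years via the Wayback Machine's CDX API, assuming one snapshot per distinct URL is enough; downloading and parsing each snapshot would then follow the same pattern as an ordinary page scrape.

```python
import requests

CDX = "http://web.archive.org/cdx/search/cdx"

def wayback_snapshots(prefix, year):
    """List archived snapshot URLs under a URL prefix for a given year."""
    params = {
        "url": prefix,
        "matchType": "prefix",
        "from": f"{year}0101",
        "to": f"{year}1231",
        "output": "json",
        "fl": "timestamp,original",
        "filter": "statuscode:200",
        "collapse": "urlkey",  # keep one snapshot per distinct URL
    }
    resp = requests.get(CDX, params=params, timeout=60)
    rows = resp.json() if resp.text.strip() else []
    # First row is the header; each capture is replayable at this URL form.
    return [f"https://web.archive.org/web/{ts}/{orig}" for ts, orig in rows[1:]]

if __name__ == "__main__":
    for year in (2018, 2021):
        for snapshot_url in wayback_snapshots("ecospeed.co.uk/blog/", year):
            print(snapshot_url)
```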
I need a scraper to collect Twitter data based on a hashtag search.
It should work like this:
A user enters a hashtag into a text field and submits the form on my website; the scraper then searches Twitter for that hashtag, fetches all matching media, and shows the results on our website (nothing is stored in the database until the user clicks a SAVE button).
Let me know if you can help with this.
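A hedged sketch of the server side in Python with Flask, using the official Twitter API v2 recent-search endpoint instead of scraping the site; the form field name and bearer token are assumptions, and the media query operators assume an API plan that permits them. Results are returned straight to the page and nothing is written to a database, matching the save-on-demand requirement.

```python
import os

import requests
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # assumed API access

@app.route("/search", methods=["POST"])
def search_hashtag():
    """Take a hashtag from the submitted form and return matching tweets/media.

    Nothing is persisted here; saving would live in a separate endpoint
    triggered by the SAVE button.
    """
    hashtag = request.form["hashtag"].lstrip("#")  # assumed form field name
    params = {
        "query": f"#{hashtag} has:media -is:retweet",
        "expansions": "attachments.media_keys",
        "media.fields": "url,preview_image_url,type",
        "max_results": 50,
    }
    resp = requests.get(
        SEARCH_URL,
        params=params,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    return jsonify({
        "tweets": payload.get("data", []),
        "media": payload.get("includes", {}).get("media", []),
    })

if __name__ == "__main__":
    app.run(debug=True)
```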