Everyone loves the idea of scraping; no one likes maintaining scrapers that break once a week because the CSS or HTML changed.
This one. One of the best motivators. The sense of satisfaction when you get it working makes you feel unstoppable (until the next subtle change happens, anyway)
I loved scraping until my IP was blocked for botting lol. I know there are ways around it, it's just work though
I successfully scraped millions of Amazon product listings simply by routing through TOR and cycling the exit node every 10 seconds.
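The "cycle the exit node every 10 seconds" part can be sketched generically. This is a hypothetical stdlib-only sketch of the rotation logic: `fetch` and `rotate` are stand-in callbacks (in the Tor setup described above, `rotate` would send a NEWNYM signal on Tor's control port, which is not shown here), and the class name is made up.

```python
import time

class RotatingFetcher:
    """Call `rotate` (e.g. request a new Tor circuit) once `interval`
    seconds have passed since the last rotation, then fetch via `fetch`."""

    def __init__(self, fetch, rotate, interval=10.0, clock=time.monotonic):
        self.fetch = fetch
        self.rotate = rotate
        self.interval = interval
        self.clock = clock
        self._last = clock()  # time of the last identity rotation

    def get(self, url):
        now = self.clock()
        if now - self._last >= self.interval:
            self.rotate()      # swap identity before the next request
            self._last = now
        return self.fetch(url)
```

Injecting `clock` keeps the timing logic testable without actually sleeping.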
Or in the case of Wikipedia, every table on successive pages of sequential data is formatted differently.
Relevant SO post. https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags#1732454
13 years ago my god. I wonder what Jon Skeet is doing these days.
I remember when he passed me in the reputation ranking back in the early days and thinking that I needed to be a little bit more active on the site to catch him lol.
In short, it's the wrong tool for the job.
In practice, if your target is very limited and consistent, it's probably fine. But as a general statement about someone's behavior, it really sounds like someone is wasting a lot of time and regularly getting sub-par results.
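To make the "wrong tool" point concrete, here's a small stdlib-only sketch: a naive regex silently misses a link the moment the markup gets slightly messy, while `html.parser` handles it fine. The sample HTML is made up.

```python
import re
from html.parser import HTMLParser

html = '<a href="/a">one</a> <a class="x" href = "/b" >two</a>'

# Naive regex: only matches the tidy first tag, misses the second one
# because of the extra attribute and the spaces around "=".
regex_hrefs = re.findall(r'<a href="([^"]+)">', html)

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags, however they're formatted."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href")

collector = LinkCollector()
collector.feed(html)
```

A real parser tokenizes attributes properly, so attribute order and whitespace just don't matter.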
Just a heads up for companies thinking it's wrong to scrape: if you don't want info to be scraped, don't put it on the internet.
Much less beholden to arbitrary rules, too. Way too many times companies will just up and pull their API access or push through new restrictions. No ty, I'll just access it myself then
API starter kit
Hold on, I thought it was supposed to be realism on the virgin's behalf and ridiculous nonsense on the chad's behalf:
All I see is realism on both sides lol
I created a shitty script (with ChatGPT's help) that uses Selenium and can dump a Confluence page from work, all its subpages and all linked Google Drive documents.
When a customer needs a part replaced, they send in shipping data. This data has to be entered into 3-4 different web forms and an email. This allows me to automate it all from a single form that has built-in error checking, so human mistakes are limited.
Company could probably automate this all in the backend but they won’t :shrug:
Using Selenium for this is probably overkill. You might be better off sending direct HTTP requests with your form data. This way you don't actually have to spin up an entire browser to perform that simple operation for you.
That said, if it works - it works!
I'm guessing forms like this have CSRF protection, so you'd probably have to obtain that token and hope they don't make a new one on every request.
Good point. This is also possible to overcome with one additional HTTP request and some HTML parsing. Still less overhead than running Selenium! In any event, I was replying in a general sense: Selenium is easy to understand and seems like an intuitive solution to a simple problem. In 99% of cases some additional effort will result in a more efficient solution.
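The two-request flow described here might look something like this stdlib-only sketch: GET the form page, pull the CSRF token out of a hidden input, then POST the fields along with it. The field name `csrf_token` and everything else site-specific are assumptions; real forms differ.

```python
from html.parser import HTMLParser
from urllib.parse import urlencode
from urllib.request import Request, urlopen

class TokenFinder(HTMLParser):
    """Grab the value of a hidden input named 'csrf_token' (name assumed)."""
    def __init__(self):
        super().__init__()
        self.token = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("name") == "csrf_token":
            self.token = a.get("value")

def submit_form(form_url, fields):
    """Request 1: fetch the form and extract the token.
    Request 2: POST the form data with the token included."""
    page = urlopen(form_url).read().decode()
    finder = TokenFinder()
    finder.feed(page)
    data = urlencode({**fields, "csrf_token": finder.token}).encode()
    return urlopen(Request(form_url, data=data))  # data= makes this a POST
```

Two lightweight HTTP requests instead of a whole browser session, as long as the site doesn't require JavaScript to render the form.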
I used Twitter Scraper to get twitter data for my thesis. Shortly after, it became obsolete.
https://github.com/taspinar/twitterscraper/issues/368 rip twitter scraper
I wanted to build a Discord bot that would check NIST for new CVEs every 24 hours. But their API leaves quiiiiiiite a bit to be desired.
Their pages, however…
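A toy sketch of the "check for new CVEs" idea: pull CVE IDs out of fetched page text with a pattern and report only the ones not seen before. The fetching and the Discord side are omitted, and the sample text in the test is made up.

```python
import re

# CVE IDs have the form CVE-YYYY-NNNN (4 or more digits in the sequence part).
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def new_cves(page_text, seen):
    """Return CVE IDs in page_text not already in `seen`, and record them."""
    found = sorted(set(CVE_RE.findall(page_text)))
    fresh = [c for c in found if c not in seen]
    seen.update(fresh)
    return fresh
```

Run this on a 24-hour timer against the fetched page and post whatever `new_cves` returns to the bot.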
It’s all fun and games until you have to support all this shit and it breaks weekly!
That being said, I do miss the simplicity of maintaining selenium projects for work
I use scrapy. It has a steeper learning curve than other libraries, but it's totally worth it.
Fuck, I think I've been doing it wrong and this meme gave me more things to learn than any YouTube video has
Let's see what WEI (if implemented) will do to the scrapers. The future doesn't look promising.
Websites and services create APIs for programmers to use them. So Spotify has code that lets you build a program that can use its features. But you need a token they give you after you sign up. The token can be revoked and used to monitor how much of their service you're using. That way they can restrict it if it's too much.
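In code, that token-gated access usually just means every request carries the service-issued token in a header (the common OAuth2 "bearer" style). A minimal stdlib sketch, with a made-up endpoint and an obviously fake token:

```python
from urllib.request import Request

def api_request(url, token):
    """Build a request that identifies itself with the service's token.
    The service can rate-limit or revoke access per token."""
    return Request(url, headers={"Authorization": f"Bearer {token}"})

# Hypothetical endpoint; a real one would come from the API's docs.
req = api_request("https://api.example.com/v1/tracks/123", "FAKE_TOKEN")
```

If the service revokes the token, every such request starts failing with an auth error, which is exactly the control the comment above describes.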
Scraping is raw dogging the web slut you met at the cougar ranch who went home with you because you reminded her of her dog
This is the greatest definition for scraping I've ever read. You should have it bronzed.
'Scraping' is the process of anonymously and programmatically collecting data from web pages, often without the website's permission and limited to the content made publicly available. This is in contrast to using an API provided by the data owner, which is limited by tokens, access volume, available endpoints, etc.
Every time I think I'm good with tech, something like this shows up in my feed and makes me realize I know jackshit.
Scrapers have been a thing for as long as the web has existed.
One of the first search engines was even called WebCrawler
So, where can I find the Chad scraper for reddit? They definitely have made it harder to track admin shadow ban and removal shenanigans, especially because sites like reveddit have decided to play ball as if reddit was acting in good faith in the first place.
If you want a chad scraper, look at Pushshift. Reveddit relied on it before Reddit got it taken down.
When the data is on multiple sites or sources.
API licenses can be expensive, and some sources might not even have an API.
I get the concept, but give me a concrete example. What company could possibly want to pay for scraping a site?
Some dude doing it as a hobby I get, but what, like Amazon will pay some guy to scrape competitors' prices or something?
I can't imagine data scraping is something companies will quickly admit to, considering the legal issues involved. It was also the norm for a long time -- APIs for accessing user-generated data are a relatively new thing.
As for a concrete example: companies using ChatGPT. A lot of useful data comes from scraping sites that don't offer an API.
Maybe you've got a small company involved in toy buying and reselling, and they want to scrape toy postings from eBay etc. so that they can scroll through a database of different postings and sort it by price or estimated profit or whatever.
Imagine an investment firm looking at a property market. They need data like price trends in the surrounding area.
Real estate APIs are expensive; scraping is free. By hiring an employee to scrape instead, they can save money.
There's a ton of money to be made from scraping, consolidating, and organizing publicly accessible data. A company I worked for did it with health insurance policy data, because every insurance company has a different website with a different data format and data that updates every day. People will pay da big bux for someone to wrap all that messiness into a neat, consistent package. Many sites even gave us explicit permission to scrape because they didn't want to set up an API or find some way to send us files.
Right now, gathering machine learning data is hot, cause you need a lot of it to train a model. Companies may specialize in getting, say, social media posts from all kinds of sites and putting them together in a consistent format.
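The "consistent format" step in both of these comments often boils down to per-source field mappings into one shared schema. A hypothetical sketch with invented source and field names:

```python
# Each messy source gets a mapping from its own field names
# into one shared schema ("name", "premium"). All names invented.
FIELD_MAPS = {
    "source_a": {"plan_name": "name", "monthly_cost": "premium"},
    "source_b": {"PlanTitle": "name", "Premium": "premium"},
}

def normalize(source, record):
    """Rename a scraped record's fields to the shared schema,
    dropping anything the mapping doesn't cover."""
    mapping = FIELD_MAPS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}
```

Adding a new source then means writing one new mapping instead of touching the downstream pipeline.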
The clearest example is those apps that get you the best deals on hotels or flights; they compare prices by web scraping. Obviously they take a cut of these transactions.
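Once the prices are scraped, the comparison step itself is trivial. A toy sketch with made-up site names and prices:

```python
def best_deal(prices):
    """Given {site: price} scraped from several sources,
    return the site offering the lowest price."""
    site = min(prices, key=prices.get)
    return site, prices[site]

best_deal({"siteA": 120.0, "siteB": 99.5, "siteC": 110.0})  # ("siteB", 99.5)
```

The hard part is the scraping and keeping the per-site parsers alive, which is the whole theme of this thread.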