Comment by Ronsenshi a day ago


Just today I wanted to get a list of the locations of various art events around the city. They're all listed on the same website, but the site doesn't provide a single page with this month's events on a map. I need one map so I can figure out what to visit based on how far I'd have to travel. Unfortunately that's not an option — the only option is to click through hundreds of items and hope whatever I pick is near me.

Do you think this is such a horrible thing to scrape? I can't do it manually since there are a few hundred locations. I could write a Python script that uses Playwright to drive my desktop browser in order to get past Cloudflare. Or — and this I'm much more familiar with — I could write a Python script that uses BeautifulSoup to extract all the relevant locations for me in one pass. I would have been perfectly happy fetching 1 page/sec, or even 1 page every 2 seconds, and would still be done within 20 minutes, if only there were no anti-scraping protection.
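For what it's worth, the BeautifulSoup approach the commenter describes might look something like the sketch below. The `.event`, `.title`, and `.address` selectors and the `fetch` callback are hypothetical placeholders — a real script would use the actual site's markup and an HTTP client — but the structure (parse each listing page, sleep between requests to stay at the polite 1-page-per-2-seconds rate mentioned above) is the whole idea.

```python
import time
from bs4 import BeautifulSoup


def extract_locations(html):
    """Pull (title, address) pairs out of one listing page.

    The .event / .title / .address selectors are made up for
    illustration -- adjust them to match the real site's HTML.
    """
    soup = BeautifulSoup(html, "html.parser")
    events = []
    for card in soup.select(".event"):
        title = card.select_one(".title")
        address = card.select_one(".address")
        if title and address:
            events.append((title.get_text(strip=True),
                           address.get_text(strip=True)))
    return events


def polite_crawl(fetch, urls, delay=2.0):
    """Fetch each listing page via the caller-supplied fetch()
    function, sleeping `delay` seconds between requests
    (1 page every 2 seconds, as suggested in the comment)."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay)  # rate-limit between pages, not before the first
        results.extend(extract_locations(fetch(url)))
    return results
```

A few hundred pages at one request every two seconds finishes in well under twenty minutes, which is the arithmetic behind the comment's estimate.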

Scraping is a perfectly legal activity, after all. But thanks to overly eager scraping bots and the clueless or malicious people who run them, there's very little chance for anyone to compete with Google, or even to do small-scale scraping to make their own life and the lives of local art enthusiasts easier. Google owns search. Google IS search, and no competition is allowed, it seems.

ErroneousBosh a day ago

If you want the data, why not contact the organisation with the website?

Why is hammering the everloving fuck out of their website okay?

  • Saris 18 hours ago

    1 request per second is nowhere even close to hammering a website.

    They made the data available on the website already, there's no reason to contact them when you can just load it from their website.