Status
Not open for further replies.

AdultFoundry

New Member
I will be working on downloading a lot of pages like this:

domain-name.com/go/to/some-name01-here/12674747474/

I would like to start downloading from this page (including it), and also download all subdirectories, but only on this domain, descending from the main URL like the one above. It is usually 10 or 20 pages at most, and nothing else. There may be images on these pages, and I would like to include them too.

So this is basically -> enter a URL like the one above -> download 10-30 pages (this URL and everything underneath it, only on this domain)
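To pin down the "this URL and everything beneath it" rule, here is a minimal Python sketch of the scope check a crawler would apply to each discovered link. The starting URL is the hypothetical one from the post; any real crawler (or a tool's include/exclude filter) would need the same logic.

```python
from urllib.parse import urlparse

# Hypothetical starting URL, in the shape described above.
START = "http://domain-name.com/go/to/some-name01-here/12674747474/"

def in_scope(url: str, start: str = START) -> bool:
    """Return True if url is the start page itself or a descendant of it
    on the same host (same domain, path underneath the main URL)."""
    u, s = urlparse(url), urlparse(start)
    return u.netloc == s.netloc and u.path.startswith(s.path)
```

A crawler would queue only links that pass this check, which naturally caps the crawl at the 10-30 pages living under that path.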

I've been testing ScrapBook for Firefox, but there may be something better. I've also been trying HTTrack and Teleport Pro, but those, as far back as I can remember, never work.

What would be the best solution for this? Something fast would be good too, since I may be working on 10,000 separate URLs like this, let's say.

Thanks.
 
I know you said you've tried HTTrack, but I'd give it another go. Sounds like just what you need, probably just need to modify the configuration a bit more.
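For comparison, wget can do the same kind of descendant-only mirror. As a sketch (not something the thread confirmed works for this site), the invocation could be built like this; the flags shown are real wget options, and `--level=3` is an assumed depth for a 10-30 page crawl:

```python
# Build a wget command for one starting URL; run it via subprocess.run().
START = "http://domain-name.com/go/to/some-name01-here/12674747474/"

def wget_args(url: str) -> list[str]:
    """Return the argv list for a same-domain, descendants-only mirror."""
    return [
        "wget",
        "--recursive",        # follow links from the start page
        "--no-parent",        # never ascend above the start URL's directory
        "--page-requisites",  # also fetch images/CSS the pages need
        "--convert-links",    # rewrite links for offline viewing
        "--level=3",          # assumed depth; 10-30 pages sit a few hops deep
        url,
    ]
```

For 10,000 separate URLs, the same builder could be looped over a URL list, running several downloads in parallel.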
 
I hope you won't scrape websites in order to steal their designs. Not accusing you in any way, just saying because I've seen it happen a lot of times. :)
 