The Problem.
Pirate streaming sites have a problem. Over the years they have been getting swarmed by an army of automated bots/scrapers that detect links to copyrighted work and automatically file DMCA complaints, and it's getting worse every day. In some cases content gets reported and deleted within minutes. As I've explained a few times, you can't really prevent this from happening. Some sites offer services to protect/hide links or embeds, but in reality using such a service will do nothing but annoy your users. Other sites try cheap tricks like encoding the links to hide them, but that is utterly pointless. As someone who's been writing bots/scrapers for over a decade, I can tell you that none of that stuff works. It's 30 minutes of extra work at best; usually it's as simple as adding one line of code to decode the encoded link. There are other steps you can take to combat bots in general, but again, none of them are effective or practical against anything but the simplest bots. The kind of bots employed by the copyright mafia couldn't care less about any of the above. Well then... if you can't prevent them from doing their thing, what can you do?
But First...
There is something I need to point out. Scraping is easy (if you know what you're doing) and requires very little in terms of resources. A single cheapo dedicated server is enough to track and scrape hundreds of websites; it can scrape and process dozens of pages per second with ease. This means that for the price of a cheap server they can take on all the streaming sites that matter, or perhaps even all of them. Period. This is one of the reasons anti-piracy is a viable business model: they get paid good money by the copyright holders, and the operating costs are low. All they need is a server or two and some devs. This is important to understand because it's one of the key reasons DMCA abuse and trolling have become so widespread. It's easy money. But what if that wasn't the case? The answer to that question should be pretty obvious.
Distributed Denial of Copyright.
Everyone here should know what a DDoS attack is: a network of computers spams a server or network, causing it to overload and knocking out the site/service. So what's "DDoC"? Excellent clickbait material if I do say so myself! Nah, just kidding. For starters, unlike DDoS it is not an attack but a defense. Consider this: what if a server that could scrape 100 pages per second suddenly can only do 1 per second? There are probably hundreds, if not thousands, of pirate streaming sites, and each one easily has a few thousand pages worth of movies/episodes/whatever. Suddenly, instead of that latest episode getting reported within an hour, it takes days. Instead of needing one server to DMCA-troll the entire internet, you now need an entire datacenter. Operating costs explode.
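To put rough numbers on it (using the ballpark figures above, which are assumptions on my part, not hard data):

```js
// Back-of-envelope math with the numbers from this post:
// ~1000 sites, a few thousand pages each, one scraping server.
const sites = 1000;
const pagesPerSite = 5000;
const totalPages = sites * pagesPerSite;          // 5,000,000 pages

const hoursAt100PerSec = totalPages / 100 / 3600; // ~13.9 hours
const daysAt1PerSec    = totalPages / 1 / 86400;  // ~57.9 days

console.log(hoursAt100PerSec.toFixed(1), 'hours for a full crawl before');
console.log(daysAt1PerSec.toFixed(1), 'days for a full crawl after');
```

A full crawl goes from an afternoon to roughly two months, and that's with the server doing nothing else.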
But How?
How do you slow a server down 100x? Easy: you increase the workload 100x. Right now, processing a page only takes a fraction of a second: an HTTP request, parse the HTML DOM, extract the links, and that's it. This is a very lightweight task, which is why you can do it many times a second in parallel on a single server. However, if we were to pull the links through 5000 iterations of state-of-the-art encryption, a CPU core would now need to spend a few seconds decrypting each one. To avoid nuking your own server, you use JavaScript to do the decryption on the client side (inside the browser). A user on your site wouldn't have much of a problem waiting a couple of extra seconds for the stream to start or the links to appear, but for a server running bots that has to do this a ton of times, it creates a massive CPU bottleneck. The drawback is that this relies on being a widely used technique to really ruin the copyright mafia's day. That said, if nothing else, the implementation I have in mind for this concept is the best link obfuscation out there, bar none. So even if nobody else uses it, it still does a better job at hiding links than anything else out there. I'm not sure yet if I'm actually going to write a proof-of-concept implementation. Being a keeper of robots myself, I'd be shooting myself in the foot for little gain. That said, the idea is pretty simple, so anyone with some programming experience can easily implement it.
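For the curious, here's a minimal sketch of what the client side could look like, using the browser's built-in Web Crypto API. This is my own illustration, not the implementation I have in mind: it uses PBKDF2 key stretching instead of literally iterating a cipher 5000 times, but the effect is the same, since deriving the key burns a tunable amount of CPU before the link can be decrypted. The function name, payload shape, and iteration count are all placeholders; in practice you'd tune ITERATIONS so derivation takes a second or two on average hardware (which ends up being millions of hash rounds, not thousands).

```js
// Hypothetical client-side decoder. The server ships each page with
// { salt, iv, ct } per link (all base64) plus a per-page passphrase.
// The expensive part is deriveKey: PBKDF2 at a high iteration count
// forces whoever loads the page (browser or bot) to burn CPU first.

const ITERATIONS = 4_000_000; // tune so derivation takes ~1-2 seconds

const b64 = s => Uint8Array.from(atob(s), c => c.charCodeAt(0));

async function decryptLink(passphrase, { salt, iv, ct }) {
  // Import the passphrase as raw key material for PBKDF2.
  const material = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(passphrase),
    'PBKDF2', false, ['deriveKey']);

  // Key stretching: this is where the seconds of CPU time go.
  const key = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt: b64(salt), iterations: ITERATIONS, hash: 'SHA-256' },
    material,
    { name: 'AES-GCM', length: 256 },
    false, ['decrypt']);

  // Cheap once the key is derived.
  const plain = await crypto.subtle.decrypt(
    { name: 'AES-GCM', iv: b64(iv) }, key, b64(ct));
  return new TextDecoder().decode(plain); // the real stream URL
}
```

The asymmetry is the whole point: a visitor decodes one link per click, while a scraper has to grind through the same derivation for every link on every page it crawls.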
Thoughts? I'd love to hear what other devs think. Actually, this is one of those rare cases where I'd love to see someone poke holes in this and tell me "nah, that will never work because XYZ, stop being silly!".