RapidShare & Requests Per Second :(


Robin H


Hi :))

*$20 to whoever fixes my shit :D*

I've got this feeling I'm being DoS'd by RS :P
I'm hosting a script ( rs2rs.com <_< ) that acts as a proxy for RapidShare files.

But with every file running, I get about 70-80 requests per second.
A server does about 60 files at once... so that adds up fast (roughly 4,000-5,000 requests per second per server).

So... is there any way to limit the number of requests per second for an IP range?
Running Debian.
 
Still the same problem. Here's the relevant part of my apache2.conf:

#
MaxKeepAliveRequests 20

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15

##
## Server-Pool Size Regulation (MPM specific)
##

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 50
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule mpm_worker_module>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 50
</IfModule>

# These need to be set in /etc/apache2/envvars
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}

#
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the AllowOverride
# directive.
#

AccessFileName .htaccess

#
 
^^ it's already set lower :))
iptables might be your friend ;)

Just to give some idea of what iptables can do... you can run these two commands, which together drop incoming connections from any IP that makes more than 10 new connection attempts on port 80 within 1 minute. The first rule records each new connection, the second checks the count and drops; run them in this order so the DROP rule ends up first in the chain:

iptables -I INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent \
--update --seconds 60 --hitcount 10 -j DROP
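
If you want to throttle by actual rate per source range rather than just counting hits, the hashlimit match is another option. Rough sketch, untested and with made-up numbers, grouping sources per /24; note this still counts new connections, not individual HTTP requests riding a keep-alive connection:

# Drop new connections from any /24 that opens more than ~50 new connections/sec (burst of 100)
iptables -I INPUT -p tcp --dport 80 -i eth0 -m state --state NEW \
-m hashlimit --hashlimit-above 50/sec --hashlimit-burst 100 \
--hashlimit-mode srcip --hashlimit-srcmask 24 --hashlimit-name http_rate \
-j DROP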
 
This is getting fun :?

MaxKeepAliveRequests 5

<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 65
MaxRequestsPerChild 1
</IfModule>
<IfModule mpm_worker_module>
StartServers 2
MaxClients 65
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 1
</IfModule>
edit:
Desiboy, won't that result in failed remotes?

edit²:
Nope, still going too high:
[screenshot: Yq0_sX.png]




EDDIDIIITIITITIT:
FFS, I was in the wrong server xD

another_edit:

Well, doesn't really matter, still nothing changed.

edit:
Gonna try the iptables stuff, but with port 80 instead of 22, of course.
 
Well, those methods have been posted before.
And I guess it blocks connections, not requests.
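
(If the goal is to limit at the request level inside Apache itself, mod_evasive is one option worth a look; this is a rough, untested sketch with placeholder thresholds, not something tried in this thread:

<IfModule mod_evasive20.c>
# Flag an IP after more than 5 requests/sec to the same page,
# or more than 100 requests/sec to the site as a whole, then block it for 60 seconds
DOSHashTableSize 3097
DOSPageCount 5
DOSPageInterval 1
DOSSiteCount 100
DOSSiteInterval 1
DOSBlockingPeriod 60
</IfModule>
)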

Anyway, I actually got this solved.

The problem was the bandwidth monitor & speed limiter.
It made a request with every packet, I guess.

Disabled it, and the load dropped a lot; servers are stable again, but of course now everyone is exploiting the bandwidth.
But that's gonna end soon.

Splitice once wrote a speed-limit script for me in the past. (He's worth every penny :D )
But because it worked on a per-file basis, and not per-user, it was pretty hard to guarantee speeds.

I learned a lot from his scripts, and now I'm gonna revisit his idea.
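
(For what it's worth: on newer Apache, 2.4 and up, mod_ratelimit can cap the speed of each response. Like the old script it's effectively a per-download limit rather than per-user, but it's a simple way to stop people maxing out the bandwidth. Rough sketch only; the /dl path is just a placeholder:

<IfModule mod_ratelimit.c>
# Limit each response, i.e. each file download, to roughly 512 KiB/s
<Location "/dl">
SetOutputFilter RATE_LIMIT
SetEnv rate-limit 512
</Location>
</IfModule>
)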
 