[SR-Users] RTPProxy

Maxim Sobolev sobomax at sippysoft.com
Wed Oct 19 20:00:26 CEST 2016


Just a little comment on the numbers that I threw out earlier today.
Those are probably somewhat pessimistic; with some creative tuning you can
probably go much higher. But we are also constrained by some other
considerations (i.e. running a fully redundant network connection with FEC,
a full firewall, a custom OS, etc.), so those are what we get.

Also, I wanted to point out to the list that speaking about the number of
sessions is pretty much pointless, as the main thing that keeps us busy is
the packets-per-second (PPS) rate: the same 10,000 sessions generate twice
the PPS at 10ms ptime as they do at 20ms ptime. Our limit at this point in
time is some 450k PPS in and 450k PPS out on 16 cores with FreeBSD 10.3,
which could be either 4.5k sessions with 10ms packets, 9k sessions with 20ms
packets, or somewhere in between if you have mixed traffic (as most of our
customers do). The latest Linux kernels might get better contention control
on higher-CPU-count systems, or at least that is what I've seen in some
benchmarks not so long ago; we've planned to run some evaluations but have
not got around to it yet.
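
To make that arithmetic concrete, here is a minimal back-of-the-envelope
sketch in Python (not part of rtpproxy; the per-session packet accounting
simply mirrors the 450k-in/450k-out figures quoted above):

    # Rough capacity estimate: how many sessions fit into a given PPS-in
    # budget at a given ptime. Assumes 1000/ptime packets per second per
    # session counted toward PPS-in, matching the numbers above.

    def sessions_for_pps_budget(pps_in_budget, ptime_ms):
        pps_per_session = 1000 / ptime_ms   # 100 pps at 10ms, 50 pps at 20ms
        return int(pps_in_budget / pps_per_session)

    print(sessions_for_pps_budget(450_000, 10))  # -> 4500 sessions
    print(sessions_for_pps_budget(450_000, 20))  # -> 9000 sessions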

On top of that, even if you can push, say, 1 million PPS through a single
tuned-up box (10k sessions at 10ms), other constraints may arise. Most of
the general-purpose DC providers we've encountered in our somewhat limited
practice design their networks with a much lower PPS per port in mind. With
a new DC it's often an issue for us that we bump into all sorts of automated
DDoS-prevention systems once we reach 100-200k PPS per box/port. So at the
end of the day it might be more practical and economical to run a bunch of
smaller nodes and spread the load across them using something like
rtp_cluster, rather than try to cram all that traffic into a single box/port.
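
Purely for illustration, a hedged sketch of that node-count math (the 150k
per-port ceiling is just a made-up value inside the 100-200k range mentioned
above; rtp_cluster does the actual distribution and is not shown here):

    import math

    # Hypothetical sizing: how many smaller boxes/ports are needed to stay
    # under a per-port PPS ceiling instead of using one big box.
    sessions = 10_000
    ptime_ms = 10
    per_port_pps_ceiling = 150_000   # illustrative, within the 100-200k range

    total_pps = sessions * (1000 / ptime_ms)            # 1,000,000 PPS in
    nodes_needed = math.ceil(total_pps / per_port_pps_ceiling)
    print(nodes_needed)                                 # -> 7 nodes/ports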

-Max