Hello,
thanks for all those details, very useful ...
To be clear -- the issue of high CPU usage when idle (no active
calls) was with rtpproxy v1.2 on CentOS (iirc, v6), not with
rtpproxy 2.0. On Debian, the same version of rtpproxy did not
exhibit this. I was just curious to see if anyone else saw it ...
might have been just that system...
Cheers,
Daniel
Just a little comment on the numbers that I've thrown out earlier today. Those are probably somewhat pessimistic; with some creative tuning you can probably go much higher. But we are also constrained by some other considerations (i.e. running a fully redundant network connection with FEC, a full firewall, a custom OS, etc.), so those are what we get.
Also, I wanted to point out to the list that speaking about the number of sessions is pretty much pointless, as the main thing that keeps us busy is the packet-per-second (PPS) rate. The same 10,000 sessions can translate to twice the PPS rate with a 10 ms ptime versus a 20 ms ptime. Our limit at this point in time is some 450k PPS in and 450k PPS out on 16 cores with FreeBSD 10.3, which could be either 4.5k sessions with 10 ms packets, or 9k sessions with 20 ms, or somewhere in between with mixed traffic (as most of our customers have). The latest Linux kernels might have better contention control on higher-CPU-count systems, or at least that is what I've seen in some benchmarks not long ago. We've planned to run some evaluations but have not had time to do so yet.
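As a rough illustration of the arithmetic above (a sketch only; it assumes one relayed RTP stream counted per session, which matches the 4.5k/9k session figures for a 450k inbound PPS budget):

```python
# Rough capacity arithmetic: sessions vs. packets-per-second (PPS).
# Assumption: one relayed RTP stream counted per session, consistent
# with the 450k-PPS / 4.5k-session (10 ms) and 9k-session (20 ms)
# figures quoted above.

def pps_per_session(ptime_ms):
    """Packets per second generated by one stream at a given ptime."""
    return 1000 // ptime_ms

def max_sessions(pps_budget, ptime_ms):
    """How many sessions fit into a given inbound PPS budget."""
    return pps_budget // pps_per_session(ptime_ms)

PPS_BUDGET = 450_000  # measured inbound limit on the 16-core box

print(max_sessions(PPS_BUDGET, 10))  # 4500 sessions at 10 ms ptime
print(max_sessions(PPS_BUDGET, 20))  # 9000 sessions at 20 ms ptime
```

This is why mixed traffic lands "somewhere in between": the effective capacity is set by the weighted average ptime of the streams, not by a fixed session count.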
On top of that, even if you can push, say, 1 million PPS through a single tuned-up box (10k sessions at 10 ms), some other constraints may arise. Most of the general-purpose DC providers we've encountered in our somewhat limited practice design their networks with a much lower PPS per port in mind. It's often an issue with a new DC that we bump into all sorts of automated DDoS prevention systems once we reach 100-200k PPS per box/port. So at the end of the day it might be more practical and economical to run a bunch of smaller nodes and spread the load across them using something like rtp_cluster, rather than trying to cram all that traffic into a single box/port.
-Max
--
Daniel-Constantin Mierla
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Kamailio Advanced Training, Berlin, Oct 24-26, 2016 - http://www.asipto.com