Hi, I'm trying to set up an HA cluster based on 2 proxies, 4 Asterisk servers, and 2 databases.
Can someone please help me with this?
On Friday 19 September 2008, VIP Carrier wrote:
Hi VIP,
you need to give a bit more detail about your specific problem(s) and the setup you want to install, otherwise we will not be able to help you. If you would like commercial support, or perhaps want to buy a turnkey solution, just ask on the business@lists.kamailio.org list, or contact for example one of the companies listed here: http://www.voip-info.org/wiki/view/OpenSER+Consultants
Cheers,
Henning
We ran a benchmark test of OpenSER and RTPproxy to understand the relation between CPU capacity and the number of simultaneous calls that could be managed with acceptable call quality.
OpenSER and RTPproxy were hosted on a Dell Precision 490 server with two dual-core Intel Xeon 5140 2.33 GHz CPUs and 4 GB of RAM. Three of the four CPU cores were disabled, so the test ran on a single core. Multiple SIPp clients directed traffic to OpenSER, which queried an OSP server for call routing instructions. The test was designed so that each call required an average of two retries before completing to a SIPp server on the third attempt. The two-way RTP stream for each call flowed through RTPproxy. Call Detail Records for each call attempt were sent from OpenSER to the OSP server.
OpenSER and RTPproxy, using a single core of the 2.33 GHz CPU, can manage up to 750 simultaneous calls.
A summary and detailed description of the benchmark test are available at http://www.transnexus.com/White%20Papers/OpenSER_RTPproxy_test.htm
On Fri, Sep 19, 2008 at 5:03 PM, Jim dalton jpdaltonjr@gmail.com wrote:
Jim,
As always, thanks for the performance analysis!
A few questions. 750 calls shows almost 100% CPU usage. How much of that CPU time was spent in userspace (rtpproxy)? What was the interrupt CPU load?
At 50 pps x 750 RTP streams, that's at least 37,500 interrupts per second just for received RTP in one direction. Factor in the other stream (bidirectional RTP) and we get to 75,000 pps. That's a lot of interrupts (which can be mitigated) just to handle receive traffic, not even counting interrupt load for transmitted traffic.
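That arithmetic can be sketched quickly (a back-of-the-envelope calculation assuming 20 ms packetization, i.e. 50 packets per second per RTP direction):

```python
# Back-of-the-envelope packet rates for 750 proxied calls.
# Assumes 20 ms packetization (50 packets/s per RTP direction).
PPS_PER_STREAM = 50
CALLS = 750

rx_one_direction = PPS_PER_STREAM * CALLS      # 37,500 pps received, one direction
rx_both_directions = rx_one_direction * 2      # 75,000 pps received, bidirectional
# The proxy also re-sends every packet it receives:
total_through_proxy = rx_both_directions * 2   # 150,000 pps in + out combined

print(rx_one_direction, rx_both_directions, total_through_proxy)
```

Without interrupt mitigation, each received packet can mean one hardware interrupt, which is why the receive path alone dominates here.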
What NICs were you using? What kernel was this? Were you using NAPI receive interrupt polling? What about more advanced options like those available in the e1000 driver (Interrupt Throttle Rate)? I'd be interested to see what happens with a decent e1000 NIC and ITR set to 1000 or something really low (in terms of performance and the effect on latency, etc).
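For reference, the throttle rate mentioned here is a module parameter of the e1000 driver; capping it at roughly 1000 interrupts per second would look like this (a sketch only; the right value is per-deployment tuning, not something established in this thread):

```
# /etc/modprobe.d/e1000.conf (or modprobe.conf on older systems):
# cap the NIC at ~1000 interrupts/s instead of one interrupt per packet
options e1000 InterruptThrottleRate=1000
```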
Any plans on testing the kernel-level stuff in MediaProxy 2? It would be neat to see the performance difference when userspace doesn't get involved...
Sorry for all of the questions, and thanks again for the test!
Jim,
As always, thanks for the performance analysis!
A few questions. 750 calls shows almost 100% CPU usage. How much of that CPU time was spent in userspace (rtpproxy)? What was the interrupt CPU load?
We used sar to capture server-level CPU utilization. We did not capture any data specific to OpenSER or RTPproxy. Also, we did not collect any data on interrupts. We welcome suggestions for how to collect this data in future tests.
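For future runs, pidstat from the sysstat package reports per-process user/system CPU time, and "sar -I SUM" samples the total interrupt rate. A dependency-free alternative (a Linux-only sketch, not something used in the original test) is to diff the cumulative counter on the "intr" line of /proc/stat:

```python
import time

def total_interrupts():
    """Cumulative interrupt count since boot, from the 'intr' line of /proc/stat."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("intr "):
                # Field 1 is the grand total; the remaining fields are per-IRQ counters.
                return int(line.split()[1])
    raise RuntimeError("no 'intr' line found in /proc/stat")

def interrupt_rate(interval=1.0):
    """System-wide interrupts per second, averaged over `interval` seconds."""
    start = total_interrupts()
    time.sleep(interval)
    return (total_interrupts() - start) / interval

if __name__ == "__main__":
    print(f"~{interrupt_rate():.0f} interrupts/s system-wide")
```

Sampling this alongside sar's CPU figures during the load ramp would show how much of the capacity ceiling is interrupt handling rather than rtpproxy userspace work.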
At 50 pps x 750 RTP streams, that's at least 37,500 interrupts per second just for received RTP in one direction. Factor in the other stream (bidirectional RTP) and we get to 75,000 pps. That's a lot of interrupts (which can be mitigated) just to handle receive traffic, not even counting interrupt load for transmitted traffic.
What NICs were you using? What kernel was this? Were you using NAPI receive interrupt polling? What about more advanced options like those available in the e1000 driver (Interrupt Throttle Rate)? I'd be interested to see what happens with a decent e1000 NIC and ITR set to 1000 or something really low (in terms of performance and the effect on latency, etc).
This was an out-of-the-box performance test; no optimizations were made to the software or hardware. I do not have the specs for the NIC. It was the standard NIC shipped by Dell.
The complete details of the test plan are available at http://www.transnexus.com/White%20Papers/OpenSER_RTPproxy_Benchmark_Test.pdf . Suggestions on how to improve the test are welcome.
Any plans on testing the kernel-level stuff in MediaProxy 2? It would be neat to see the performance difference when userspace doesn't get involved...
At this time, we do not have any plans to run a performance test on MediaProxy.
Sorry for all of the questions, and thanks again for the test!
-- Kristian Kielhofner http://blog.krisk.org
Hello Jim,
On 09/20/08 00:03, Jim dalton wrote:
thanks for sharing the results! I wonder if you could get the share of CPU used by OpenSER during testing.
As you have access to 4 CPU cores, it would be very interesting to see the impact of running 4 rtpproxy instances on the same machine and configuring kamailio/openser to use all of them, to see how it really scales. The nathelper module in kamailio/openser can use many rtpproxy instances at the same time, load balancing the RTP streams among the available rtpproxies. The rtpproxy_sock parameter of the nathelper module can take many rtpproxy control socket addresses: http://www.kamailio.org/docs/modules/1.3.x/nathelper.html#AEN160
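The multi-instance setup described above might look roughly like this (a sketch only; the control socket ports 7890-7893 and the example IP are assumptions, not details from this thread):

```
# Start one rtpproxy per core, each with its own control socket:
#   rtpproxy -s udp:127.0.0.1:7890 -l 192.0.2.10
#   rtpproxy -s udp:127.0.0.1:7891 -l 192.0.2.10
#   ... and so on for 7892 and 7893.

# kamailio/openser config: hand all four sockets to nathelper,
# which load balances new RTP sessions among them.
modparam("nathelper", "rtpproxy_sock",
    "udp:127.0.0.1:7890 udp:127.0.0.1:7891 udp:127.0.0.1:7892 udp:127.0.0.1:7893")
```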
Cheers, Daniel
Daniel-Constantin Mierla wrote:
thanks for sharing the results! I wonder if you could get the share of CPU used by OpenSER during testing.
We did not record the CPU utilization for OpenSER and RTPproxy separately. Our experience shows that OpenSER CPU utilization is trivial compared to the CPU cycles required to proxy media. If we had measured OpenSER CPU utilization, we expect it would have been a very small number (<5%).
As you have access to 4 CPU cores, it would be very interesting to see the impact of running 4 rtpproxy instances on the same machine and configuring kamailio/openser to use all of them, to see how it really scales. The nathelper module in kamailio/openser can use many rtpproxy instances at the same time, load balancing the RTP streams among the available rtpproxies. The rtpproxy_sock parameter of the nathelper module can take many rtpproxy control socket addresses: http://www.kamailio.org/docs/modules/1.3.x/nathelper.html#AEN160
Testing multiple RTPproxy instances would be a good test. We would like to do that if we return to this project.
For those interested, there is a project that adds a kernel module that could be used for rtp relaying:
http://www.2p.cz/en/netfilter_rtp_proxy
Cheers, Daniel
On 09/20/08 22:44, Jim dalton wrote: