Hi all!
I have been doing some performance tests with Kamailio 5.7.4 and SIPp. The infrastructure is 3 VMs running on VMware ESXi: a UAC (SIPp) on 10.20.0.1 -> Kamailio on 10.20.0.5 -> a UAS on 10.20.0.3.
The Kamailio VM has 6 dedicated vCPUs (Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz), 2 NICs, 4 GB RAM and MariaDB 10.6 as DB backend, all running on an HP G380 host with a gazillion CPUs and a googol of disk space!
I currently have 3 scripts:
- script #1: stateful, with RTJson, simulating requests to a routing engine plus accounting
- script #2: stateful, but with just a simple routing to the UAS (no rules, no DB)
- script #3: stateless, with a forward to the UAS
With script #3 I can go up to 2000 CPS without issues, with CPU at 37%! Above that value, I get retransmissions everywhere. On both scripts #1 and #2, the limit is 330 CPS max, after which I get a lot of retransmissions, while CPU/core usage on the Kamailio server stays below 10%. So I do not expect this to be a CPU issue.
I could not understand such (low) results, so I followed the article at https://www.kamailio.org/docs/openser-performance-tests/#tm-tests-c and created the exact same scenarios, with the Kamailio script and SIPp templates available in the article, hoping for better results. But I get the same results: between 300 and 330 CPS, which is very far from the 7000 CPS found in the article!
I understand that I'm using VMs and that the tests for the article, which is pretty old already, were probably made on physical servers. Still, I would not expect 95% lower performance!
Any clue what could be the issue? I suspect NICs, but.... Any tips anyone could share?
Thanks in advance!
*Sérgio Charrua*
I assume that you are using udp. Please increase the length of the udp queue: https://medium.com/@CameronSparr/increase-os-udp-buffers-to-improve-performa...
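On Linux this typically means raising the kernel's receive buffer limits via the net.core.rmem_* sysctls. A minimal sketch, with purely illustrative values (the file name is an example, not a convention):

    # check the current defaults first
    sysctl net.core.rmem_default net.core.rmem_max

    # raise the default and maximum per-socket receive buffer (example: 8 MB)
    sysctl -w net.core.rmem_default=8388608
    sysctl -w net.core.rmem_max=8388608

    # persist across reboots
    printf 'net.core.rmem_default=8388608\nnet.core.rmem_max=8388608\n' > /etc/sysctl.d/99-sip-udp.conf
    sysctl --system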
Regards. Ovidiu Sas
What do you have as a UAS?
The UDP receive buffer increase that Ovidiu suggested would be necessary if you see traffic piling up or getting dropped; e.g. look at `netstat -s | grep "receive buffer errors"` or watch the Recv-Q column (2nd column) of netstat.
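For example, assuming Kamailio listens on UDP port 5060, a quick way to watch both of those during a test run (commands are a sketch):

    # cumulative UDP drops since boot ("receive buffer errors" / RcvbufErrors)
    netstat -su | grep -i 'receive buffer errors'
    grep Udp: /proc/net/snmp

    # live view of the per-socket receive queue; watch the Recv-Q column
    watch -n 1 'ss -uln | grep 5060'

If the error counter keeps climbing while the test runs, the kernel is dropping datagrams before Kamailio gets to read them.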
Cheers,
UAC and UAS are both SIPp instances, in separate VMs. On the Kamailio server I use the "ss" command instead of the one you suggested (with watch -n 2), and only from time to time, roughly every minute or a bit less, and only for 1 or 2 seconds, are there changes in the field you mention, with values between the low thousands and around 10,000. I will assume this is acceptable...
*Sérgio Charrua*
I don't know that I would do that. If your server isn't coping with the load, making the kernel receive queue deeper doesn't increase throughput, it just relocates the queueing -- or rather, adds another layer of queueing.
Squeeze the balloon in one place, it inflates in another.
-- Alex
Totally agree with Alex here.
Sergio,
You may consider giving this a read:
https://blog.evaristesys.com/2016/02/15/tuning-kamailio-for-high-throughput-...
Child processes will make a big difference, especially in your scenario #1, where you're waiting on outside services.
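As a rough sketch (an example value, not tuned for this setup): the number of SIP workers per UDP listening socket is set by the core `children` parameter in kamailio.cfg, and since the DB/HTTP round-trips in scenario #1 block a worker for their full duration, more workers can help hide that latency:

    # kamailio.cfg, core section -- illustrative value only
    children=16    # UDP worker processes per listening socket (default is 8)

To confirm how many UDP receivers are actually running, something like `kamcmd core.psx | grep -c "udp receiver"` should do (assuming the ctl module is loaded).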
But it's possible you've just run into a fundamental throughput limit of your environment, and if you have, I would not take the advice to increase receive queue depth.
-- Alex
Thanks Alex!
I already knew that article and followed it, but it doesn't seem to have had any positive impact...
*Sérgio Charrua*
The OP mentioned that the load is around 10% at 330 CPS, so CPU load is not an issue. A traffic bottleneck can be. The default UDP queue length is not enough for high CPS. And of course, you need to keep a balance between all the variables that affect traffic :)
-ovidiu
On Mar 22, 2024, at 10:21 PM, Ovidiu Sas osas@voipembedded.com wrote:
The default udp queue length is not enough for high cps.
That's interesting. I'm not necessarily saying you're wrong, but I'd be curious to know more about what informs this theory.
The default queue I believe is about 2MB. Not very hard to exceed that queue while still having a good amount of CPU/processor available.
-- Fred Posner Sent from mobile Phone: +1 (352) 664-3733 qxork.com
Sure, but if you're blowing through the default, that means you're not consistently coping with the load due to some other factors. Does it not follow that enlarging the queue will just give you more backlog without increasing throughput?
-- Alex
That could be one scenario. But in the cases I've seen where there was more than enough CPU, it was because there was more than 2 MB worth of traffic and the OS was creating an issue that is easily solved by letting the system handle more traffic.
It’s a bottleneck before kamailio.
The analogy would be a funnel. At some point, when you’re pouring a lot of liquid, you need to use a wider funnel. Ain’t nothing you do after the funnel will matter. The funnel is the problem.
-- Fred Posner Sent from mobile Phone: +1 (352) 664-3733 qxork.com
Fair, but when have you found CPU to be the limiting factor in Kamailio's throughput? Kamailio's workload isn't really computational / CPU-bound. It's closer to something like Node; it's an I/O multiplexer.
But the CPU is not the limiting factor.
Back to the funnel analogy: you have a big bottle and you are using a small funnel. That will not work well. If you use a bigger funnel with the same bottle (the bigger funnel must fit the bottle), then you can handle more liquid. If the funnel is too big for the bottle, then you are spilling liquid out, and that is the scenario you are referring to.
-ovidiu
On Mar 23, 2024, at 5:19 PM, Ovidiu Sas osas@voipembedded.com wrote:
But the CPU is not the limiting factor.
We can agree on that, at least. :-)
Back to the funnel analogy: you have a big bottle and you are using a small funnel. That will not work well. If you use a bigger funnel with the same bottle (the bigger funnel must fit the bottle), than you can handle more liquid. If the funnel is too big for the bottle, then you are spilling out and this is the scenario that you are referring to.
I'm not sure how far this bottle metaphor works, because water falls into the bottle at a constant rate and at a constant acceleration (force of gravity), notwithstanding the moderating effect of funnel shape or other geometric irregularities.
I think in the real world, we are dealing with a situation where water falls into the bottle (through the bottle?) at different speeds, so most limits are related to that flow and not to funnel or bottle size.
Or to say it differently: under most typical Kamailio workloads, which are I/O-bound, a single Kamailio process (bottle) can handle a certain amount of messages per second. You can't really make it handle more messages than it does without tweaking the nature of the workload. Assuming the workload is held constant, the only way to get more messages through the system is to have more worker processes, or bottles if you like, which has its own performance implications and limits (locking and CPU contention).
Consequently, there's a certain throughput-maximising amount of bottles for a given workload that is neither so low as to under-utilise available resources, nor so high as to degrade the overall throughput from fighting for CPU, big locks over shared resources, overwhelming I/O dependencies (e.g. databases with queries), etc. The main input variable is the number of bottles itself.
The size of the funnel isn't really relevant. If there are enough bottles and water goes into them at sufficient velocity, bigger funnels won't help. If the velocity is not sufficient or there aren't enough bottles, funnels of any size will just back up and overflow.
About the only scenario where the funnel matters is the one you pointed out previously, where the inflow is highly irregular, is modally moderate, and only momentarily bursts to high volumes.
-- Alex
I've had issues with high CPS, and just to get more throughput I had to increase the size of the UDP queue. It is also true that the CPS was ten times higher.
In the end, it is very simple: if the UDP queue is empty most of the time and dropped packets are observed, then the size of the UDP queue is too small. Any other scenario must be handled differently.
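One way to check for exactly that pattern, as a sketch (assumes a Linux kernel that exposes RcvbufErrors in /proc/net/snmp): snapshot the drop counter before and after a test run and compare.

    # RcvbufErrors is the 6th field of the Udp: line in /proc/net/snmp
    before=$(awk '/^Udp:/ {print $6}' /proc/net/snmp | tail -1)
    # ... run the SIPp test ...
    after=$(awk '/^Udp:/ {print $6}' /proc/net/snmp | tail -1)
    echo "UDP receive buffer drops during the test: $((after - before))"

If that number grows while the Recv-Q samples stay near zero, the drops are happening in short bursts rather than under sustained backlog.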
-ovidiu
On Mar 23, 2024, at 6:05 PM, Ovidiu Sas osas@voipembedded.com wrote:
if most of the time the udp queue is empty and dropped packets are observed, then the size of the udp queue is too small.
But would that happen with a sipp load test, once the threshold of dropped requests is observed?
if most of the time the udp queue is empty and dropped packets are
observed, then the size of the udp queue is too small.
Hmm.. why would you need to increase the queue size if nothing gets queued up?
Because you have dropped packets. The queue is empty most of the time, but it fills up for a few milliseconds, and that's the moment when you experience dropped packets.
On Mar 23, 2024, at 7:29 PM, Ovidiu Sas via sr-users sr-users@lists.kamailio.org wrote:
Because you have dropped packets. The queue it’s empty most of the time and if is full for a few milliseconds and that’s the moment when you experience dropped packets.
Totally fair. Just wondering if that's what we're discussing here. OP said he's running sipp, and sipp doesn't push out volatile workloads that are offset by periods of little to no activity to rapidly drain the queue.
-- Alex
The OP didn't provide any additional info. Maybe he's doing DB operations. Maybe he's running something else on the server. The traffic can be generally OK, and then the OS or some other application does something weird that affects Kamailio for a short period of time. Then you have these extremely short bottlenecks and packets are dropped.
And it all depends on the testing environment: CPU, sync or async logging, etc.
One can stress Kamailio on low-performing hardware to reach the limit of a system faster, and the results can be "misleading" for others who are used to testing on server-grade hardware.
In the end, we agree with each other and my feeling is that we are repeating the same concept.
Checking for dropped packets is always a good idea. How to deal with dropped packets depends a lot on the context.
In most of my deployments I don’t need to mess with the udp queue size. For high cps traffic, from my experience, it’s a must.
And of course, going overboard and increasing it too much just makes things worse.
- ovidiu
On Mar 23, 2024, at 9:30 PM, Ovidiu Sas osas@voipembedded.com wrote:
In the end, we agree with each other and my feeling is that we are repeating the same concept.
Yeah, I think that's mostly right.
In most of my deployments I don’t need to mess with the udp queue size. For high cps traffic, from my experience, it’s a must.
Although I don't deal with very high-CPS deployments (500-1000 CPS) much these days, I used to, and my experiences there led me to the diametrically opposite conclusion: one should never increase the UDP queue size, and if you find yourself doing that, you're doing something wrong, _except_ in the occasional burst case we discussed.
You can be absolutely sure that when I first encountered the problem, my first impulse was to increase the receive queue as high as it will go, then, gradually, to a lesser extent. I ultimately found that the proper amount by which to raise it is 0. ;)
-- Alex
It all depends on the hardware, but I noticed that after you pass 3-4k CPS you run into this kind of issue.
- ovidiu
Not really related to the ongoing discussion, but:
Going to that kind of CPS might exceed the natural limits of all but the most exquisitely tuned execution environments. It probably wouldn't work at all on the average moderately oversubscribed public cloud VM, even a generously resourced one.
Once you get to that point, you might be better off just scaling horizontally.
-- Alex
Thank you all for helping! I wasn't expecting such a large number of replies!
I ended up partially solving the issue with a different approach. Modifying the size of the UDP buffer did not bring any improvement. However, changing the memory settings improved things a lot: from 330 CPS to 1800 CPS in stateful mode. So, starting Kamailio with the following command:
kamailio -M 256 -m 128 -f <script.cfg>
did the trick! And the VM is still running with 6 vCPU.
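For anyone reproducing this: -m sets the shared memory pool in MB and -M the private (pkg) memory per process in MB. On packaged installs the same values can usually go into the service defaults instead of the command line; a sketch assuming a Debian-style /etc/default/kamailio (variable names may differ per distribution):

    # /etc/default/kamailio -- mirrors the command line above
    SHM_MEMORY=128   # shared memory pool in MB (kamailio -m 128)
    PKG_MEMORY=256   # private memory per process in MB (kamailio -M 256)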
Still very far from the test results described in https://www.kamailio.org/docs/openser-performance-tests/#tm-tests-c but a lot better and meets our requirements
Thanks guys for your help! Greatly appreciated!
*Sérgio Charrua*
Hello,
you should be able to decrease the private memory substantially, as this is allocated per process; that much is never needed. On the other hand, you should probably increase the shared memory if you have a lot of transactions going on, TLS, etc. This is per server, so you can configure more.
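As a rough sketch of what that could look like on the command line (the numbers are purely illustrative, not a recommendation for this particular setup):

# smaller private memory per process, larger shared pool for transactions, TLS, etc.
kamailio -M 16 -m 1024 -f <script.cfg>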
Cheers,
Henning
Thanks Henning! I followed your tip, and indeed got 2500 CPS now :) ... more than enough for the project!
Cheers!
*Sérgio Charrua*
Things could cascade due to retransmission. It all depends on how much the system is loaded and the type of the traffic and message handling in kamailio. Sometimes the system recovers and works ok for a while until the next burst.
-ovidiu
On Sat, Mar 23, 2024 at 18:28 Alex Balashov via sr-users < sr-users@lists.kamailio.org> wrote:
On Mar 23, 2024, at 6:05 PM, Ovidiu Sas osas@voipembedded.com wrote:
if most of the time the UDP queue is empty and dropped packets are observed, then the size of the UDP queue is too small.
But would that happen with a sipp load test, once the threshold of dropped requests is observed?
On Mar 23, 2024, at 7:19 PM, Ovidiu Sas osas@voipembedded.com wrote:
Things could cascade due to retransmission. It all depends on how much the system is loaded and the type of the traffic and message handling in kamailio. Sometimes the system recovers and works ok for a while until the next burst.
Sometimes.
I still don't think enlarging the packet queue is good general advice for increasing throughput or better handling a load of X CPS.
Under very specific parameters outlined in this discussion, it might help. Outside of those parameters, it's pretty useless.
That will happen only if your CPUs are already maxed out. Under ideal conditions, the default should work fine. But if you are dealing with network jitter and several clients pumping traffic at different rates, for a short period of time all the workers will be busy while new SIP messages will pile up and the default queue will not be able to hold all of them. An increased UDP queue will hold all the SIP messages, giving the workers time to consume them. The idea is that the UDP queue is empty almost all of the time, and only when there’s a short, temporary traffic burst does it come to the rescue. If the UDP queue is full most of the time, then increasing it obviously won’t help. The increased UDP queue works only for coping with short traffic bursts.
-ovidiu
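For completeness, "increasing the UDP queue" on Linux usually means raising the kernel's socket receive-buffer limits, for example (values are illustrative only, not tuning advice):

# allow larger per-socket UDP receive buffers
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.rmem_default=1048576

On the Kamailio side there is also, if I recall correctly, a maxbuffer core setting that caps how large a receive buffer Kamailio will try to request for its UDP sockets.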
On Sat, Mar 23, 2024 at 16:22 Alex Balashov via sr-users < sr-users@lists.kamailio.org> wrote:
Sure, but if you're blowing through the default, that means you're not consistently coping with the load due to some other factors. Does it not follow that enlarging the queue will just give you more backlog without increasing throughput?
-- Alex
On Mar 23, 2024, at 1:54 PM, Fred Posner fred@pgpx.io wrote:
The default queue I believe is about 2MB. Not very hard to exceed that queue while still having a good amount of CPU/processor available.
-- Fred Posner
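One way to check whether the queue is actually being exceeded during a run is to watch the kernel's UDP drop counters; a minimal sketch (the exact counter names can vary between kernels and tools):

# current receive-buffer limits
sysctl net.core.rmem_default net.core.rmem_max

# cumulative UDP drops caused by a full receive buffer
netstat -su | grep -i errors
nstat -az UdpRcvbufErrors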
On Mar 23, 2024, at 11:47 AM, Alex Balashov via sr-users <sr-users@lists.kamailio.org> wrote:
On Mar 22, 2024, at 10:21 PM, Ovidiu Sas <osas@voipembedded.com> wrote:
The default udp queue length is not enough for high cps.
That's interesting. I'm not necessarily saying you're wrong, but I'd be curious to know more about what informs this theory.
On Mar 23, 2024, at 5:14 PM, Ovidiu Sas osas@voipembedded.com wrote:
But if you are dealing with network jitter and several clients pumping traffic at different rates, for a short period of time all the workers will be busy while new SIP messages will pile up and the default queue will not be able to hold all of them.
Absolutely. I'm just not convinced that you want to hold them in most situations.
An increased UDP queue will hold all the SIP messages, giving the workers time to consume them. The idea is that the UDP queue is empty almost all of the time, and only when there’s a short, temporary traffic burst does it come to the rescue.
If the UDP queue is full most of the time, then increasing it obviously won’t help. The increased UDP queue works only for coping with short traffic bursts.
100% agree, if short, relatively infrequent stochastic bursts are the specific problem to be solved.
It's just that OP is testing with SIPp, which doesn't send that kind of burst. It sets up calls at a pretty constant rate, distributed uniformly throughout the temporal domain. So, the scenario being simulated there is indeed a UDP queue that is full most of the time, under a +/- constant base load.
My argument was implicitly tailored to the idea that if your base load is excessive, a bigger queue won't help. I definitely agree with you that increasing rmax can take the edge off some ephemeral bursts.
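To make that concrete, a SIPp run with the built-in UAC scenario generates calls at a fixed rate rather than in bursts; a rough sketch (rate, call count, and target are placeholders, not the actual test parameters used here):

# 500 new calls per second (-r calls per -rp milliseconds), 100000 calls in total,
# each call held about 10 seconds before the BYE, sent over UDP to the proxy under test
sipp -sn uac -r 500 -rp 1000 -d 10000 -m 100000 <kamailio-ip>:5060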
-- Alex