I haven't converted this test to native configuration yet; that is my next step, to rule out any general issues. I've never experienced performance issues like this before, and since this was my first test using KEMI, I assumed it was related in some fashion. I've read through the performance-optimization posts and documents I've been able to find.
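For reference, the native equivalent of the test logic should only be a few lines of routing script; a minimal sketch of what I have in mind (the reply code is illustrative, and the sl module is assumed to be loaded):

    request_route {
        # stateless reply to INVITEs; 200/"OK" is illustrative
        # requires: loadmodule "sl.so"
        if ($rm == "INVITE") {
            sl_send_reply("200", "OK");
            exit;
        }
    }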
-dan
________________________________
From: Henning Westerholt <hw@gilawa.com>
Sent: Sunday, January 12, 2025 03:52
To: Kamailio (SER) - Users Mailing List <sr-users@lists.kamailio.org>
Cc: Daniel W. Graham <dan@cmsinter.net>
Subject: RE: Performance issues with KEMI
Hello Daniel,
I am wondering if your issues are specific to KEMI, e.g. whether you've also tried the same script logic with a native cfg and observed similar numbers. If it's a simple script, you can maybe just repeat the same test. There were benchmarks done for KEMI some years ago which showed only a small performance difference.
Or do you have generic performance issues that you just happened to observe in your test with KEMI? In that case it would be more of a generic performance-optimization topic.
Cheers,
Henning
--
Henning Westerholt – https://skalatan.de/blog/
Kamailio services – https://gilawa.com/
________________________________
From: Daniel W. Graham via sr-users <sr-users@lists.kamailio.org>
Sent: Sunday, 12 January 2025 08:21
To: sr-users@lists.kamailio.org
Cc: Daniel W. Graham <dan@cmsinter.net>
Subject: [SR-Users] Performance issues with KEMI
Testing out KEMI functionality and running into performance issues. If I exceed 150 calls per second, the network receive queue grows, Kamailio is unable to keep up with the requests, and they begin dropping.
The KEMI test script just sends a stateless reply to INVITEs.
I am using the app_python3s module.
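For reference, the test script is essentially along these lines; a minimal sketch assuming the app_python3s conventions (module-level ksr_request_route() entry point), with an illustrative reply code:

    import KSR as KSR

    # kamailio.cfg side (sketch, path illustrative):
    #   loadmodule "app_python3s.so"
    #   modparam("app_python3s", "load", "/path/to/kamailio.py")
    #   cfgengine "python"

    # entry point invoked by app_python3s for each SIP request
    def ksr_request_route(msg):
        if KSR.is_INVITE():
            # stateless reply; 200/"OK" is illustrative
            KSR.sl.send_reply(200, "OK")
        return 1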
I've played with Kamailio child processes and memory allocations, but there was no impact. I've also attempted some buffer/memory tweaking at the OS level, again with no impact. Increasing CPU cores and even running the test on bare metal gives the same result.
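Concretely, these are the kinds of settings I experimented with (all values illustrative):

    # kamailio.cfg: number of UDP worker processes
    children=16

    # startup flags: shared (-m) and per-process private (-M) memory, in MB
    kamailio -m 512 -M 16

    # OS level: larger UDP receive buffers
    sysctl -w net.core.rmem_max=26214400
    sysctl -w net.core.rmem_default=26214400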
Example of the receive queue at 150 calls per second:

    Netid  State   Recv-Q  Send-Q  Local Address:Port
    udp    UNCONN  337280  0       x.x.x.x:5060
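(The snapshot above is ss output; something like "ss -lun" shows the Recv-Q for the UDP listening socket.)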
Just wondering if anyone has experienced similar issues or has an example of the performance they are seeing before I continue down this path.
Thanks,
- dan