Cody,
Kamailio receives TLS/TCP traffic through the normal kernel TCP stack
handoff, so you may be able to do some tuning with sysctl. One alternative
suggestion that could help spread load across the actual Kamailio TCP
workers is to fire up additional workers on alternate ports, but even that
would not ensure that any given worker runs on a unique CPU core. That
said, the kernel should already do an efficient job in the handoff between
the TCP stack and Kamailio.
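For example, something roughly like this in kamailio.cfg (the worker count,
addresses and ports are only illustrative placeholders, adjust to your setup
and make sure the tls module is loaded):

    # number of TCP/TLS worker processes (illustrative value)
    tcp_children=8

    # additional TLS listening sockets on alternate ports, so the
    # edge proxies can be pointed at different ports
    enable_tls=yes
    listen=tls:10.0.0.10:5061
    listen=tls:10.0.0.10:5062

Whether that actually helps depends on how the connections end up being
distributed over the workers, so it is worth testing in your environment.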
Is there a reason you feel there is an efficiency issue? Knowing more about
your current limitations would make it easier to provide suggestions for
your specific situation.
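Regarding the async_task_route() idea in your message below, a minimal
sketch (untested; the route name, worker count and REGISTER filter are just
illustrative assumptions) could look like:

    # core: number of asynchronous task worker processes (illustrative)
    async_workers=4

    loadmodule "tm.so"
    loadmodule "async.so"

    request_route {
        if ($rm == "REGISTER") {
            # continue processing in one of the async task workers
            async_task_route("HANDLE_REG");
            exit;
        }
        # ... rest of normal routing ...
    }

    route[HANDLE_REG] {
        # CPU-heavy work (e.g. the crypto you mention) runs here,
        # in an async worker process
        # ... authentication / registration handling ...
    }

Note that this spreads the processing across the async workers rather than
changing how the TCP workers themselves pick up connections.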
On Fri, Feb 23, 2018 at 10:01 AM Cody Herzog <CHerzog(a)intouchhealth.com>
wrote:
Hello.
I'm curious as to how SIP messages received over TLS are divided amongst
the available TCP workers.
Based on some searching, I was hoping and expecting that the messages
would be load balanced amongst the TCP workers at the level of individual
SIP messages.
However, with my configuration, it seems that the work is load balanced at
the TCP connection level.
I will now elaborate on my basic architecture.
I have several edge proxies to which the clients connect. Those proxies do
not handle registration or do much of anything else; they just forward
traffic to a central registrar server.
On the central registrar server, it seems that a particular TCP worker is
assigned to handle all the traffic arriving from a particular edge proxy.
As long as the TCP connection between the servers remains open, it seems
that the particular worker is locked into handling all traffic on that
connection.
Does that sound like expected behavior?
That behavior is not ideal for me, because I would like to evenly divide
the SIP message load across all the TCP workers on my central registrar.
Perhaps I'm doing something bad in my config file that is preventing the
load balancing from working at the SIP message level.
If that behavior is expected, is there anything I can do to achieve more
even load balancing?
Perhaps the ASYNC module might work for my needs. It seems like I could
use async_task_route() to divide certain messages evenly amongst the async
workers.
Does that sound reasonable?
In my use case, certain messages require a lot of CPU time to perform
crypto calculations, and I want those calculations to be distributed evenly
across all CPU cores.
Thanks.
-Cody
_______________________________________________
Kamailio (SER) - Users Mailing List
sr-users(a)lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users
--
Sent from Gmail Mobile