Hello guys,
I have a setup where there are 2 Kamailios in active-active, connected to
the same DB (no registration yet). They both load balance requests across
several FreeSWITCHes. My question is: is the load balancing algorithm
independent on the 2 servers, or do they act as 1 load balancer, in the
sense that the next request goes to the next FS in the list regardless of
which Kamailio sent the last request? I.e.
Kam1 -> fs1
Kam2 -> fs2
Kam1 -> fs3
Kam1 -> fs1
Kam2 -> fs2
And so on?
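For context, the kind of setup in question looks roughly like this (a minimal sketch; the set id, algorithm and addresses are assumptions for illustration):

```
loadmodule "dispatcher.so"
modparam("dispatcher", "list_file", "/etc/kamailio/dispatcher.list")

request_route {
    # algorithm "4" = round-robin over destination set 1
    if (!ds_select_dst("1", "4")) {
        sl_send_reply("503", "No gateway available");
        exit;
    }
    # each instance keeps its round-robin counter in its own
    # shared memory -- whether the two instances coordinate
    # is exactly the question above
    t_relay();
}
```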
Thanks for your help
David
Thanks.
Regarding UDP, that all makes sense.
For my short-term needs, to minimize risk and infrastructure changes, it looks like the ASYNC module will work.
Here's the basic outline of what I'm doing:
--------
async_workers=8

route {
    route(CHECK_IS_EXPENSIVE_OPERATION);
    if ($var(isExpensiveOperation)) {
        async_task_route("COMMON");
    } else {
        route(COMMON);
    }
}

route[COMMON] {
    # Do all my normal routing logic.
}
--------
I detect at the very top of the main route whether the operation will be expensive.
If so, I process the message with an async worker.
If not, I directly process the message on the TCP worker.
The rest of the routing logic remains unchanged.
The only difference is that I execute the expensive operations using the async workers.
It's not the prettiest thing in the world, but it seems like an OK short-term fix to relieve pressure.
I will revisit the bigger architectural ideas in a later phase.
Thanks again for all your help and good ideas.
Happy Friday.
-Cody
Thanks very much for the quick replies, Alex and Brandon.
The main reason I'm hitting a bottleneck is because my architecture is not optimal.
I have a number of edge proxies which communicate with all my clients.
The clients are usually distributed pretty evenly across all the proxies.
On those proxies, client TCP connections are distributed pretty evenly across the TCP workers, so that's all fine.
The problem occurs when the edge proxies communicate with the central registrar.
When that happens, the SIP messages from a very large number of clients are all multiplexed onto a single TCP socket connection between the proxy and the registrar.
That narrowing results in the registrar worker processes not being utilized efficiently.
For example, say I have 3 edge proxies, and my registrar has 8 cores and 8 worker processes.
I want to spread the message processing across all 8 workers, but I'm only able to utilize 3, because all the messages from a given edge proxy are being processed by a single TCP worker on the registrar.
The long-term solution is to change my architecture.
I should not be doing all the expensive work on a single central registrar.
I'm planning to move to a mesh architecture, where all servers are the same, and the registration processing is divided amongst all the servers.
That design makes sense, but it's more complex and will require more information sharing amongst the servers.
I'm looking for a short-term improvement that will give me some breathing room, which led me to looking at the ASYNC module.
>...one alternate suggestion that could help spread load on actual Kamailio TCP workers is by firing up additional workers on alternate ports.
That makes sense. I could have my central registrar listen on several different ports, and could perhaps use the dispatcher module on my edge proxies to try and evenly divide the traffic across those ports. I will look into that.
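A rough sketch of that idea (hypothetical addresses and ports, not tested):

```
# registrar side: several TCP listeners, so each edge proxy can
# open distinct connections that land on different sockets
listen=tcp:203.0.113.10:5060
listen=tcp:203.0.113.10:5062
listen=tcp:203.0.113.10:5064

# edge-proxy side: dispatcher set 1 in /etc/kamailio/dispatcher.list
#   1 sip:203.0.113.10:5060;transport=tcp
#   1 sip:203.0.113.10:5062;transport=tcp
#   1 sip:203.0.113.10:5064;transport=tcp
# then pick a destination round-robin before relaying:
if (ds_select_dst("1", "4")) {
    t_relay();
}
```

Since each proxy-to-port pair gets its own TCP connection, the registrar should be able to spread those connections over more of its TCP workers.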
Thanks again.
-Cody
Hello.
I'm curious as to how SIP messages received over TLS are divided amongst the available TCP workers.
Based on some searching, I was hoping and expecting that the messages would be load balanced amongst the TCP workers at the level of individual SIP messages.
However, with my configuration, it seems that the work is load balanced at the TCP connection level.
I will now elaborate on my basic architecture.
I have several edge proxies to which the clients connect. Those proxies do not handle registration, and do not do much of anything else; they just forward traffic to a central registrar server.
On the central registrar server, it seems that a particular TCP worker is assigned to handle all the traffic arriving from a particular edge proxy. As long as the TCP connection between the servers remains open, it seems that the particular worker is locked into handling all traffic on that connection.
Does that sound like expected behavior?
That behavior is not ideal for me, because I would like to evenly divide the SIP message load across all the TCP workers on my central registrar.
Perhaps I'm doing something wrong in my config file that is preventing the load balancing from working at the SIP message level.
If that behavior is expected, is there anything I can do to achieve more even load balancing?
Perhaps the ASYNC module might work for my needs. It seems like I could use async_task_route() to divide certain messages evenly amongst the async workers.
Does that sound reasonable?
In my use case, certain messages require a lot of CPU time to perform crypto calculations, and I want those calculations to be distributed evenly across all CPU cores.
Thanks.
-Cody
Hello,
Is there a study comparing the performance (quality and CPU/RAM consumption)
of codec transcoding between rtpengine and Asterisk/FreeSWITCH?
Is rtpengine more efficient?
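For anyone wanting to benchmark the rtpengine side: transcoding is requested per call via flags passed from the Kamailio rtpengine module (a minimal sketch; the codec choice is an example, and the flag syntax should be checked against the rtpengine module documentation for your version):

```
# hide opus from the offer and have rtpengine transcode it to PCMU
rtpengine_manage("codec-mask-opus codec-transcode-PCMU");
```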
Regards
Abdoul
Hello,
(cross-posting because I want both developers and users to
express their opinions).
We were recently in a (lucky) situation to have two new modules
submissions targeting more or less the same purpose: allowing to use
Kamailio with a Redis backend via database API.
One was submitted by Andreas Granig (Sipwise) and already merged with
the name db_redis, because it was designed from the beginning as a
generic DB connector, so the module can be used with auth_db, acc,
usrloc, ...
* https://www.kamailio.org/docs/modules/devel/modules/db_redis.html
The second one was submitted by Surendra Tiwari (Plivo), initially
having a naming conflict with db_redis, but renamed to db_redisusrloc,
see the pull request at:
* https://github.com/kamailio/kamailio/pull/1446
Now, this email is about deciding the way to go forward with the second
module.
It was designed to be used only for usrloc in the first phase, with many
usrloc-specific attributes hard coded inside db_redisusrloc. Surendra
said (in a private chat) that the long-term plan is to make it work for
other modules. Anyhow, at this moment it is very tied to usrloc, hence
the name of the module.
Given that the backend (Redis) is not an SQL engine, mapping over
Kamailio's DB API needs some schema definition (see the readme of
db_redis) in order to make it generic and work for all our modules that
use a DB backend.
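To make the schema point concrete, db_redis is wired up roughly like this (a sketch based on the module readme; the paths and key definitions are abbreviated/illustrative):

```
loadmodule "db_redis.so"
# table schemas (column definitions) shipped with the module
modparam("db_redis", "schema_path", "/usr/share/kamailio/db_redis/kamailio")
# mapping of table columns to redis key structures
modparam("db_redis", "keys", "location=entry:ruid&usrdom:username,domain")

# consumer modules then just get a redis:// db_url
modparam("usrloc", "db_url", "redis://127.0.0.1:6379/8")
modparam("usrloc", "db_mode", 3)
```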
On the other hand, to squeeze the best out of the backend, especially in
no-SQL cases, having a dedicated DB connector module optimized for a
specific module might help to get more
performance/high-availability/scalability from the backend itself.
In this case, for example, the expires value for a contact record can be
set inside Redis, so the Kamailio module doesn't have to run a timer routine
to clean up (this doesn't mean db_redisusrloc does it right now; I didn't
have time for a proper review, I'm just giving an example). Surendra said
they have used it in production for a couple of months now and it is several
times faster than using usrloc with db_postgres (iirc, not db_mysql) for
db_mode=3 (database-only mode).
But of course, the other side of the coin with a dedicated DB connector
for a module is that it adds overhead to code maintenance (besides generic
updates due to external library changes, I expect that changing something
relevant in usrloc, like adding new columns, would require updates in
this module as well).
So, there are a few things I want feedback on:
1) How do you feel about moving from generic-only DB connectors to
also having some dedicated ones? This is more from a potential-for-confusion
point of view; as a general rule so far, we do not deny contributions if there
are other options for the same kind of feature (e.g., many lcr or nat
traversal options). As long as the contributor is willing to maintain
the code, we were fine.
2) I guess usrloc, presence and dialog would be the main modules that
would benefit from such dedicated connectors; in other cases it might
not be worth adding them. Am I missing any other module one would like
to squeeze more performance from with a dedicated connector?
3) Should we set a different naming policy for such modules, for
example using a *dbs_* prefix instead of *db_*, to better suggest that
it is a DataBase-Specific connector?
4) Andreas said he plans to do some performance testing of the usrloc module
with the two modules and see the differences. Anyone else who wants to
do it? It could be a good metric to see whether it is worth going one way or another.
5) Help reviewing the pull request, especially if you use Redis, is
appreciated. Personally I am very short on available time these days;
next week I plan to do new Kamailio stable releases, so the schedule is
not getting lighter on my side.
Cheers,
Daniel
--
Daniel-Constantin Mierla
www.twitter.com/miconda -- www.linkedin.com/in/miconda
Kamailio Advanced Training - March 5-7, 2018, Berlin - www.asipto.com
Kamailio World Conference - May 14-16, 2018 - www.kamailioworld.com
Is there a way through kamctl or kamcmd to check which dispatcher gateways are
currently inactive? "kamctl dispatcher show" doesn't seem to have that
info and I can't find anything in the docs.
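One thing worth checking (a sketch; the exact output and flag letters vary by version): the dispatcher RPC dump includes a per-destination FLAGS field that reflects the runtime state.

```
# dump the runtime state of all dispatcher destination sets
kamcmd dispatcher.list

# each destination is printed with a FLAGS field, e.g.:
#   URI: sip:10.0.0.21:5060
#   FLAGS: IP     <- inactive/probing; AP would mean active
```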
Thanks
Hello,
I'm running into odd behaviour in Kamailio 5.0.2 and hoping someone can
clarify it. I have a simple route that uses the textops functions subst and
remove_hf.
Other than that, there is nothing else done within the route. In some
cases, the remove_hf function returns false as there are no matching
headers.
Documentation in https://www.kamailio.org/wiki/cookbooks/5.0.x/core#return
seems to show that if there are no explicit return statements, the route
should return 1. However, this route appears to be returning -1.
My question is: if a function within a route returns a negative value, and
there are no explicit return statements, will the entire route return -1
(as it appears is happening), or should the route return 1 as the core
cookbook indicates?
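In case it helps while this is clarified: an explicit return makes the route's value independent of the last function's code (a minimal sketch modeled on the route described above; the header name is illustrative):

```
route[CLEANUP] {
    remove_hf("X-Custom-Header");   # returns a negative code if absent
    # force the return value so the caller does not see the
    # negative code of the last function call
    return(1);
}
```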
Thanks in advance,
Jeff
Hello!
I'm working on setting up our Kamailio instance to use TLS for inter-phone communication, and so far I've been able to configure the GUI and get it to work, which is great.
However, I'm trying to grab a packet capture and decode the TLS session to get the SIP ladder (so I can troubleshoot issues), and so far I've been unable to. I've been following the article at http://packetpushers.net/using-wireshark-to-decode-ssltls-packets/ and using the Kamailio self-signed cert, as I think that's the cert being used, with no success. The negotiated cipher suite is TLS_RSA_WITH_AES_256_CBC_SHA (0x0035), which doesn't use perfect forward secrecy, so it should be possible to decode the traffic.
Is there a way to decode the kamailio TLS encoded communications so I can troubleshoot issues that may come up?
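For reference, with a non-PFS RSA suite like the one above, Wireshark/tshark can decrypt the capture given the server's private key (a sketch; the IP, port and key path are examples, and this uses the older `ssl.keys_list` syntax — newer Wireshark versions configure this under Preferences > Protocols > TLS > RSA keys list instead):

```
tshark -r capture.pcap \
  -o "ssl.keys_list:10.0.0.5,5061,sip,/etc/kamailio/kamailio-key.pem" \
  -d "tcp.port==5061,ssl" \
  -V
```

Note that the file must be the unencrypted PEM private key matching the server certificate, and the capture must include the full TLS handshake (including the ClientKeyExchange), or decryption will fail.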
Thank you in advance,
Robert
Robert Remsik
Telecom
Desk Phone: 970 491 7120
Robert.Remsik(a)colostate.edu