With best wishes,
Unai Rodriguez
On 13 Dec 2022, 18:28 +0100, Henning Westerholt <hw(a)gilawa.com>, wrote:
Hello,
regarding limiting the number of gateway retries: you can use different failure routes
(e.g. jump from the first to the second and then stop), you can keep a counter, or you can
probably just rely on ds_next_dst() failing once it reaches the end of the available
gateways in the XAVP (I did not check the code).
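A minimal sketch of the counter approach, assuming an AVP named $avp(ds_retries) and a cap of 3 retries (both the name and the limit are illustrative, not from this thread; AVPs persist for the lifetime of the transaction, so the counter survives between failure-route invocations):

```
# Sketch: cap failover retries with an AVP counter (name and limit are assumptions)
failure_route[RTF_DISPATCH] {
    if (t_is_canceled()) {
        exit;
    }
    if (t_check_status("500")
            or (t_branch_timeout() and !t_branch_replied())) {
        # initialize the counter on the first failure
        if ($avp(ds_retries) == $null) {
            $avp(ds_retries) = 0;
        }
        # give up once the configured maximum is reached
        if ($avp(ds_retries) >= 3) {
            exit;
        }
        $avp(ds_retries) = $avp(ds_retries) + 1;
        if (ds_next_dst()) {
            t_on_failure("RTF_DISPATCH");
            route(RELAY);
            exit;
        }
    }
}
```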
Cheers,
Henning
--
Henning Westerholt – https://skalatan.de/blog/
Kamailio services – https://gilawa.com
From: Unai Rodriguez <unai(a)rodr.org>
Sent: Tuesday, December 13, 2022 5:32 PM
To: Kamailio (SER) - Users Mailing List <sr-users(a)lists.kamailio.org>; Henning
Westerholt <hw(a)gilawa.com>
Subject: RE: [SR-Users] How to have Kamailio retry backends
Hi Henning/All,
After some digging we realized the system was already retrying, thanks to this block in
kamailio.cfg:
# Try next destinations in failure route
failure_route[RTF_DISPATCH] {
    if (t_is_canceled()) {
        exit;
    }
    # next DST - only for 500 or local timeout
    if (t_check_status("500")
            or (t_branch_timeout() and !t_branch_replied())) {
        if (ds_next_dst()) {
            t_on_failure("RTF_DISPATCH");
            route(RELAY);
            exit;
        }
    }
}
How can we control the maximum number of retries? It seems to be infinite at the moment.
Or does !t_branch_replied() mean that each backend can only reply once?
Thank you
With best wishes,
Unai Rodriguez
On 6 Dec 2022, 10:04 +0100, Henning Westerholt <hw(a)gilawa.com>, wrote:
Hello,
you can implement this by using a failure_route. There is an example of how to do it in
the dispatcher module documentation.
Cheers,
Henning
--
Henning Westerholt – https://skalatan.de/blog/
Kamailio services – https://gilawa.com
From: sr-users <sr-users-bounces(a)lists.kamailio.org> On Behalf Of Unai Rodriguez
Sent: Saturday, December 3, 2022 5:16 PM
To: sr-users(a)lists.kamailio.org
Subject: [SR-Users] How to have Kamailio retry backends
Dear List,
We’re using Kamailio to load balance MRCP requests to multiple backend groups with a
configuration as follows:
# kamailio.cfg
...
route[DISPATCH] {
    if ($ua == "mrcp_backend_1") {
        if (!ds_select_dst("1", "4")) {
            send_reply("404", "No destination");
            exit;
        }
    }
    if ($ua == "mrcp_backend_2") {
        if (!ds_select_dst("2", "4")) {
            send_reply("404", "No destination");
            exit;
        }
    }
    xlog("L_DBG", "--- SCRIPT: going to <$ru> via <$du>\n");
    t_on_failure("RTF_DISPATCH");
    route(RELAY);
    exit;
}
...
# dispatcher.list
1 sip:mrcp01.server.int:8060;transport=tcp
1 sip:mrcp02.server.int:8060;transport=tcp
2 sip:mrcp03.server.int:8060;transport=tcp
2 sip:mrcp04.server.int:8060;transport=tcp
With this configuration, Kamailio load balances the initial SIP INVITE among the MRCP
servers. After the INVITE, the service communicates directly to the MRCP servers via SIP
(for hanging up the call), MRCPv2 (for sending speech control messages), and RTP (for
sending audio).
We would like to implement a configurable number of retries, so that if a particular
backend times out, Kamailio would retry X times to other backend(s). In short, something
equivalent to HAProxy’s retries option
(https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#4-retries), but for
Kamailio. This probably implies having Kamailio always as part of our communication (not
just load balancing the initial SIP INVITE).
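From reading the dispatcher module docs, active SIP OPTIONS probing also looks relevant, so that unresponsive backends get taken out of rotation between retries; a sketch with assumed values (the ping URI and thresholds are illustrative, not from our setup):

```
# Sketch: active probing of dispatcher destinations (values are assumptions)
modparam("dispatcher", "ds_ping_from", "sip:pinger@kamailio.local")
modparam("dispatcher", "ds_ping_interval", 10)    # send OPTIONS every 10 seconds
modparam("dispatcher", "ds_probing_threshold", 3) # mark inactive after 3 failed probes
modparam("dispatcher", "ds_probing_mode", 1)      # probe all destinations
```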
I haven’t been able to find much information about this, could someone provide some
pointers?
Thank you so much
With best wishes,
Unai Rodriguez
__________________________________________________________
Kamailio - Users Mailing List - Non Commercial Discussions
To unsubscribe send an email to sr-users-leave(a)lists.kamailio.org
Important: keep the mailing list in the recipients, do not reply only to the sender!
Edit mailing list options or unsubscribe: