[SR-Users] Problem with worst case scenario of accessing 15 gateways over the DISPATCHER module - MAX_BRANCHES is reached

Henning Westerholt henning.westerholt at 1und1.de
Mon Jul 4 11:07:56 CEST 2011


On Sunday 03 July 2011, Klaus Feichtinger wrote:
>  I have a problem in a scenario where I use the dispatcher module for
> (round-robin based) distribution of incoming calls to 15 (!) different
> gateways. In the idle case (= all gateways are up and running, or only a
> couple of them do not accept calls) my solution works fine. The solution
> is very simple and based on the example in the dispatcher module's
> README. However, when e.g. 12 of the 15 gateways are out of service (in
> detail: rejecting the call with "480 Temporarily Unavailable") and only
> the 13th would be okay, I cannot reach that gateway. This is because the
> TM module interrupts the distribution with an error message indicating
> that the maximum number of branches per transaction has been reached
> (ERROR:tm:add_uac: maximum number of branches exceeded)...
> 
>  My first idea for solving this problem was to change the definition of
> the "max_branches" constant in the config.h file. However, I prefer
> using the precompiled Debian packages, so that solution is not optimal.
> I have not found an easy way to decrease the destination_set... Does
> anybody know an alternative solution / workaround?

Hi Klaus,

did I understand correctly that you are trying up to 15 gateways 
sequentially in your setup? E.g., 

- try GW 1
- on error try GW 2
- ...
- up to GW 15?

Normally I'd expect that you restrict the number of tries to something 
smaller, so that you don't run into this max_branches issue and also don't 
subject the user to such a long call setup time. I would also think that you 
would notice via some other alarms that you are running at only a few percent 
of your capacity, and just fix the gateways?

With regard to your question, you could make the max_branches number 
configurable with a patch; at the moment it is just a compile-time define.
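For reference, the limit lives as a define in the sip-router sources; the 
sketch below shows the idea, though the exact macro name and default value 
may differ between versions:

```c
/* config.h (sip-router source tree) -- compile-time cap on branches
 * per transaction; tm's add_uac() fails once it is reached.
 * Macro name and default value are from memory and may differ. */
#define MAX_BRANCHES 12
```

Raising the value and recompiling lifts the limit, which is exactly why the 
precompiled Debian packages cannot be adjusted without a rebuild.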

Best regards,

Henning
