[Users] Serial forking

Tim Klein tkpublic at timklein.fastmail.fm
Wed Nov 30 18:49:55 CET 2005


Bogdan said:
>I just committed proper serial forking support into core - it was
>migrated from the LCR module. I mean proper, since it has q value
>support and can be used by any module without any inter-module
>dependencies.
>
>The idea behind it is to allow all modules that perform parallel
>forking to also do serial forking - exec, enum, registrar, etc.

That's good news!  Thank you!

But I have some questions to help me understand the new functions. 
Please see below...

>There are two new script functions :
>     *serialize_branches(n)* : it inherits the functionality of
>load_contacts() from LCR; it takes all parallel branches and converts
>them into AVPs for serial forking; the numerical parameter 'n' says
>whether any previous AVPs should be removed (if non-0) or not (if 0).
>Returns true if there is no error (even if no serialization happened).
>     *next_branches()* : it inherits the functionality of
>next_contacts() from LCR; it gets (based on q value) the next
>contact(s) to be used in sequential forking. Returns true only if a
>new contact was retrieved to be used.
>
>The AVP containing the branches is accessible only via an alias - its
>ID is not configurable or visible; the alias (automatically exported
>by core) is "serial_branch" - it is visible from any module that
>uses the core AVP aliasing system.
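
For reference, my rough understanding of the intended usage (borrowed
from how I used load_contacts()/next_contacts() with LCR - please
correct me if this is wrong) is something like:

       {
          .....
          lookup("location");
          serialize_branches(1);   # store branches as AVPs
          next_branches();         # load the highest-q contact(s)
          t_on_failure("1");
          t_relay();
       }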

My questions:

Let's say there are 4 contacts registered.  Two of the contacts ("A"
and "B") have a q value of 1.0.  The other two contacts ("C" and "D")
have a q value of 0.5.

Using the following routing script, which contacts will be tried in 
the main route block, and which will be tried in the failure_route 
block?

       modparam("registrar", "append_branches", 1)
       .....
       {
          .....
          lookup("location");
          serialize_branches(1);
          t_on_failure("1");
          t_relay();
       }

       failure_route[1] {
          if (next_branches()) {
              t_relay();
          }
       }
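
A related question: to try the remaining lower-q contacts after the
first set fails, do I need to re-arm the failure route inside
failure_route[1], something like the following?  (That is just my
guess, based on how next_contacts() was used with LCR.)

       failure_route[1] {
          if (next_branches()) {
              t_on_failure("1");   # re-arm so the next set is tried too
              t_relay();
          }
       }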


Thank you!

Tim




