[OpenSER-Devel] SF.net SVN: openser: [4242] trunk

Dan Pascu dan at ag-projects.com
Mon May 26 12:43:02 CEST 2008


On Sunday 25 May 2008, Juha Heinanen wrote:
>   347    module keeps its internal state about endpoints that need
>   348    keepalive, state that is built while messages are processed by
>   349    the proxy and thus it doesn't need to transfer any information
>   350    from the usrloc module, which should also improve its
>   351    efficiency.
>
> and
>
> 450 1.4.5. keepalive_state_file (string)
>   451
>   452    Specifies a filename where information about the NAT endpoints
>   453    and the conditions for which they are being kept alive is
>   454    saved when OpenSER exits.
>
> this may not be a good idea if the entry proxy (load balancer) has a
> standby proxy that becomes active if the other one dies, loses network
> connectivity or whatever.  in that kind of situation, keeping the
> keepalive info in a clustered database may be a better idea.

You may think so, but on closer analysis it is not. Writing to a 
database is very expensive. Tests show that mysql can insert about 500 
rows per second (these numbers are for a very simple table with 2 
columns and no indexes, so depending on the table complexity and the 
number of indexes the numbers may be even lower). Of course you can 
improve that number by using various techniques, like committing after a 
bunch of rows or using multi-row inserts, but that ties the 
implementation to mysql-specific optimizations and raw queries.
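To make the batching point concrete, here is a minimal sketch using Python's stdlib sqlite3 as a stand-in for mysql (the table name and columns are made up for illustration; this is not the actual OpenSER db layer). It contrasts committing once per row with committing once per batch, which is exactly the kind of optimization that would have to be layered on top of the db API:

```python
# Sketch only: sqlite3 stands in for mysql; the "keepalive" table
# and its columns are hypothetical.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keepalive (aor TEXT, expires INTEGER)")

rows = [("user%d@example.com" % i, 60) for i in range(5000)]

# One commit per row: every commit is a durability point, which is
# what limits the sustainable insert rate on a real disk-backed db.
start = time.perf_counter()
for aor, expires in rows:
    conn.execute("INSERT INTO keepalive VALUES (?, ?)", (aor, expires))
    conn.commit()
per_row = time.perf_counter() - start

conn.execute("DELETE FROM keepalive")
conn.commit()

# Same rows, one transaction: a single commit for the whole batch.
start = time.perf_counter()
conn.executemany("INSERT INTO keepalive VALUES (?, ?)", rows)
conn.commit()
batched = time.perf_counter() - start

print("per-row commits: %.3fs, single batch: %.3fs" % (per_row, batched))
```

The gap is much larger on a disk-backed database than in this in-memory toy, but either way the batching logic has to live in the module, tying it to a particular backend's behavior.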

Now consider that there are multiple components writing to the database 
(usrloc, dialog), so by adding another one we reduce the number of 
messages we can process per second even further. Being able to process 
only 250-500 requests per second on a load balancer does not make for a 
better idea IMO. Even if we assume we use some mysql-specific 
optimizations and get 1500 inserts per second, that is still far from 
the 10000 that an in-memory, TM-based implementation can do.

So IMO writing this information to the database in real time is not 
going to work, especially for a load balancer. If we write the 
information to the db only on shutdown, as we do now with 
keepalive_state_file, it will not be of much help in the case you 
mentioned: if the load balancer crashes and the slave takes over, the 
master will never have a chance to write that state anywhere.

The keepalive functionality was designed to be as unobtrusive as 
possible in terms of performance, hence the decision to keep its state 
in memory and only save it to disk on shutdown. Writing to the db on 
shutdown instead may not change that design, but I also do not see how 
it helps in the case you mentioned. Instead it adds a lot of unneeded 
extra complexity.

I do not use load balancers; instead I use a P2P design, where every 
proxy is equivalent to the others and each can act as a network entry 
point, so there is no dedicated load balancer (each proxy acts as one 
for the others) and there are no master-slave clusters. You may consider 
a similar approach, or you may use the old way of generating the 
keepalive messages on the proxies themselves rather than on the load 
balancer. In the examples I gave I recommended generating the keepalive 
messages on the border proxies because it is the most natural approach 
and it fits well with distributed P2P network designs, like the one I 
use. But it is by no means the only way to do it. You can generate the 
keepalive messages on the proxy that handles the user and let the load 
balancer simply forward them to the UA. That way you also do not need to 
use t_relay on the load balancer.
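For illustration, a rough sketch of that last arrangement as an OpenSER config fragment. This is a hypothetical example only: it assumes the nat_traversal module's nat_keepalive() and client_nat_test() functions, and the keepalive_interval parameter name and test flags should be checked against the module documentation:

```
# Sketch: run this on the proxy that handles the user's registrations,
# not on the load balancer. Parameter names and flag values below are
# assumptions for illustration.
loadmodule "nat_traversal.so"
modparam("nat_traversal", "keepalive_interval", 60)

route {
    if (is_method("REGISTER") && client_nat_test("3")) {
        # keepalives originate here; the load balancer merely
        # forwards them to the UA, so it needs no keepalive state
        # and no t_relay for them
        nat_keepalive();
    }
}
```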

-- 
Dan
