[Serusers] More on load balancing: LVS + SRVs, SER + Asterisk

Greger V. Teigre greger at teigre.com
Wed May 18 07:42:15 CEST 2005


Pietro Ravasio wrote:
> Anyway, at the moment "mixing Asterisk with SER" is not my major
> concern; I'm more worried about how to use a particular outbound
> proxy for calls to a NATed UA:
>
>>    I think that the dispatcher module of SER can be used
>> more cheaply than LVS, which will not give you call-id
>> persistence anyway.  AFAIU, your scenario is quite similar to
>> what I described in this thread:
>> http://lists.iptel.org/pipermail/serusers/2005-May/019118.html
>> Just a different implementation...
>
> Yes, I now agree with you on using this approach rather than the
> "multiple internal domains" one. The only things I'd like to change are:
> - storing the location on every message, not only on REGISTER, in case a
> SER dies and LVS UDP persistence sends the UA's next message to another
> SER (for instance, my Sipura phone sends a bunch of NOTIFY messages in
> order to keep the NAT open; if a SER dies and the phone is already
> authenticated, I want to update "its SER")

I'm not sure I like the idea of doing an SQL write for every single 
message. That would be a massive performance hit.  But the idea is good. 
What about doing LVS NAT?

> - using only one location table for all SERs, storing in a separate
> column the proxy (taken from dst_ip) that handled the message. This way
> I don't have to preprovision all SER instances in every ser.cfg routing
> logic. At this point:

But then you have to modify SER, which is back to where we started:
1. If you have to modify SER to do load balancing, what would be the best 
way to do it?
2. The answer is most likely: do it in a module like dispatcher, but add 
call-id persistence (see the sketch right after this list).
3. Layer-7 load balancing can only be done efficiently in the kernel.
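
As for point 2, here is a minimal sketch of what the dispatcher approach could
look like on the entry proxy, assuming SER's dispatcher module with its
hash-over-Call-ID algorithm (the module path, set id, list file and addresses
below are made up, and the usual sl/tm boilerplate is left out):

loadmodule "/usr/lib/ser/modules/dispatcher.so"
modparam("dispatcher", "list_file", "/etc/ser/dispatcher.list")
# dispatcher.list would contain something like:
#   1 sip:10.0.0.11:5060
#   1 sip:10.0.0.12:5060

route {
    # pick a destination from set 1; algorithm "0" = hash over Call-ID,
    # so all messages of one dialog go to the same SER instance
    if (!ds_select_dst("1", "0")) {
        sl_send_reply("500", "No destination available");
        break;
    };
    t_relay();
}

The hash gives you call-id affinity without keeping any state, but it knows
nothing about a destination that has died, which is why the kernel/LVS angle
keeps coming back.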

So why not focus your attention on creating call-id persistence in LVS? ;-)

> .) if I use a B2B user agent in order to handle NAT (for instance
> Asterisk), I'd send the INVITE to my B2B UA
> (t_relay_to_udp("b2bua_lvs_virtual_ip")) with the Contact: field
> rewritten to the IP address of the (outbound) SER the called UA has
> registered with or sent its latest message to (taken from the DB)
>
> ..) otherwise, in a "pure SER" environment,
> - I'd send the INVITE directly to the SER proxy whose IP:port is taken
> from the DB, with the same logic as above

Yes, but how do you know that a server has just gone down while you still 
haven't updated the IP:port of the UA?  I assume you will use t_replicate and 
save_memory to make sure that all servers have the updated registration info 
in memory? (DB write-through works fine, but the other SERs will not load the 
new location upon receiving an INVITE.)  You would probably have to do 
t_replicate for EVERY message to keep the IP:port of the UA updated, and then 
you are into a massive overhead of SIP messages generated by t_replicate.
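
Roughly, the replication pattern I'm referring to would look like this (a
sketch only: the peer addresses are made up and the standard registrar/tm
setup is assumed):

# on the SER receiving the REGISTER directly from the UA
if (method == "REGISTER") {
    save("location");                   # reply and save the binding
    t_replicate("10.0.0.12", "5060");   # copy the REGISTER to the peer SER
    break;
};

# on the peer, recognizing replicated REGISTERs by their source IP
if (method == "REGISTER" && src_ip == 10.0.0.11) {
    save_memory("location");    # update the in-memory cache only, no DB write
    break;
};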

> Using only one location table could be useful in order to create
> flexible routing logic: if the server taken from the DB is not online
> anymore, send the message to another one (randomly chosen), hoping that
> the UAS is not behind NAT. This logic could be inserted in B2BUA dialing
> plans or in ser.cfg.

I don't like the word "hoping"...

>> If it works, it's a workaround. No better or worse than other
>> workarounds... g-)
>
> At the moment I'm starting to think that any technique that tries to bind
> a particular outbound SER to all calls for a set of user agents is going
> to be a workaround... Am I wrong? :-/

No, I don't think so.  That is trying to do something at the application 
level that people figured out long ago is better done at lower levels... 
In fact, I believe the simplest solution would be to make sure that LVS can 
select ANY SER for load balancing and use LVS NAT (which is quite scalable 
and is the way Cisco does it).  The simplest way to modify SER for this 
would be to make sure that all lookup("location") calls actually make an SQL 
read.  If each SER instance flushes each save to the DB and a shared SQL 
cluster is used as the backend, you are good to go.
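
A sketch of the write-through side (the db_url is made up; if I remember the
usrloc parameters right, db_mode 1 is write-through, i.e. every save() is
flushed to the DB immediately):

modparam("usrloc", "db_url", "mysql://ser:secret@sqlcluster/ser")
modparam("usrloc", "db_mode", 1)   # write-through: every save() hits the DB

route {
    if (method == "REGISTER") {
        save("location");          # flushed straight to the shared SQL cluster
        break;
    };
    # lookup() below still answers from the local in-memory cache;
    # making it do an SQL read per request is exactly the modification
    # I'm talking about above
    if (!lookup("location")) {
        sl_send_reply("404", "Not Found");
        break;
    };
    t_relay();
}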

The other way is to implement call-id persistence in IPVS (LVS' load 
balancer).  I like this better, because you do it in kernel mode and don't 
add a whole lot of new DB reads...

g-) 



