[Serusers] request for comments

Greg Fausak greg at addabrand.com
Thu Jul 1 05:37:53 CEST 2004


On Jun 30, 2004, at 10:04 PM, Zeus Ng wrote:

>
>> More food :-)
>>
>> We distribute our SER boxes.  We did it by:
>>
>> 1) using the domain name as the login id to the postgres database
>> 2) slicing each table based on domain name, that is, instead of
>>       subscriber I have a view called av_subscriber which only shows
>>       records from the current domain.
>> 3) Each ser proxy serves a domain.
>>
>
> Personally, I don't treat this as a distributed proxy but as a
> centralized location server. Nevertheless, it serves what you want.
> The use of views is good. Hope that MySQL will have this feature in
> their next major release.
>

Well, I guess... each proxy server is at a different IP address.
That is distributed, isn't it?

Views are essential for multi-domain setups.  Using views I
can give customers access to the database.  Why wait for
MySQL? Postgres is a much better database engine :-)
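
For example, each per-domain proxy just gets its own postgres login and
is pointed at the per-domain views instead of the raw tables.  Roughly
like this (names, credentials and parameters are placeholders, and the
exact modparams depend on your ser version, so treat it as a sketch):

  # per-domain proxy for example.com: connect as that domain's own
  # postgres user, which only sees the av_* views for the domain
  modparam("auth_db", "db_url", "postgres://example.com:secret@dbhost/ser")
  modparam("usrloc",  "db_url", "postgres://example.com:secret@dbhost/ser")

  route {
      if (method == "REGISTER") {
          # authenticate against the per-domain subscriber view
          if (!www_authorize("example.com", "av_subscriber")) {
              www_challenge("example.com", "0");
              break;
          };
          save("location");
          break;
      };
  }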

>
>> Using this technique I can run dozens of SER proxies, each
>> with its own view of the database... however, if any one
>> domain gets too big I will have a problem (I haven't had that
>> problem yet; I'll let you know!).
>>
>> I have always thought that the way to solve the distribution
>> problem is to relax the in-memory caching of registrations.
>> Every time some UAC registers the database is updated, and
>> every time a call is to be delivered the location table is
>
> Well, it's true in a certain sense, but you can also send a REGISTER
> request with no "Contact" header to get the memory version of USRLOC.
> But I agree with you that using just the SQL DB should be an option
> in ser.
>
>> queried.  Using this technique will tax the database more, but it
>> would allow multiple SER proxies without the need for a sticky bit
>> to be set, that is, a round-robin or least-used SLB model.
>
> You still need to "sticky" back to the original front-end proxy for
> NAT traversal.
>
>>
>> At the current time, because of the caching, you can't have two SER
>> proxies serving the same REGISTERed customer base, because the
>> location table gets tromped.
>
> Not entirely true. If you can separate the proxy and location server
> functions, you can have multiple proxies for the same domain.

But currently you can't.  The registration server is the only one
that knows the location of the UA.
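
The closest you can get today is to have usrloc write through to the
database so the SQL location table is always current; lookups are still
answered from the registrar's own memory, which is exactly the
limitation above.  A rough, untested sketch (placeholder credentials):

  # on the registration server: keep the SQL location table in sync
  modparam("usrloc", "db_url",  "postgres://ser:secret@dbhost/ser")
  modparam("usrloc", "db_mode", 1)    # 1 = write-through

  route {
      if (method == "REGISTER") {
          save("location");    # updates memory and the DB row
          break;
      };
      lookup("location");      # still served from this box's memory
  }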
-g

>
>
>>
>> ---greg
>>
>> On Jun 30, 2004, at 1:06 PM, Jev wrote:
>>
>>>
>>>
>>> Zeus Ng wrote:
>>>> I may share some of my experience with a similar concept. Note
>>>> that it's not a solution, more of an idea-sharing exercise.
>>>
>>> Solutions are nice, but I really want to hear ideas :) Provides food
>>> for thought! :)
>>>
>>>> From the experiment, I found that there is a fundamental weakness
>>>> in ser (plus UDP plus NAT) when it comes to supporting a
>>>> distributed SIP environment. I'm not saying it can't be done.
>>>> However, to make ser more distributed, I think there is a need to
>>>> redesign the way ser handles user location.
>>>> The lab environment I have is 4 ser proxies and 2 ser location
>>>> servers. The 4 ser proxies were used as front ends for proxying
>>>> SIP requests. They have an SRV record in the DNS server for UAs
>>>> which understand this record. For UAs that don't understand SRV,
>>>> the DNS also replies with the proxies' IPs in a round-robin
>>>> fashion.
>>>> When a UA looks up the IP of the proxy, it gets one from either
>>>> the SRV record or the round-robin A record.
>>>> All REGISTER requests are forwarded from the proxies to the
>>>> primary location server. This is then replicated to the secondary
>>>> location server by t_replicate. So, the proxies have no knowledge
>>>> of UA location. Only the location servers know where to reach the
>>>> UA.
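
(For reference, a rough sketch of that REGISTER handling in ser.cfg on
the primary location server.  The secondary's address is a placeholder,
this is untested, the ordering of save() vs. t_replicate() is the part
to verify, and copies replicated back from the secondary would need
their own branch, e.g. keyed on src_ip, so they aren't replicated
again:)

  if (method == "REGISTER") {
      # store the binding and answer the UA ...
      save("location");
      # ... and push a copy to the secondary location server
      t_replicate("10.0.0.2", "5060");
      break;
  };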
>>>> For other SIP requests, I have tried two different methods to
>>>> handle them.
>>>> 1. Forward all requests to the location server and use
>>>> record_route to keep the proxy in the path:
>>>> This works great for maintaining the dialogue, as INVITE, reINVITE,
>>>> BYE and CANCEL will all proxy back to the location server, which
>>>> has the transaction state. OTOH, it is poor in NAT handling since
>>>> the location server is never directly contacted by the NAT device.
>>>> The nat ping will not keep a hole open in the NAT device. Also, it
>>>> has no performance improvement over one single "proxy+location"
>>>> server, as all requests end up on the location server.
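
(Roughly what method 1 could look like on a front-end proxy in ser.cfg.
The location server address is a placeholder and this is an untested
sketch:)

  route {
      if (method == "REGISTER") {
          # REGISTERs go straight to the location server
          t_relay_to_udp("10.0.0.10", "5060");
          break;
      };
      # everything else: stay in the dialogue path and hand the request
      # to the location server, which holds usrloc and the transaction
      # state
      record_route();
      if (!t_relay_to_udp("10.0.0.10", "5060")) {
          sl_reply_error();
      };
  }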
>>>
>>> So you had the backend location server contacting the UAC directly?
>>> I'm attempting to route the invite back through the originating
>>> front-end proxy that has the NAT session already established with
>>> the natted UAC. At the moment this only works because I am rewriting
>>> the (hardcoded) hostname in my config, but I'm looking at doing this
>>> dynamically so that any requests to the user location server will
>>> have their hostname rewritten to the previous hop.
>>>
>>>
>>>> 2. Proxy querying UA location via SQL:
>>>> In this method, I've written a small SQL script to be run by the
>>>> proxy via exec_dst to check the UA location from the location
>>>> server's DB backend. (I know that the DB is not the best place to
>>>> check location, but it is easier than writing a C program to query
>>>> the in-memory USRLOC on the location server.) This works best for
>>>> performance, as the proxies are sharing the requests as well as the
>>>> RTP proxying. However, it is relatively poor in NAT and transaction
>>>> handling, as the INVITE, BYE and CANCEL can be handled by different
>>>> proxies due to DNS resolution.
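
(A rough sketch of that kind of lookup on a front-end proxy.  I'm
assuming the exec module's exec_dset() is what "exec_dst" refers to,
and lookup_loc.sh is a hypothetical wrapper around a psql query on the
location table that prints the registered contact(s); untested:)

  # replace the destination set with whatever the location DB returns
  # for the current request; if the script finds nothing, reject
  if (!exec_dset("/usr/local/bin/lookup_loc.sh")) {
      sl_send_reply("404", "Not Found");
  };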
>>>
>>> I really want to keep my operations within SIP messaging only, and
>>> not have to rely on external mechanisms such as SQL queries. This
>>> maintains our flexibility to use any SIP-compliant device. It's a
>>> great idea though! :)
>>>
>>>> One way I see ser going distributed is to follow the idea of
>>>> squid, plus some enhancement. The group of proxies is put into a
>>>> partnership. When a proxy receives a REGISTER request, it checks
>>>> whether one of its partners has a record of that UA or not. If
>>>> yes, it forwards the request to the other proxy and forgets it.
>>>> Otherwise, it saves the location in its memory, does the NAT stuff
>>>> and becomes the authoritative proxy for that UA until the REGISTER
>>>> expires. When other requests come in, the proxy does the same
>>>> check with its partners again and forwards the request to the
>>>> authoritative proxy. This way, the authoritative proxy maintains
>>>> the nat ping, shares the RTP proxying and keeps track of
>>>> transactions.
>>>> When a new proxy comes in, we just need to tell ser that there is
>>>> a new member in the partnership. (Though we need to find a way to
>>>> tell ser about this without restarting, so that it maintains the
>>>> USRLOC in memory.) Instantly, this proxy can serve a new UA that
>>>> was never seen before or whose REGISTER has expired somewhere.
>>>
>>> This sounds like a cool idea. I'm not familiar with squid's proxy
>>> partnership model, but what you explain seems sound to me. Perhaps
>>> the ser proxies could use SRV records to learn about new 'partner'
>>> ser proxies? Or would this be a misapplication of the SRV feature?
>>>
>>>> The only thing I haven't figured out a solution for is how to pick
>>>> up UA location when one of the proxies fails. I don't like the way
>>>> t_replicate works, as it requires hard-coding the other proxies in
>>>> the script and needs a restart of ser for failover.
>>>
>>> If a proxy that is maintaining a NAT session with a UAC goes away,
>>> I see no way of passing off this session/location to another server
>>> except just waiting for the UAC to re-register.
>>>
>>>> Zeus
>>>>> -----Original Message-----
>>>>> From: serusers-bounces at lists.iptel.org
>>>>> [mailto:serusers-bounces at lists.iptel.org]
>>>>> On Behalf Of Jev
>>>>> Sent: Wednesday, 30 June 2004 8:53 AM
>>>>> To: Andrei Pelinescu-Onciul
>>>>> Cc: serusers at lists.iptel.org
>>>>> Subject: Re: [Serusers] request for comments
>>>>>
>>>>>
>>>>> Andrei Pelinescu-Onciul wrote:
>>>>> [snip]
>>>>>
>>>>>> So all the packets coming from the same ip will be sent to the
>>>>>> same front-end SER? (hashing after src ip)?
>>>>>
>>>>> Yes, using Cisco's "Sticky IP", which I admit I do not know about,
>>>>> but I'm told it will do this job properly.
>>>>>
>>>>>
>>>>>> Anyway there are some problems related to the nat traversal:
>>>>>>
>>>>>> 1. nat ping - nat ping needs to access usrloc, so that it would
>>>>>> know which users to ping. However, on your setup the front-end
>>>>>> servers have no idea about this, so they wouldn't be able to nat
>>>>>> ping. The "main" server (user accounts) knows who to ping, but
>>>>>> its ping won't traverse a symmetric nat (the nat will have an
>>>>>> open binding only with the outbound proxy, which would be one of
>>>>>> the load-balanced front-ends).
>>>>>
>>>>> I do realize this now, so I'm considering running a non-persistent
>>>>> usr_loc (no mysql back end) on all the front-end servers, and
>>>>> using t_replicate between all of them. I admit I have not verified
>>>>> if this is possible, so please forgive me if I'm talking nonsense
>>>>> here at this stage. My concern here, as I mentioned in my reply to
>>>>> Klaus's post, is that if I use t_replicate with all my front-end
>>>>> ser servers, will they all spit udp at a single natted client when
>>>>> the client has only one udp session with one front-end server?
>>>>>
>>>>>
>>>>>
>>>>>> 2. consider user A calling user B, where at least B is behind a
>>>>>> nat. The invite would reach the "main" server, which will look up
>>>>>> B and will try to send the message to B's address. Unfortunately
>>>>>> B's nat will drop the packet, because it has an open binding only
>>>>>> between B and the load-balanced ip. (this will work only if B has
>>>>>> a full cone nat, which is very very unlikely)
>>>>>
>>>>> I'm not sure of the solution here. I will need to make the call go
>>>>> via the front-end ser server that has the active udp session with
>>>>> the client. I'm going to sleep on this!
>>>>>
>>>>>
>>>>>
>>>>>> 3. assuming the above stuff will work somehow, you still have to
>>>>>> be very careful to open only one rtp proxy session (since each
>>>>>> front end has its own rtp proxy, you should make sure you use
>>>>>> force_rtp_proxy on only one of them, for the same call)
>>>>>
>>>>>
>>>>> I agree, and I realize that I'm creating some challenging issues
>>>>> for myself :)
>>>>> Thank you Andrei for your comments!
>>>>>
>>>>> -Jev
>>>>>
>>>>> _______________________________________________
>>>>> Serusers mailing list
>>>>> serusers at lists.iptel.org http://lists.iptel.org/mailman/listinfo/serusers
>>>>>
>>>
>>> _______________________________________________
>>> Serusers mailing list
>>> serusers at lists.iptel.org http://lists.iptel.org/mailman/listinfo/serusers
>>>
>>>
>> Greg Fausak
>> www.AddaBrand.com
>> (US) 469-546-1265
>>
>
>
>
Greg Fausak
www.AddaBrand.com
(US) 469-546-1265



