Currently, usrloc is replicated via t_replicate() using db_mode=writeback.

However, our lazy-load patch would eliminate the need for t_replicate(): because our MySQL servers run active-active, __all__ replication happens at the database layer rather than the SIP layer.

So in this situation, when a REGISTER message hits any SIP router, SER will process it as it does today (minus t_replicate()), and when it persists the user contact to MySQL, the database replicates it to the other DB servers.

Then, if a usrloc record is looked up on any other SIP router and no match is found in cache, the usrloc code queries MySQL for the record, which has already arrived there via database replication.
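The lookup fallback described above could be sketched roughly as follows. This is only an illustrative model, not the real usrloc API: contact_t, cache_get(), db_get(), and lookup() are hypothetical names, and the "database" is a mocked in-memory table standing in for the replicated MySQL cluster.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_CACHE 16

/* Hypothetical usrloc record: AOR plus registered contact. */
typedef struct {
    char aor[64];      /* address-of-record, e.g. "sip:alice@example.com" */
    char contact[64];  /* registered contact URI */
} contact_t;

static contact_t cache[MAX_CACHE];
static int cache_len = 0;
static int db_queries = 0;  /* counts round-trips to the (mocked) DB */

/* Mocked database table: stands in for the MySQL rows that the
   active-active cluster replicates between DB servers. */
static const contact_t db_rows[] = {
    { "sip:alice@example.com", "sip:alice@192.0.2.10" },
};

static const contact_t *cache_get(const char *aor)
{
    for (int i = 0; i < cache_len; i++)
        if (strcmp(cache[i].aor, aor) == 0)
            return &cache[i];
    return NULL;  /* cache miss */
}

static const contact_t *db_get(const char *aor)
{
    db_queries++;  /* one round-trip to MySQL */
    for (size_t i = 0; i < sizeof db_rows / sizeof db_rows[0]; i++)
        if (strcmp(db_rows[i].aor, aor) == 0)
            return &db_rows[i];
    return NULL;
}

/* Lazy-load lookup: try the cache first; on a miss, fall back to the
   database and cache the row so future lookups stay local. */
static const contact_t *lookup(const char *aor)
{
    const contact_t *c = cache_get(aor);
    if (c)
        return c;
    c = db_get(aor);
    if (c && cache_len < MAX_CACHE) {
        cache[cache_len] = *c;
        c = &cache[cache_len++];
    }
    return c;
}
```

The point of the sketch is the cost profile: the first lookup on a given router pays one DB query, and every later lookup of that AOR is served from cache with no DB round-trip.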

By doing this we get a "zero-delay" SER start time, regardless of the number of records in the subscriber table, and we also eliminate the possibility of t_replicate() sending a usrloc record to peer SIP routers that didn't process the request.

Jiri, do you see any pitfalls with this school of thought?

Regards,
Paul

On 5/29/05, Jiri Kuthan <jiri@iptel.org> wrote:
At 03:15 PM 5/29/2005, Java Rockx wrote:
>Actually, a minute delay would be a bad thing because replicated usrloc records, using t_replicate(), would not make it into peer SER server caches while those servers are starting up.
>
>Given this fact, and given the fact that most SER modules do not hash data upon server startup [like group.so, etc.], we are starting to see little value in caching usrloc. Our MySQL server is hit 12 times for an INVITE message, so complete caching of usrloc is of minimal performance gain.

indeed.


>Anyhow, we're now in the process of modifying SER so that:
>
>* when ser starts up usrloc is "lazy-loaded"
>* if a usrloc record is looked up in cache and is __NOT__ found, then MySQL will be queried. If found in MySQL then the usrloc record will be put in to cache for future lookups
>
>By doing these two things we should not have a problem with excessively large subscriber bases.

Appears reasonable to me.

Still -- with the way you are suggesting, CallID based load distribution, how
do you replicate UsrLoc changes across all the servers?

-jiri