[Serusers] SER Reports "out of memory"

Jiri Kuthan jiri at iptel.org
Mon May 30 10:54:14 CEST 2005


At 09:24 AM 5/30/2005, Greger V. Teigre wrote:

[...]
>> * when ser starts up, usrloc is "lazy-loaded"
>> * if a usrloc record is looked up in the cache and is __NOT__ found, then
>> MySQL will be queried. If it is found in MySQL, the usrloc record will
>> be put into the cache for future lookups
>> 
>> By doing these two things we should not have a problem with excessively
>> large subscriber bases.
>> 
>> Thoughts?
> 
>Makes sense.  This is how Berkeley DB and many other DBs work.  In fact, the best approach would be to build a caching abstraction layer around all the query functions that fetch data from the DB. That way you would get the optimum performance/scalability.
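
As a rough illustration of the lazy-load/read-through scheme quoted above, the
lookup path could look something like the sketch below. This is only a sketch in
plain C with made-up names: db_load() merely stands in for the MySQL query, and
nothing here matches the actual usrloc API.

/* Read-through usrloc cache sketch: check memory first, fall back to the DB
 * on a miss and remember the result for later lookups. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HASH_SIZE 512

struct ucontact {
    char aor[64];              /* address-of-record, e.g. "alice" */
    char contact[128];         /* registered contact URI */
    struct ucontact *next;     /* hash-bucket chaining */
};

static struct ucontact *cache[HASH_SIZE];

static unsigned hash(const char *s)
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % HASH_SIZE;
}

/* Hypothetical stand-in for the real "SELECT ... FROM location" query. */
static int db_load(const char *aor, char *contact, size_t len)
{
    snprintf(contact, len, "sip:%s@192.0.2.1", aor);
    return 1;   /* 1 = found in the DB, 0 = not found */
}

static const char *lookup(const char *aor)
{
    unsigned h = hash(aor);
    struct ucontact *c;

    for (c = cache[h]; c; c = c->next)
        if (strcmp(c->aor, aor) == 0)
            return c->contact;          /* cache hit */

    c = malloc(sizeof(*c));
    if (!c || !db_load(aor, c->contact, sizeof(c->contact))) {
        free(c);
        return NULL;                    /* missing in the DB as well */
    }
    snprintf(c->aor, sizeof(c->aor), "%s", aor);
    c->next = cache[h];                 /* populate the cache for next time */
    cache[h] = c;
    return c->contact;
}

int main(void)
{
    const char *c;

    c = lookup("alice");                /* first call falls through to the DB */
    printf("1st lookup: %s\n", c ? c : "not found");
    c = lookup("alice");                /* second call is served from memory */
    printf("2nd lookup: %s\n", c ? c : "not found");
    return 0;
}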

I have to admit I am not sufficiently familiar with BDB. If I understand it right,
they do configurable in-memory caching and they also support some kind of master-slave
replication. I am not sure though how this scales... (20 SERs with 20 BDBs, one of them
the master, replicating UsrLoc changes to 19 slaves that are all able to identify
inconsistent cache entries?)

I mean, the structural problem here is dealing with read/write-intensive UsrLoc
operations while still wanting to replicate for reliability. There is a variety of
algorithms for dealing with this, and I don't know exactly what the respective DB
systems actually do.

> 
>   However, there is one more thing: you need to decide on an algorithm for selecting a usrloc record to replace when the cache is full.  Do you store extra info in memory for each usrloc record to make the right decision (e.g., based on the number of lookups)?

You may also purchase more memory :)
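
To illustrate the replacement question: Greger's suggestion of counting lookups
would give a frequency-based (LFU-style) policy; the sketch below uses the simpler
least-recently-used rule instead and evicts the entry whose last lookup is oldest
once the table is full. The table size and record layout are made up for the
example and are nothing like the real usrloc structures.

/* LRU eviction sketch for a bounded cache. */
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 3   /* deliberately tiny so eviction is easy to observe */

struct entry {
    char aor[64];
    char contact[128];
    unsigned long last_used;   /* logical clock value of the last lookup */
    int used;
};

static struct entry table[MAX_ENTRIES];
static unsigned long clock_tick;

/* Insert a record, evicting the least recently used entry if the table is full. */
static void cache_put(const char *aor, const char *contact)
{
    int i, victim = 0;

    for (i = 0; i < MAX_ENTRIES; i++) {
        if (!table[i].used) { victim = i; break; }          /* free slot wins */
        if (table[i].last_used < table[victim].last_used)
            victim = i;                                      /* older entry */
    }
    if (table[victim].used)
        printf("evicting %s\n", table[victim].aor);

    snprintf(table[victim].aor, sizeof(table[victim].aor), "%s", aor);
    snprintf(table[victim].contact, sizeof(table[victim].contact), "%s", contact);
    table[victim].used = 1;
    table[victim].last_used = ++clock_tick;
}

/* Look a record up and refresh its recency so it is not the next victim. */
static const char *cache_get(const char *aor)
{
    int i;

    for (i = 0; i < MAX_ENTRIES; i++)
        if (table[i].used && strcmp(table[i].aor, aor) == 0) {
            table[i].last_used = ++clock_tick;
            return table[i].contact;
        }
    return NULL;
}

int main(void)
{
    cache_put("alice", "sip:alice@192.0.2.1");
    cache_put("bob",   "sip:bob@192.0.2.2");
    cache_put("carol", "sip:carol@192.0.2.3");
    cache_get("alice");                          /* refreshes alice's recency */
    cache_put("dave",  "sip:dave@192.0.2.4");    /* table full: evicts bob */
    printf("bob still cached: %s\n", cache_get("bob") ? "yes" : "no");
    return 0;
}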

-jiri  



