[Users] memory issues

Klaus Darilion klaus.mailinglists at pernau.at
Mon Mar 19 08:42:48 CET 2007


FYI: There was a discussion on serdev recently - also with some 
performance comparisons:
http://lists.iptel.org/pipermail/serdev/2007-February/009433.html
and
http://lists.iptel.org/pipermail/serdev/2007-March/date.html (Subject: 
Using system memory allocator as the main one in SER)

regards
klaus


Daniel-Constantin Mierla wrote:
> 
> 
> On 03/18/07 23:09, Christian Schlatter wrote:
>> Daniel-Constantin Mierla wrote:
>>> Hello,
>>>
>>> that's the result of memory fragmentation; that should be the
>>> capacity of the server and should stay at that rate. The memory
>>> manager allocates chunks of different sizes, but does not merge them
>>> back by default. However, it tries to avoid allocating small sizes --
>>> you can see the used memory and real used sizes in the statistics.
>>> The difference is not used, but should ensure that shared variables
>>> will not fragment memory into small chunks.
>>
>> The memory statistics indeed show a high number of memory fragments:
>>
>> before 'out of memory':
>>
>> shmem:total_size = 536870912
>> shmem:used_size = 59607040
>> shmem:real_used_size = 60106488
>> shmem:max_used_size = 68261536
>> shmem:free_size = 476764424
>> shmem:fragments = 9897
>>
>> after 'out of memory' (about 8000 calls per process):
>>
>> shmem:total_size = 536870912
>> shmem:used_size = 4171160
>> shmem:real_used_size = 4670744
>> shmem:max_used_size = 68261536
>> shmem:free_size = 532200168
>> shmem:fragments = 57902
>>
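The jump from 9897 to 57902 fragments while used_size actually drops is exactly the failure mode described above: plenty of free bytes, but chopped into pieces too small to serve a new request. A toy C sketch of that strategy (first-fit with splitting, no merging on free; purely illustrative, this is not openser's actual qm_malloc code) makes it concrete:

#include <stdio.h>
#include <stdlib.h>

struct chunk { size_t size; int free; struct chunk *next; };

static struct chunk *pool;              /* linked list over one big pool */

/* first-fit allocation: split a free chunk, never merge on free */
static struct chunk *toy_alloc(size_t size)
{
    struct chunk *c;
    for (c = pool; c; c = c->next) {
        if (!c->free || c->size < size)
            continue;
        if (c->size > size) {           /* split off the remainder */
            struct chunk *rest = malloc(sizeof *rest);
            rest->size = c->size - size;
            rest->free = 1;
            rest->next = c->next;
            c->next = rest;
            c->size = size;
        }
        c->free = 0;
        return c;
    }
    return NULL;    /* "out of memory" despite plenty of free bytes */
}

static void toy_free(struct chunk *c)
{
    c->free = 1;    /* freed, but never merged with its neighbours */
}

int main(void)
{
    struct chunk *keep[512];
    int i, n = 0, frags = 0;

    pool = malloc(sizeof *pool);
    pool->size = 512 * 128;             /* 64 KB toy pool */
    pool->free = 1;
    pool->next = NULL;

    while (n < 512 && (keep[n] = toy_alloc(128)) != NULL)
        n++;                            /* fill the pool */
    for (i = 0; i < n; i += 2)
        toy_free(keep[i]);              /* free every other chunk */

    for (struct chunk *c = pool; c; c = c->next)
        if (c->free)
            frags++;
    printf("free fragments: %d\n", frags);
    printf("256-byte alloc: %s\n",
           toy_alloc(256) ? "ok"
                          : "fails -- half the pool is free, "
                            "but only in 128-byte pieces");
    return 0;
}

Half of the toy pool is free at the end, yet a request twice the old chunk size cannot be served -- the equivalent of openser logging 'out of memory' while the statistics still report a large free_size.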
>>>
>>> You can try to compile openser with -DQM_JOIN_FREE (add it to the
>>> DEFS variable in Makefile.defs) and test again. Free fragments
>>> should be merged and fragmentation should not occur -- processing
>>> will be slower, though. We will try to provide a better solution
>>> for that in the next release.
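What -DQM_JOIN_FREE changes, in terms of the toy sketch above, is the free path: a freed chunk is merged with adjacent free chunks so small fragments get folded back into larger ones. A sketch along those lines (again an illustration of the strategy, not openser's actual implementation; a complete version would also merge with the preceding chunk, e.g. via a prev pointer or boundary tags):

/* drop-in replacement for toy_free() in the sketch above */
static void toy_free_join(struct chunk *c)
{
    c->free = 1;
    while (c->next && c->next->free) {  /* absorb the following free chunk */
        struct chunk *dead = c->next;
        c->size += dead->size;
        c->next = dead->next;
        free(dead);                     /* its header came from malloc() */
    }
}

The merge walk on every free() is where the slowdown Daniel warns about comes from.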
>>
>> Compiling openser with -DQM_JOIN_FREE did not help. I'm not sure how
>> big a problem this fragmentation issue is.
> What is the number of fragments with QM_JOIN_FREE after flooding?
>> Do you think it would make sense to restart our production openser
>> instances from time to time just to make sure they're not running
>> into these memory fragmentation limits?
> The issue will occur only when the call rate reaches the limits of the
> proxy's memory; otherwise the chunks are reused. Transaction and avp
> sizes are rounded up to minimize the number of different sizes for
> memory chunks. It hasn't been reported very often, which is maybe why
> not much attention was paid to it. This memory system has been in
> place since the beginning of ser. The alternative is to use SysV
> shared memory together with the libc private memory manager, but that
> is much slower.
> 
> Cheers,
> Daniel
> 
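The rounding Daniel mentions is a standard anti-fragmentation trick: if every request is rounded up to a coarse step, freed chunks come in only a few distinct sizes, so the next transaction or avp of a similar size can reuse them. A minimal sketch (the step value here is illustrative, not openser's actual rounding constant):

#include <stdio.h>

#define STEP 16   /* illustrative granularity; must be a power of two */

/* round a request up to the next multiple of STEP, so that many
 * different request sizes collapse into a few distinct chunk sizes */
static size_t round_up(size_t size)
{
    return (size + STEP - 1) & ~(size_t)(STEP - 1);
}

int main(void)
{
    printf("%zu %zu %zu\n", round_up(1), round_up(17), round_up(32));
    /* prints: 16 32 32 */
    return 0;
}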
>>
>> thanks,
>> Christian
>>
>>>
>>> Cheers,
>>> Daniel
>>>
>>> On 03/18/07 01:21, Christian Schlatter wrote:
>>>> Christian Schlatter wrote:
>>>> ...
>>>>>
>>>>> I always had 768MB shared memory configured though, so I still
>>>>> can't explain the memory allocation errors I got. Some more test
>>>>> runs revealed that I only get these errors when using a more
>>>>> production-oriented config that loads more modules than the one
>>>>> posted in my earlier email. I'm now trying to figure out what
>>>>> exactly causes these memory allocation errors, which happen
>>>>> reproducibly after about 220s at 400 cps.
>>>>
>>>> I think I found the cause of the memory allocation errors. As soon
>>>> as I include an AVP write operation in the routing script, I get
>>>> 'out of memory' messages after a certain number of calls generated
>>>> with sipp.
>>>>
>>>> The routing script to reproduce this behavior looks like (full 
>>>> config available at http://www.unc.edu/~cschlatt/openser/openser.cfg):
>>>>
>>>> route{
>>>>         $avp(s:ct) = $ct; # commenting out this line
>>>>               # solves the memory problem
>>>>
>>>>         if (!(method=="REGISTER")) record_route();
>>>>         if (loose_route()) route(1);
>>>>
>>>>         if (uri==myself) rewritehost("xx.xx.xx.xx");
>>>>         route(1);
>>>> }
>>>>
>>>> route[1] {
>>>>         if (!t_relay()) sl_reply_error();
>>>>         exit;
>>>> }
>>>>
>>>> An example log file showing the 'out of memory' messages is 
>>>> available at http://www.unc.edu/~cschlatt/openser/openser.log .
>>>>
>>>> Some observations:
>>>>
>>>> - The 'out of memory' messages always appear after about 8000 test
>>>> calls per worker process. One call consists of two SIP transactions
>>>> and six end-to-end SIP messages. An openser with 8 children handles
>>>> about 64'000 calls, whereas one with 4 children only handles about
>>>> 32'000 calls. The sipp call rate doesn't matter, only the number of
>>>> calls.
>>>>
>>>> - The 8000 calls per worker process are independent of the amount
>>>> of shared memory available. Running openser with -m 128 or -m 768
>>>> does not make a difference (see the back-of-envelope sketch after
>>>> this list).
>>>>
>>>> - The more AVP writes are done in the script, the fewer calls go
>>>> through. It looks like each AVP write leaks memory (unnoticed by
>>>> the shared memory statistics), which would be consistent with a
>>>> leak in per-process private memory rather than in the shared pool.
>>>>
>>>> - The fifo memory statistics do not reflect the 'out of memory' 
>>>> syslog messages. Even if openser does not route a single SIP message 
>>>> because of memory issues, the statistics still show a lot of 'free' 
>>>> memory.
>>>>
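A back-of-envelope reading of the numbers in the list above, assuming the leak sits in each worker's fixed-size private (pkg) pool rather than in the shared -m pool -- the 4 MB pool size below is an assumption about default builds of that era, not a figure from this thread:

#include <stdio.h>

int main(void)
{
    /* assumed fixed private (pkg) pool per worker -- not from the thread */
    const double pkg_pool_bytes = 4.0 * 1024 * 1024;
    const double calls_to_oom   = 8000;  /* observed per worker process */

    printf("implied leak: ~%.0f bytes per call\n",
           pkg_pool_bytes / calls_to_oom);   /* prints ~524 bytes */
    return 0;
}

That would put the leak on the order of half a kilobyte per call and would explain why the per-worker ceiling does not move with -m.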
>>>>
>>>> All tests were done with the openser SVN 1.2 branch on Ubuntu
>>>> dapper x86. I think the same is true for the 1.1 version, but I
>>>> haven't tested that yet.
>>>>
>>>>
>>>> Christian
>>>>
>>
>>
> 
> _______________________________________________
> Users mailing list
> Users at openser.org
> http://openser.org/cgi-bin/mailman/listinfo/users


-- 
Klaus Darilion
nic.at




