[SR-Users] MTREE reload question

Daniel-Constantin Mierla miconda at gmail.com
Tue Sep 11 08:27:09 CEST 2012


Hello,

On 9/10/12 2:27 PM, Uri Shacked wrote:
> Thanks a lot.
> So, as I understand it, real_used_size is the right value to calculate
> the size, but it grows because of fragmentation data.
> In the core documentation, what does "It is effective if MEM_JOIN_FREE
> compile option is defined" mean?
> Is there something else I need to do besides adding mem_join=1 to the
> cfg?
No, iirc MEM_JOIN_FREE is enabled by default -- it is a compile-time
option for turning the feature off. Let us know if the number of
fragments gets lower.
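
For reference, a minimal sketch of how this would look in kamailio.cfg
(assuming the 3.3.x core parameter name from the core cookbook):

    # global parameters section of kamailio.cfg
    # join adjacent free shm fragments back together when memory is freed
    mem_join=1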

Cheers,
Daniel

>
> On Mon, Sep 10, 2012 at 2:39 PM, Daniel-Constantin Mierla 
> <miconda at gmail.com> wrote:
>
>     Hello,
>
>
>     On 9/10/12 1:14 PM, Uri Shacked wrote:
>>
>>     Thanks,
>>
>>     The stuff I sent is from my test server, which has 3.3.1 installed.
>>
>>     On my prod servers I use 3.2.x.
>>
>>     how would you explain that when kamailio starts (on prod, with
>>     similar data and the same cfg) it takes around 20% of the shmem,
>>     after a reload 30%, and over time (6-8 weeks) it gets up to 39%?
>>
>
>     if used_size does not have variations, then there is no leak. The
>     variations of real_used_size come from fragmentation management --
>     each fragment has a structure associated with it.
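>
>     As a quick sanity check with the numbers you posted (assuming the
>     per-fragment bookkeeping structure has a roughly constant size):
>
>         after start:  real_used_size - used_size = 494095984 - 342642072 = 151453912 (28 fragments)
>         after reload: real_used_size - used_size = 640692800 - 342654552 = 298038248 (9161531 fragments)
>         growth of the gap:   298038248 - 151453912 = 146584336 bytes
>         growth in fragments: 9161531 - 28 = 9161503
>         overhead per extra fragment: 146584336 / 9161503 ~= 16 bytes
>
>     so the growth of real_used_size after the reload is consistent with
>     per-fragment overhead rather than with leaked mtree records.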
>
>     Cheers,
>     Daniel
>
>
>>     I start with 4G of shmem.
>>
>>     It feels like a leak, but I have trouble investigating it...
>>     Where would you look? The reload, the dialog module, or the AVPs?
>>
>>     BR,
>>
>>     Uri
>>
>>
>>
>>     On Mon, Sep 10, 2012 at 11:06 AM, Daniel-Constantin Mierla
>>     <miconda at gmail.com> wrote:
>>
>>         Hello,
>>
>>         the used size is pretty much the same after the reload, so it
>>         doesn't look like a leak. If you do 2-3 reloads, does
>>         shmem:used_size stay around the same value?
>>
>>         There is a clear boost in the number of fragments; I would say
>>         that right after start the count is quite low because the
>>         memory then holds just the freshly loaded records.
>>
>>         Btw, version 3.3.x has options for defragmentation -- see the
>>         core cookbook; you can enable them.
>>
>>         Cheers,
>>         Daniel
>>
>>
>>
>>         On 9/9/12 10:18 AM, Uri Shacked wrote:
>>>         Hi,
>>>         here are the statistics after Kamailio starts:
>>>         shmem:fragments = 28
>>>         shmem:free_size = 3800871312
>>>         shmem:max_used_size = 494132368
>>>         shmem:real_used_size = 494095984
>>>         shmem:total_size = 4294967296
>>>         shmem:used_size = 342642072
>>>         here they are after a reload of the num table:
>>>         shmem:fragments = 9161531
>>>         shmem:free_size = 3654274496
>>>         shmem:max_used_size = 959885552
>>>         shmem:real_used_size = 640692800
>>>         shmem:total_size = 4294967296
>>>         shmem:used_size = 342654552
>>>         here is the cfg part for mtree:
>>>         #------- mtree params -------------
>>>         modparam("mtree", "db_url", CFGDB)
>>>         modparam("mtree", "mtree",
>>>         "name=odr;dbtable=service_odr_view;type=0;")
>>>         modparam("mtree", "mtree",
>>>         "name=oper;dbtable=service_oper_type;type=0;")
>>>         modparam("mtree", "mtree",
>>>         "name=permis;dbtable=service_permisions_to_oper;type=0;")
>>>         modparam("mtree", "mtree",
>>>         "name=num;dbtable=service_numbers_to_areas_view;type=0;")
>>>         modparam("mtree", "char_list", "0123456789")
>>>         modparam("mtree", "pv_value", "$avp(mtval)")
>>>         modparam("mtree", "pv_values", "$avp(mtvals)")
>>>         here is the number of rows in the DB:
>>>         SELECT count(*) FROM `service`.`service_numbers_to_areas_view`;
>>>         -- returns 4195528
>>>         and attached is the memory log.
>>>         thanks,
>>>         Uri
>>>
>>>         On Fri, Sep 7, 2012 at 10:26 AM, Daniel-Constantin Mierla
>>>         <miconda at gmail.com> wrote:
>>>
>>>             Hello,
>>>
>>>             is this taken only after startup? Get one at startup and
>>>             another one after reload, so they can be compared.
>>>
>>>             Cheers,
>>>             Daniel
>>>
>>>
>>>             On 9/6/12 9:22 AM, Uri Shacked wrote:
>>>>             Hi,
>>>>             here:
>>>>             shmem:fragments = 143898
>>>>             shmem:free_size = 3446570952
>>>>             shmem:max_used_size = 861854768
>>>>             shmem:real_used_size = 848396344
>>>>             shmem:total_size = 4294967296
>>>>             shmem:used_size = 319676976
>>>>
>>>>
>>>>             On Thu, Sep 6, 2012 at 9:49 AM, Daniel-Constantin
>>>>             Mierla <miconda at gmail.com> wrote:
>>>>
>>>>                 Hello,
>>>>
>>>>
>>>>                 On 9/5/12 3:06 PM, Uri Shacked wrote:
>>>>
>>>>                     Hi,
>>>>                     I use MTREE to load 5 million rows from the
>>>>                     database. It takes about 30 sec to start
>>>>                     Kamailio and it is running great.
>>>>                     When I check the shmem usage, I see the data
>>>>                     takes about 0.8G of the 4G I set for shmem.
>>>>                     When I reload the data while Kamailio is
>>>>                     running, the memory usage rises to 1.3G and
>>>>                     stays there (the second and subsequent reloads
>>>>                     stay at 1.3G as well).
>>>>                     Why doesn't it return to 0.8G after the reload
>>>>                     is completed?
>>>>
>>>>
>>>>                 can you send the statistics related to shmem?
>>>>
>>>>                 kamctl fifo get_statistics shmem:
>>>>
>>>>                 There are different values there; some of the usage
>>>>                 comes from the overhead of memory chunk management.
>>>>
>>>>                 Cheers,
>>>>                 Daniel
>>>>
>>>
>>>
>>
>>
>
>

-- 
Daniel-Constantin Mierla - http://www.asipto.com
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Kamailio Advanced Training, Berlin, Nov 5-8, 2012 - http://asipto.com/u/kat
Kamailio Advanced Training, Miami, USA, Nov 12-14, 2012 - http://asipto.com/u/katu
