On Mon, Jul 19, 2010 at 2:41 AM, marius zbihlei marius.zbihlei@1and1.ro wrote:
JR Richardson wrote:
On Fri, Jul 16, 2010 at 4:51 PM, JR Richardson jmr.richardson@gmail.com wrote:
Hi All,
I loaded up the PDT database with about 35K records and when I issue the command "kamctl fifo pdt_list" I get:
3(3018) ERROR: <core> [tree.c:139]: no more pkg mem
3(3018) ERROR: mi_fifo [fifo_fnc.c:509]: command (pdt_list) processing failed
Searching around I found http://www.kamailio.org/dokuwiki/doku.php/troubleshooting:memory, which suggests adjusting the pkg mem size in config.h.
In config.h, I found:
/*used only if PKG_MALLOC is defined*/
#define PKG_MEM_POOL_SIZE 4*1024*1024
So is this what I am supposed to adjust? Maybe try:
#define PKG_MEM_POOL_SIZE 4*2048*2048
or
#define PKG_MEM_POOL_SIZE 8*1024*1024
I tried #define PKG_MEM_POOL_SIZE 8*1024*1024 and recompiled, with good results: I was able to run pdt_list just fine. So what do I look for in the memory statistics to show how much memory to configure when using large database record sets? And if I need to go to 16* or 32*, would that have any adverse effect on other kamailio operations?
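For reference, this is roughly what I changed and how I rebuilt (just a sketch of my setup, assuming a standard source install; restart however fits your system):

  /* config.h */
  #define PKG_MEM_POOL_SIZE 8*1024*1024

  make all && make install
  kamctl restart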
Hello JR,
Compile kamailio with memory debug support: modify Makefile.vars and set MEMDBG to 1. In the config file (kamailio.cfg), set memlog to a lower level than the general debug level. You can get a dump of pkg (private) memory by sending SIGUSR1 to one of the worker processes ('kamctl ps' to find the pid). Unless you are working with a huge number of calls, I can't see why you would need more than 8 MB, or 10 MB at most. Careful: if you have a 64-bit system you might need a little more.
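Roughly these steps (a sketch; the debug/memlog values are only examples, adjust to your setup):

  # Makefile.vars: enable memory debugging
  MEMDBG=1

  # kamailio.cfg: memlog lower than debug so the dump actually gets logged
  debug=3
  memlog=2

  # find a worker pid and trigger the pkg memory dump
  kamctl ps
  kill -SIGUSR1 <pid of a worker>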
Which one would be a logical adjustment? Also, is there a correlation between pkg mem and the database record count as it relates to pdt_list?
The idea is to have a few hundred thousand records loaded in kamailio and be able to run "kamctl fifo pdt_list | grep 512345" to show the route for that prefix. But without enough memory, it doesn't work.
AFAIK, these MI commands construct a complete result tree, which means there has to be enough memory to hold all 100K records. If it works with 8 MB, it is OK. Keep in mind that the memory dump is not that useful for tracking these OOM conditions; it is useful for tracking memory leaks, because after the OOM condition happens the already malloc'ed memory is freed. It would be useful to test that in those OOM conditions the free is done correctly and no leaks are induced.
Cheers,
Marius
So I started seeing this issue with only 35K records in the pdt database. I increased to #define PKG_MEM_POOL_SIZE 8*1024*1024 and was then able to load 35K records with good results. I increased the record count to 60K and hit the limit again: pdt_list would not work, with the same "no more pkg mem" error.
I increased again to #define PKG_MEM_POOL_SIZE 16*1024*1024, which allowed me to execute pdt_list with 60K to 120K records loaded.
When I added 180K records to the database, I got the "no more pkg mem" error again. I increased again to #define PKG_MEM_POOL_SIZE 32*1024*1024, which allowed me to execute pdt_list with 180K records loaded.
I increased the database record count to 240K and got the "no more pkg mem" error again.
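A rough back-of-envelope from these numbers, assuming pkg memory use for the pdt_list result tree grows linearly with the record count:

  16 MB works at 120K records,  8 MB fails at 60K   ->  ~140 bytes per record
  32 MB works at 180K records but fails at 240K     ->  ~140-180 bytes per record

  240000 records * ~150 bytes ~= 34 MB, so listing 240K records would need
  something like #define PKG_MEM_POOL_SIZE 64*1024*1024 just for this one command.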
So I don't think it is prudent to just keep increasing PKG_MEM_POOL_SIZE. Is this an architectural limitation of the fifo and pkg mem? Shouldn't this be a dynamic allocation, since it is not within shmem and has no effect on core sip-router function? While pdt_reload and pdt_list are running (it takes a few seconds to load and list), I don't see any problems with sip-router execution. I guess I can just use an old-fashioned database query to look up routes instead of fifo pdt_list.
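For example, something like this should do it against the pdt table directly (a sketch, assuming a MySQL backend and the stock pdt table layout with sdomain/prefix/domain columns; adjust names for your schema):

  mysql -u kamailio -p kamailio \
    -e "SELECT sdomain, prefix, domain FROM pdt WHERE prefix='512345';"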
Thanks.
JR