[SR-Users] Slowly increasing PKG memory usage

Henning Westerholt hw at skalatan.de
Thu Aug 1 22:19:18 CEST 2019


Hi Andrew,

answers inline below

Am 01.08.19 um 15:34 schrieb Andrew White:
Thanks Daniel, you’re fantastic!

I have 4 children/workers configured with -m 128 -M 32. The machine in question has 512MB of memory, 1 core and 1GB swap on an SSD.

I restarted Kamailio with memlog=1 and I’ve been sending batches of 30 calls in. I’ve noticed 4 of the 13 Kamailio processes going up in memory after each batch, which I suspect to be the primary children/workers. Immediately post restart:

root     28531  0.7  5.5 329368 27196 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28532  0.6  4.9 329368 24528 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28533  0.6  5.5 329368 27244 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28534  0.7  5.4 329368 26788 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32

After about 90 calls:

root     28531  0.0  6.7 330688 32948 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28532  0.0  6.5 330560 32264 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28533  0.0  6.5 330556 32272 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28534  0.0  6.6 330564 32592 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32

None of the other 9 Kamailio processes are increasing at all.

I ran corex.pkg_summary against one of them and got the following dump:

https://paste.ubuntu.com/p/SqTF3K5knK/

I can see a lot of allocations from pvapi.c; does this indicate I’m setting PVs that need to be unset?

Here’s another after another 60 calls:

https://paste.ubuntu.com/p/9WQXqZtfT2/


There is not that much difference between the two runs (diff):

-journal: Suppressed 19796 messages from /system.slice/kamailio.service
+journal: Suppressed 1772 messages from /system.slice/kamailio.service
 ALERT: <core> [core/pt.c:548]: mem_dump_pkg_cb(): Memory status (pkg) of process 28531:
 ALERT: qm_status: (0x7f0fc3a2a010):
 ALERT: qm_status: heap size= 33554432
-ALERT: qm_status: used= 538712, used+overhead=883168, free=32671264
-ALERT: qm_status: max used (+overhead)= 893040
+ALERT: qm_status: used= 590552, used+overhead=939064, free=32615368
+ALERT: qm_status: max used (+overhead)= 948112
 ALERT: qm_status: dumping all alloc'ed. fragments:
 ALERT: qm_status:      0. N  address=0x7f0fc3a636d8 frag=0x7f0fc3a636a0 size=1024 used=1
 ALERT: qm_status:           alloc'd from core: core/str_hash.h: str_hash_alloc(59)

The second run shows about 50k more memory usage, but the dumped allocations are identical.

Two suggestions:

- your logging system actually suppresses a lot of debugging output from Kamailio (have a look at the first lines of each dump). Check whether you need to increase the respective configuration variable in your logging configuration so that you get all log messages.

- If you are able to execute this test on a test platform, configure Kamailio with just one worker child, so it is easier to spot an error. Then execute 100 calls and note the memory increase, then another 100 calls and note it again; from that you can estimate how many bytes you actually leak per call (a rough shell sketch follows below). Try another memory dump and see if you spot something. Share your results again on the list.
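
Untested sketch, assuming kamcmd is in the path and that pkg.stats prints a "pid:" and a "real_used:" line for every process:

snapshot() {
  # capture "<pid> <real_used_bytes>" for every Kamailio process
  kamcmd pkg.stats | awk '/pid:/ {pid=$2} /real_used:/ {print pid, $2}' | sort
}

snapshot > /tmp/pkg_before.txt
# ... run 100 test calls here ...
snapshot > /tmp/pkg_after.txt

# per-process increase in bytes; divide by 100 for a rough estimate per call
join /tmp/pkg_before.txt /tmp/pkg_after.txt | awk '{printf "pid %s: +%d bytes\n", $1, $3 - $2}'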

root     28531  0.0  6.9 330820 33928 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28532  0.0  6.7 330692 33352 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28533  0.0  6.7 330688 33280 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32
root     28534  0.0  6.7 330696 33192 ?        Sl   22:48   0:00 /usr/local/sbin/kamailio -DD -P /var/run/kamailio/kamailio.pid -f /etc/kamailio/kamailio.cfg -m 128 -M 32

The only changes I’ve made to this config over the last couple of weeks (since I started seeing this issue) are removing the dispatcher module and adding a small function in app_ruby (which I already use) to query redis (which I also already query from app_ruby, making a heap of queries per call) for some values and write $du manually. I also added the topoh module.

It also makes a lot of sense to me to monitor the individual processes rather than the aggregate. Is there a simple way to identify programmatically from bash which processes are workers? I’d like to monitor just those individually in my monitoring.

Have a look at the output of corex.ps or core.psx - the worker children are named "receiver".
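
For example (untested, and assuming the core.psx output prints PID: and DSC: fields per process, with the worker description containing "receiver"):

# list only the PIDs of the SIP worker ("receiver") processes
kamcmd core.psx | awk '/PID:/ {pid=$2} /DSC:/ && /receiver/ {print pid}'

You can then feed those PIDs into ps or into your per-process pkg monitoring.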

Cheers,

Henning



Thanks!

Andrew


On 1 Aug 2019, at 8:24 pm, Daniel-Constantin Mierla <miconda at gmail.com> wrote:


Hello,

If it is pkg, then you have to see which process is increasing its memory usage, because pkg is private memory, specific to each process. The sum is an indicator, but the debugging has to be done for a specific process/pid.

Once you identify a process that is leaking pkg, execute the RPC command:

  - https://www.kamailio.org/docs/modules/devel/modules/corex.html#corex.rpc.pkg_summary

When that process does some runtime work (e.g., handling a SIP message), syslog will receive a summary of the used pkg chunks. Send those log messages here for analysis. Note that you have to set the memlog core parameter to a value smaller than debug.
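
For example, something like this (the exact parameter name for selecting the process should be checked in the corex module docs; 28531 is just an example worker pid):

# kamailio.cfg core parameters, e.g.:
#   debug=2
#   memlog=1
# request a pkg summary for one worker process:
kamcmd corex.pkg_summary pid 28531
# the summary appears in syslog the next time that process handles a SIP message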

Cheers,
Daniel


On 01.08.19 03:43, Andrew White wrote:
Hi all,

I had a Kamailio crash the other day, and some debugging showed I ran out of PKG memory.

Since then I’ve run a simple bash script to compile the amount of memory used by all child processes, effectively /usr/local/sbin/kamcmd pkg.stats | grep real_used summed together. I’ve graphed the data, and there’s a clear growth of PKG memory going on, mostly increasing during our busier daytime hours.

https://i.imgur.com/UTzx2k1.png

Based on this, I suspect either a module loaded or something within my app_ruby conf is leaking memory.

I’ve been reading through https://www.kamailio.org/wiki/tutorials/troubleshooting/memory, but I’m a bit nervous, as I’m not really a C/deep memory type of guy. I can see a GDB script I can attach to Kamailio, but is that going to use significant resources to run or impact the running process? Is there a newer/better/alternative way to do this, and to help me break this down?

Thanks!

Andrew



_______________________________________________
Kamailio (SER) - Users Mailing List
sr-users at lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users


--
Daniel-Constantin Mierla -- www.asipto.com
www.twitter.com/miconda -- www.linkedin.com/in/miconda




_______________________________________________
Kamailio (SER) - Users Mailing List
sr-users at lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users


--
Henning Westerholt - https://skalatan.de/blog/
Kamailio services - https://skalatan.de/services