mkl262 created an issue (kamailio/kamailio#4416)
Hello,
I'm asking for the JWT module to be added to the default list of modules that are built and uploaded to [deb.kamailio.org](https://deb.kamailio.org/).
Will editing the CMake scripts be enough (in which case I can open a PR for it), or does it also require modifying the Jenkins pipelines?
Thanks,
Michael.
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/4416
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/4416(a)github.com>
### Description
The Kamailio 5.4.x dialog profiles functionality can lead to a deadlock in certain high-load scenarios.
The Kamailio dialog profiles are used to track parallel channels for about 200 outgoing PSTN carrier interconnections. During high-traffic periods (e.g. several thousand parallel calls) the Kamailio server frequently (e.g. hourly) goes into an endless loop while executing get_profile_size() in the configuration script. The lock for the dialog profiles is then never released and Kamailio stops serving traffic. Internal monitoring tools and RPC commands keep working, as long as they do not touch the dialog functionality.
A similar (dedicated) Kamailio setup is used for tracking parallel channels for customers. There the deadlock is not observed as frequently, but apparently some crashes also happen, at much longer intervals.
### Troubleshooting
After analysis of the back-traces with GDB, the get_profile_size() call was removed from the configuration script. After this change the crash did not happen anymore for several days.
#### Reproduction
Issue could not be reproduced so far.
#### Debugging Data
##### bt 1 (some data removed)
```
(gdb) bt
#0 0x00007f57cf3b00da in get_profile_size (profile=0x7f50ccbc7e80, value=0x7ffd9928f300) at dlg_profile.c:859
        n = 364
        i = 12
        ph = 0x7f50d3e4b7d0
#1 0x00007f57cf419c67 in w_get_profile_size_helper (msg=0x7f57d699d418, profile=0x7f50ccbc7e80, value=0x7ffd9928f300, spd=0x7f57d6916960) at dialog.c:941
#2 0x00007f57cf41a459 in w_get_profile_size3 (msg=0x7f57d699d418, profile=0x7f50ccbc7e80, value=0x7f57d6935118, result=0x7f57d6916960) at dialog.c:982
#3 0x0000000000463fea in do_action (h=0x7ffd99293610, a=0x7f57d6936488, msg=0x7f57d699d418) at core/action.c:1094
#4 0x00000000004711ee in run_actions (h=0x7ffd99293610, a=0x7f57d6936488, msg=0x7f57d699d418) at core/action.c:1581
#5 0x000000000046058b in do_action (h=0x7ffd99293610, a=0x7f57d690fda8, msg=0x7f57d699d418) at core/action.c:700
```
The first back-trace was taken from a running process with gdb. The counter in frame 0 did not increase much during this time, probably due to overflow of the loop counter.
##### bt2 (analysis of the data structures with gdb scripts)
Here the loop counter in frame 0 showed a really high value. Expected sizes of the dialog profile hash table buckets:
```
(gdb) p profile->entries[3]
$4 = {first = 0x7f9bfd4aad98, content = 2068}
(gdb) p profile->entries[7]
$3 = {first = 0x7f9c12079f70, content = 784}
(gdb) p profile->entries[12]
$6 = {first = 0x7f9c02be5d50, content = 7600}
(gdb) p profile->entries[14]
$2 = {first = 0x7f9bff636de8, content = 6764}
```
Hash table bucket 14 shows a lot of corruption and the loop never ends (carrier names and IPs replaced). The list for hash bucket 7 got linked into the list for hash bucket 14:
```
counter 6755: prev 0x7f9c0b9dcde0 - current 0x7f9c02e5b378 - next 0x7f9c0a5f9ba0 - value carrier1-XX.XX - hash 14
counter 6756: prev 0x7f9c02e5b378 - current 0x7f9c0a5f9ba0 - next 0x7f9c0860b968 - value carrier1-XX.XX▒▒▒▒ - hash 14
counter 6757: prev 0x7f9c0a5f9ba0 - current 0x7f9c0860b968 - next 0x7f9bfe3f3a78 - value carrier1-XX.XX▒▒▒▒ - hash 14
counter 6758: prev 0x7f9c0860b968 - current 0x7f9bfe3f3a78 - next 0x7f9c10d977f0 - value carrier1-XX.XX - hash 14
counter 6759: prev 0x7f9bfe3f3a78 - current 0x7f9c10d977f0 - next 0x7f9c0ae198b0 - value carrier2-XX.XX▒▒▒▒ - hash 7
counter 6760: prev 0x7f9c10d977f0 - current 0x7f9c0ae198b0 - next 0x7f9c12079f70 - value carrier3-XX.XX - hash 7
counter 6761: prev 0x7f9c0ae198b0 - current 0x7f9c12079f70 - next 0x7f9c011f2540 - value-carrier2-XX.XX▒▒▒▒ - hash 7
counter 6762: prev 0x7f9c12079f70 - current 0x7f9c011f2540 - next 0x7f9bfff886f0 - value carrier2-XX.XX▒▒▒▒ - hash 7
counter 6763: prev 0x7f9c011f2540 - current 0x7f9bfff886f0 - next 0x7f9c05db00a8 - value carrier3-XX.XX= - hash 7
[...]
counter 28270: prev 0x7f9c019d06e8 - current 0x7f9bfaf18290 - next 0x7f9c12c90680 - value carrier2-XX.XX▒▒▒▒ - hash 7
counter 28271: prev 0x7f9bfaf18290 - current 0x7f9c12c90680 - next 0x7f9c086a2b58 - value-carrier2-XX.XX▒▒▒▒ - hash 7
counter 28272: prev 0x7f9c12c90680 - current 0x7f9c086a2b58 - next 0x7f9c0b4f09e8 - value carrier2-XX.XX▒▒▒▒ - hash 7
[...]
```
Hash table bucket 7 is still consistent regarding the loop, but already shows initial signs of corruption. One item from the bucket 14 list is visible:
```
counter 780: prev 0x7f9c0db57ac8 - current 0x7f9c02225700 - next 0x7f9bfbf7db08 - value carrier2-XX.XX▒▒▒▒ - hash 7
counter 781: prev 0x7f9c02225700 - current 0x7f9bfbf7db08 - next 0x7f9c10d977f0 - value carrier1-XX.XX - hash 14
counter 782: prev 0x7f9bfe3f3a78 - current 0x7f9c10d977f0 - next 0x7f9c0ae198b0 - value carrier2-XX.XX▒▒▒▒ - hash 7
counter 783: prev 0x7f9c10d977f0 - current 0x7f9c0ae198b0 - next 0x7f9c12079f70 - value carrier3-XX.XX - hash 7
```
The total size of this hash table bucket is 784.
#### Log Messages
No special log messages observed.
#### SIP Traffic
SIP traffic looked ok during analysis of the core dumps.
### Possible Solutions
* adding additional safeguards to the get_profile_size() function so it does not access data from other hash buckets
* stopping the loop after some iteration threshold
* finding and fixing the source of the internal data corruption (obviously)
* refactoring the dialog module to use another approach for storing the dialog profile information
### Additional Information
* **Kamailio version**:
Kamailio 5.4.7, compiled from git repository
* **Operating System**:
CentOS 7.9
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/2923
### Description
We are using Kamailio 5.7.4 on Debian 12 (from http://deb.kamailio.org/kamailio57) with rtpengine as an edge proxy for our clients. The instance terminates SIP/TLS (with client certificates) and forwards the SIP traffic to internal systems.
After some days we started getting errors like this:
`tls_complete_init(): tls: ssl bug #1491 workaround: not enough memory for safe operation: shm=7318616 threshold1=8912896`
First we thought Kamailio just didn't have enough memory, so we doubled it.
But after some days the log message (and user issues) occurred again.
So we monitored the shmmem statistics and found that used and max_used are constantly growing until they reach the limit.
As I mentioned, we are using client certificates, so we also use the CRL feature.
We have a systemd timer which fetches the CRL every hour and runs `kamcmd tls.reload` when finished.
Our tls.cfg looks like this:
```
[server:default]
method = TLSv1.2+
private_key = /etc/letsencrypt/live/hostname.de/privkey.pem
certificate = /etc/letsencrypt/live/hostname.de/fullchain.pem
ca_list = /etc/kamailio/ca_list.pem
ca_path = /etc/kamailio/ca_list.pem
crl = /etc/kamailio/combined.crl.pem
verify_certificate = yes
require_certificate = yes
[client:default]
verify_certificate = yes
require_certificate = yes
```
After testing a bit we found that every time tls.reload is executed Kamailio consumes a bit more memory, which eventually leads to all the memory being consumed and to issues for our users.
See following example:
```
[0][root@edgar-dev:~]# while true ; do /usr/sbin/kamcmd tls.reload ; /usr/sbin/kamcmd core.shmmem ; sleep 1 ; done
Ok. TLS configuration reloaded.
{
total: 268435456
free: 223001520
used: 41352552
real_used: 45433936
max_used: 45445968
fragments: 73
}
Ok. TLS configuration reloaded.
{
total: 268435456
free: 222377960
used: 41975592
real_used: 46057496
max_used: 46069232
fragments: 78
}
Ok. TLS configuration reloaded.
{
total: 268435456
free: 221748664
used: 42604992
real_used: 46686792
max_used: 46698080
fragments: 77
}
Ok. TLS configuration reloaded.
{
total: 268435456
free: 221110832
used: 43242408
real_used: 47324624
max_used: 47335608
fragments: 81
}
^C
[130][root@edgar-dev:~]#
```
### Troubleshooting
#### Reproduction
Every time tls.reload is called, the memory consumption grows.
#### Debugging Data
If you let me know what would be interesting for tracking this down, I am happy to provide logs/debugging data!
#### Log Messages
If you let me know what would be interesting for tracking this down, I am happy to provide logs/debugging data!
#### SIP Traffic
SIP traffic doesn't seem to be relevant here.
### Possible Solutions
Calling tls.reload less often, or restarting Kamailio before the memory is exhausted ;)
### Additional Information
```
version: kamailio 5.7.4 (x86_64/linux)
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS 1024, MAX_RECV_BUFFER_SIZE 262144, MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: unknown
compiled with gcc 12.2.0
```
* **Operating System**:
```
* Debian GNU/Linux 12 (bookworm)
* Linux edgar-dev 6.1.0-20-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64 GNU/Linux
```
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/3823
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/3823(a)github.com>
qasimloay created an issue (kamailio/kamailio#4495)
The Problem
When I make a call and the call ends with a BYE message containing `Reason: outofcredit`, the system returns a wrong duration: 71582788 seconds instead of the actual call time.
This is causing major billing problems with our OCS.
S-CSCF Debug Logs
Here are the debug logs (debug level) from S-CSCF when this happens:
[scscf logs with debug (ims_charging_bug) .log](https://github.com/user-attachments/files/23698499/scscf.logs.with.debug.ims_charging_bug.log)
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/4495
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/4495(a)github.com>
Ozzyboshi created an issue (kamailio/kamailio#4503)
Hello,
on my Kamailio installation I am experiencing a significant memory leak in SHM.
Here are the details of my system:
```
version: kamailio 6.0.3 (x86_64/linux)
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST,
NO_SIG_DEBUG, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE,
Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX,
FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER,
USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS: 1024
MAX_RECV_BUFFER_SIZE: 262144
MAX_SEND_BUFFER_SIZE: 262144
MAX_URI_SIZE: 1024
BUF_SIZE: 65535
DEFAULT PKG_SIZE: 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select
compiled with gcc 14.2.0
```
The memory leak appears only when the presence feature is enabled.
When presence is active, Kamailio starts running dialog_publish(), whose code is here:
https://github.com/kamailio/kamailio/blob/9dc160d1d2bdf0542d3d9d8ae090bb135…
This function does not send the PUBLISH directly: it calls pua_send_publish(), which is a function pointer referring to the send_publish() implementation in the pua module.
Then send_publish() eventually calls set_uac_req() and tmb.t_request():
https://github.com/kamailio/kamailio/blob/9dc160d1d2bdf0542d3d9d8ae090bb135…
Digging further, tmb.t_request() maps to request() in the TM module, which calls t_uac_with_ids() and then t_uac_prepare().
Now comes the suspicious part:
If I comment out the call to t_uac_prepare(), the memory leak disappears.
This doesn’t necessarily mean the bug is inside t_uac_prepare(), but it’s a strong hint.
t_uac_prepare() allocates a new struct cell and returns it:
https://github.com/kamailio/kamailio/blob/9dc160d1d2bdf0542d3d9d8ae090bb135…
My concern is: is this cell always freed?
The matching cleanup function is free_cell(), used only here:
https://github.com/kamailio/kamailio/blob/9dc160d1d2bdf0542d3d9d8ae090bb135…
From what I can tell, free_cell() is called only if all of these conditions are true:
- dst_cell == 0
- is_ack == 1
- dst_req == 0
In my situation no ACK is involved (Kamailio is a proxy that sends PUBLISH and immediately gets a 200 OK).
Therefore, is_ack is always false, meaning the free_cell() cleanup logic is skipped entirely.
I tried forcing free_cell() unconditionally, but it leads to crashes, so clearly other parts of the code still rely on this structure.
Does the current free_cell() logic look correct to you?
Is it expected that the struct cell allocated by t_uac_prepare() remains unfreed in cases where PUBLISH → 200 OK occurs without an ACK?
Any guidance on how to proceed or where else to look would be greatly appreciated.
Thanks
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/4503
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/4503(a)github.com>
dilyanpalauzov created an issue (kamailio/kamailio#4317)
In the 6.0 branch the FLAVOUR option offers `kamailio` as its only value in cmake-gui (ccmake). As it is a no-op, it should not be displayed in cmake-gui.
The option MODULE_GROUP_NAME is documented as "Group of modules to build (one or multiple space seperated group)". In cmake-gui, pressing Enter toggles between several values: DEFAULT, STANDARD, COMMON, KSTANDARD, KMINI, KPCRE, KMYSQL, KPOSTGRES, .... Using ccmake it is not possible to set multiple values. The help string should mention all possible values, and "group" should be plural: groups.
The options `XSLT_VALIDATE,` and `XSLT_VERBOSE,` in ccmake have a spurious trailing comma in their names.
The option `CMAKE_BUILD_TYPE` in Kamailio allows toggling between four possible values: Debug, Release, MinSizeRel and RelWithDebInfo. ClamAV adds one more value for cmake-gui: `None`. This toggling is not yet available in CMake's own build system (when building CMake with CMake) - https://gitlab.kitware.com/cmake/cmake/-/issues/26434 . Is it reasonable for Kamailio to also permit `None` as a CMAKE_BUILD_TYPE value? I do not know the answer, which is why I am raising the question.
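For reference, the usual way projects populate that cmake-gui dropdown is a STRINGS property on the cache entry, so permitting `None` would just be one more item in the list. A generic sketch, not a patch against Kamailio's actual CMakeLists.txt:

```cmake
# Sketch: offering None as a fifth CMAKE_BUILD_TYPE value in cmake-gui/ccmake
if(NOT CMAKE_BUILD_TYPE)
  set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Build type" FORCE)
endif()
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS
             "None" "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
```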
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/4317
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/4317(a)github.com>
Den4t created an issue (kamailio/kamailio#4391)
Hi!
Kamailio v5.8.6, used as a WebRTC registrar/gateway, wolfSSL v5.7.4, high call load, OS: Ubuntu 22.
Periodically Kamailio dumps core with an assertion failure:
```
Program terminated with signal SIGABRT, Aborted.
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140585212159808) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) backtrace
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140585212159808) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140585212159808) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140585212159808, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007fdc8ba5f476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007fdc8ba457f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007fdc8ba4571b in __assert_fail_base (fmt=0x7fdc8bbfa130 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x7fdc7a6b299f "nw == bytes_read", file=0x7fdc7a6b089a "tls_server.c", line=1056, function=<optimized out>) at ./assert/assert.c:94
#6 0x00007fdc8ba56e96 in __GI___assert_fail (assertion=0x7fdc7a6b299f "nw == bytes_read", file=0x7fdc7a6b089a "tls_server.c", line=1056, function=0x7fdc7a6b5068 "tls_h_read_f") at ./assert/assert.c:103
#7 0x00007fdc7a48cd89 in tls_h_read_f () from /usr/lib64/kamailio/modules/tls_wolfssl.so
#8 0x00005622baa30e8e in ?? ()
#9 0x00005622baa3273b in tcp_read_req ()
#10 0x00005622baa3c7c2 in ?? ()
#11 0x00005622babf61d6 in ?? ()
#12 0x00005622baa42b87 in tcp_receive_loop ()
#13 0x00005622baa24188 in tcp_init_children ()
#14 0x00005622ba7b1b33 in main_loop ()
#15 0x00005622ba7a5888 in main ()
```
I couldn't reproduce this issue in a lab, but I think the issue possibly depends on TLS traffic bursts.
We have some buggy WebRTC clients which periodically flood us with MESSAGE requests at a high CPS.
I assume this traffic somehow fills the BIO buffers in wolfSSL; as a result, the wolfSSL_BIO_write function returns an error.
I have no time to debug this deeply, so I made a quick fix here:
https://github.com/Den4t/kamailio/commit/20579f02354356dedb5f5ac1f9bff65314…
I see the following logs after the fix has been applied:
```
2025-09-02T09:24:04.393019+03:00 iptel /usr/sbin/kamailio[892554]: BUG: tls_wolfssl [tls_server.c:1058]: tls_h_read_f(): tls_h_read_f assertion check nw == bytes_read (nw=-1, bytes_read=7115,
npos=-1)
2025-09-02T10:44:21.556481+03:00 iptel /usr/sbin/kamailio[892559]: BUG: tls_wolfssl [tls_server.c:1058]: tls_h_read_f(): tls_h_read_f assertion check nw == bytes_read (nw=-1, bytes_read=6201,
npos=-1)
2025-09-02T10:58:31.563456+03:00 iptel /usr/sbin/kamailio[892558]: BUG: tls_wolfssl [tls_server.c:1058]: tls_h_read_f(): tls_h_read_f assertion check nw == bytes_read (nw=-1, bytes_read=7587,
npos=-1)
```
But the service stays alive.
It is better to lose one session than the whole service.
In any case this is just a crutch for a quick fix, so it would be nice if the tls_wolfssl module's maintainer took a deeper look at the problem.
P.S.
I assume the module's logic should check the BIO flags after the call to wolfSSL_BIO_write; e.g. this function can return an error with the WOLFSSL_BIO_FLAG_RETRY flag set.
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/4391
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/4391(a)github.com>
### Introduction
First, I apologize for the lack of detail here. I am unable to get a core dump and, as this is a production box, I was unable to run in debug mode.
### Description
Using Kamailio 5.8.4 on Debian 12, every 60-80 minutes the software would crash, blaming tls_wolfssl, for example:
```
2024-12-16T14:58:27.554497+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[9084]: CRITICAL: <core> [core/mem/q_malloc.c:535]: qm_free(): BUG: bad pointer 0x1 (out of memory block!) called from tls_wolfssl: tls_init.c: ser_free(240) - ignoring
2024-12-16T14:58:27.554555+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[9084]: CRITICAL: <core> [core/mem/q_malloc.c:126]: qm_debug_check_frag(): BUG: qm: fragm. 0x7f3440ba7560 (address 0x7f3440ba75a0) beginning overwritten (23)! Memory allocator was called from tls_wolfssl: tls_init.c:240. Fragment marked by :29. Exec from core/mem/q_malloc.c:546.
2024-12-16T14:58:27.616558+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[9089]: CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 103
2024-12-16T14:58:27.617684+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[8992]: ALERT: <core> [main.c:805]: handle_sigs(): child process 9084 exited by a signal 6
2024-12-16T14:58:27.618339+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[8992]: ALERT: <core> [main.c:809]: handle_sigs(): core was not generated
2024-12-16T14:58:27.618477+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[8992]: INFO: <core> [main.c:832]: handle_sigs(): terminating due to SIGCHLD
2024-12-16T14:58:27.618589+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[9088]: INFO: <core> [main.c:888]: sig_usr(): signal 15 received
```
```
2024-12-16T14:57:52.029277+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21605]: CRITICAL: <core> [core/mem/q_malloc.c:535]: qm_free(): BUG: bad pointer 0x1 (out of memory block!) called from tls_wolfssl: tls_init.c: ser_free(240) - ignoring
2024-12-16T14:57:52.029347+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21605]: CRITICAL: <core> [core/mem/q_malloc.c:126]: qm_debug_check_frag(): BUG: qm: fragm. 0x7f085996b500 (address 0x7f085996b540) beginning overwritten (0)! Memory allocator was called from tls_wolfssl: tls_init.c:240. Fragment marked by (null):0. Exec from core/mem/q_malloc.c:546.
2024-12-16T14:57:52.048597+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21611]: CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 101
2024-12-16T14:57:52.048921+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21512]: ALERT: <core> [main.c:805]: handle_sigs(): child process 21605 exited by a signal 6
2024-12-16T14:57:52.048953+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21512]: ALERT: <core> [main.c:809]: handle_sigs(): core was not generated
2024-12-16T14:57:52.048975+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21512]: INFO: <core> [main.c:832]: handle_sigs(): terminating due to SIGCHLD
2024-12-16T14:57:52.048997+00:00 ip-172-16-0-73 /usr/local/sbin/kamailio[21609]: INFO: <core> [main.c:888]: sig_usr(): signal 15 received
```
```
2024-12-16T16:21:35.967576+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[10591]: CRITICAL: <core> [core/mem/q_malloc.c:126]: qm_debug_check_frag(): BUG: qm: fragm. 0x7f442778d110 (address 0x7f442778d150) beginning overwritten (0)! Memory allocator was called from tls_wolfssl: tls_init.c:240. Fragment marked by (null):0. Exec from core/mem/q_malloc.c:546.
```
```
2024-12-16T13:31:55.995222+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[7602]: CRITICAL: <core> [core/mem/q_malloc.c:555]: qm_free(): BUG: freeing already freed pointer (0x7f8e55830a70), called from tls_wolfssl: tls_init.c: ser_free(240), first free tls_wolfssl: tls_init.c: ser_malloc(228) - ignoring
2024-12-16T13:32:40.093954+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[7603]: CRITICAL: <core> [core/mem/q_malloc.c:535]: qm_free(): BUG: bad pointer 0x7a100000000 (out of memory block!) called from tls_wolfssl: tls_init.c: ser_free(240) - ignoring
2024-12-16T13:32:40.094047+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[7603]: CRITICAL: <core> [core/mem/q_malloc.c:126]: qm_debug_check_frag(): BUG: qm: fragm. 0x7f8e55762720 (address 0x7f8e55762760) beginning overwritten (0)! Memory allocator was called from tls_wolfssl: tls_init.c:240. Fragment marked by (null):0. Exec from core/mem/q_malloc.c:546.
2024-12-16T13:32:40.260913+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[7504]: CRITICAL: <core> [core/mem/q_malloc.c:555]: qm_free(): BUG: freeing already freed pointer (0x7f8e55761a70), called from core: core/tcp_main.c: _tcpconn_free(1666), first free tls_wolfssl: tls_init.c: ser_free(240) - ignoring
2024-12-16T13:32:40.263520+00:00 ip-172-16-0-52 /usr/local/sbin/kamailio[7504]: CRITICAL: <core> [core/mem/q_malloc.c:126]: qm_debug_check_frag(): BUG: qm: fragm. 0x7f8e55762720 (address 0x7f8e55762760) beginning overwritten (0)! Memory allocator was called from tls_wolfssl: tls_init.c:240. Fragment marked by (null):0. Exec from core/mem/q_malloc.c:546.
```
The system was handling ~900 TLS registrations with ~2700 presence SUBSCRIBE events.
**Running the same config with `tls.so` does not result in crash.**
### Additional Information
* **Kamailio Version** - 5.8.4
```
version: kamailio 5.8.4 (x86_64/linux) 598105-dirty
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS 1024, MAX_RECV_BUFFER_SIZE 262144, MAX_SEND_BUFFER_SIZE 262144, MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: 598105 -dirty
compiled on 04:31:34 Dec 16 2024 with gcc 12.2.0
```
* **Operating System**:
```
Distributor ID: Debian
Description: Debian GNU/Linux 12 (bookworm)
Release: 12
Codename: bookworm
Linux ip-172-16-0-52 6.1.0-28-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.119-1 (2024-11-22) x86_64 GNU/Linux
```
* TCP Settings
```
children=16
disable_tcp=no
tcp_connection_lifetime=3605
tcp_accept_no_cl=yes
tcp_max_connections=8192
tcp_defer_accept=yes
tcp_accept_unique=0
tcp_connection_lifetime=3605
tcp_connection_match=1
tcp_connect_timeout=10
tcp_crlf_ping=yes
tcp_linger2=1
tcp_keepalive=yes
tcp_reuse_port=yes
tcp_keepidle=20
tcp_keepintvl=15
tcp_rd_buf_size=65536
tcp_msg_data_timeout=40
tcp_msg_read_timeout=40
tcp_check_timer=20
tls_threads_mode=2
tls_max_connections=8192
```
--
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/4074
You are receiving this because you are subscribed to this thread.
Message ID: <kamailio/kamailio/issues/4074(a)github.com>
THIS IS AN AUTOMATED MESSAGE, DO NOT REPLY.
A user has added themself to the list of users assigned to this task.
FS#100 - Assignment operators don't work
User who did this - Alex Hermann (axlh)
http://sip-router.org/tracker/index.php?do=details&task_id=100
You are receiving this message because you have requested it from the Flyspray bugtracking system. If you did not expect this message or don't want to receive mails in future, you can change your notification settings at the URL shown above.