- prevent parallel calls to gencookie() from generating the same cookie for rtpengine
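One common way to keep concurrently generated cookies distinct is to derive the sequence part from an atomically incremented counter. A minimal sketch of that idea (hypothetical, not necessarily the approach taken in this PR; the module's real `gencookie()` lives in `rtpengine.c`, and for separate worker processes the counter would have to live in shared memory):
```c
#include <stdio.h>
#include <stdatomic.h>

/* Hypothetical sketch, not the rtpengine module's gencookie():
 * atomic_fetch_add() hands every caller a distinct sequence number,
 * so two concurrent callers can no longer build the same cookie.
 * For separate forked processes, the counter would need to live in
 * shared memory instead of a plain static variable. */
static _Atomic unsigned int cookie_seq;

static void gen_unique_cookie(char *buf, size_t len, int pid)
{
	unsigned int seq = atomic_fetch_add(&cookie_seq, 1);
	snprintf(buf, len, "%d_%u ", pid, seq);
}
```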
#### Pre-Submission Checklist
- [ ] Commit message has the format required by CONTRIBUTING guide
- [ ] Commits are split per component (core, individual modules, libs, utils, ...)
- [ ] Each component has a single commit (if not, squash them into one commit)
- [ ] No commits to README files for modules (changes must be done to docbook files
in `doc/` subfolder, the README file is autogenerated)
#### Type Of Change
- [ ] Small bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds new functionality)
- [ ] Breaking change (fix or feature that would change existing functionality)
#### Checklist:
- [ ] PR should be backported to stable branches
- [ ] Tested changes locally
- [ ] Related to issue #XXXX (replace XXXX with an open issue number)
#### Description
<!-- Describe your changes in detail -->
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/4139
-- Commit Summary --
* rtpengine: fix race condn assigning same ip:port due to gencookie in parallel forks
-- File Changes --
M src/modules/rtpengine/rtpengine.c (3)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/4139.patch
https://github.com/kamailio/kamailio/pull/4139.diff
--
#### Pre-Submission Checklist
- [x] Commit message has the format required by CONTRIBUTING guide
- [x] Commits are split per component (core, individual modules, libs, utils, ...)
- [x] Each component has a single commit (if not, squash them into one commit)
- [x] No commits to README files for modules (changes must be done to docbook files
in `doc/` subfolder, the README file is autogenerated)
#### Type Of Change
- [x] Small bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds new functionality)
- [ ] Breaking change (fix or feature that would change existing functionality)
#### Checklist:
- [x] PR should be backported to stable branches
- [x] Tested changes locally
- [ ] Related to issue #XXXX (replace XXXX with an open issue number)
#### Description
I have refactored the RPM packaging and it now uses GitHub Actions.
A `tar.gz` archive with the RPM packages, the source RPM, and the dependency RPM files is now attached to the release message.
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/4194
-- Commit Summary --
* generate rpm packages using github actions
-- File Changes --
A .github/workflows/rpm.yml (57)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/4194.patch
https://github.com/kamailio/kamailio/pull/4194.diff
--
sergey-safarov created an issue (kamailio/kamailio#4180)
### Description
I have installed two `rtpengine` instances and want to use them as an active/backup pair: the `backup` should be used only when the `active` one is unavailable.
The closest existing setting for this is the rtpengine node `weight`, and I would like to use a configuration like:
```
MariaDB [kamailio]> select * from rtpengine;
+----+-------+---------------------------------+--------+----------+---------------------+
| id | setid | url | weight | disabled | stamp |
+----+-------+---------------------------------+--------+----------+---------------------+
| 1 | 0 | udp6:[2005:84c0:bf:11::20]:2223 | 1 | 0 | 1900-01-01 00:00:01 |
| 2 | 0 | udp6:[2005:84c0:bf:11::21]:2223 | 0 | 0 | 1900-01-01 00:00:01 |
+----+-------+---------------------------------+--------+----------+---------------------+
```
But when `weight=0`, the node is never selected and is handled as if it were disabled.
It would be fine to skip rtpengine nodes with `weight=0` while other nodes with `weight!=0` are available, but if no other nodes are available, the `weight=0` nodes should be used as a last resort.
### Expected behavior
Do not use rtpengine nodes with `weight=0` while other nodes with `weight!=0` are available, but if no other nodes are available, use the `weight=0` nodes as a last resort (a rough sketch of this selection logic follows).
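A rough sketch of the requested selection logic (a hypothetical illustration using field names in the style of the rtpengine module, not the module's actual `select_rtpp_node()`, which also does weighted, Call-ID-based balancing):
```c
/* Hypothetical sketch, not the module's actual code: prefer enabled
 * nodes with weight > 0, and only fall back to weight == 0 nodes when
 * no weighted node is available, instead of treating them as disabled. */
struct rtpp_node *select_node_with_fallback(struct rtpp_set *set)
{
	struct rtpp_node *node;

	/* first pass: enabled nodes with a non-zero weight */
	for(node = set->rn_first; node != NULL; node = node->rn_next) {
		if(!node->rn_disabled && node->rn_weight > 0)
			return node;
	}
	/* second pass: enabled weight == 0 nodes as a last resort */
	for(node = set->rn_first; node != NULL; node = node->rn_next) {
		if(!node->rn_disabled)
			return node;
	}
	return NULL; /* "no available proxies" */
}
```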
#### Actual observed behavior
When `weight=0`, the node is never selected and is handled as disabled.
#### Log Messages
```
rtpengine [rtpengine.c:3430]: rtpp_test(): rtpengine instance <udp6:[2005:84c0:bf:11::21]:2223> found, support for it enabled
rtpengine [rtpengine.c:3883]: select_rtpp_node(): rtpengine failed to select new for calllen=27 callid=1-10824@2005:84c0:bf:11::22
rtpengine [rtpengine.c:3183]: rtpp_function_call(): no available proxies
```
--
https://github.com/kamailio/kamailio/issues/4180
ChristianBergerSipgate created an issue (kamailio/kamailio#4167)
Hi,
we have found a potential bug: the TLS pseudo-variables return null for WSS connections.
Example config:
`xlog("L_INFO", "<mainLogic> REGISTER: TLS or WSS detected: fU=$fU tls_version=$tls_version tls_cipher_info=$tls_cipher_info ua=$ua\n");`
Log output:
```
Mar 7 08:37:28 sip-tcploadbalancer01 /usr/sbin/kamailio[1268940]: { "level": "ERROR", "module": "tls", "file": "tls_select.c", "line": 131, "function": "get_cur_connection", "message": "Transport protocol is not TLS (bug in config)\n" }
Mar 7 08:37:28 sip-tcploadbalancer01 /usr/sbin/kamailio[1268940]: { "level": "INFO", "module": "tls", "file": "tls_select.c", "line": 310, "function": "get_version", "message": "TLS connection not found in select_version\n" }
Mar 7 08:37:28 sip-tcploadbalancer01 /usr/sbin/kamailio[1268940]: { "level": "ERROR", "module": "tls", "file": "tls_select.c", "line": 131, "function": "get_cur_connection", "message": "Transport protocol is not TLS (bug in config)\n" }
Mar 7 08:37:28 sip-tcploadbalancer01 /usr/sbin/kamailio[1268940]: { "level": "INFO", "module": "tls", "file": "tls_select.c", "line": 201, "function": "get_cipher", "message": "TLS connection not found in select_cipher\n" }
Mar 7 08:37:28 sip-tcploadbalancer01 /usr/sbin/kamailio[1268940]: { "level": "INFO", "module": "xlog", "file": "xlog.c", "line": 278, "function": "", "message": "<mainLogic> REGISTER: TLS or WSS detected: fU=1125411e0 tls_version=<null> tls_cipher_info=<null> ua=webphone\n" }
```
As you can see, both `$tls_version` and `$tls_cipher_info` return `<null>` even though the underlying connection is via WSS.
We have looked into it, and it seems like `get_cur_connection()` in `modules/tls/tls_select.c` only checks for TLS, but not for WSS.
```
struct tcp_connection *get_cur_connection(struct sip_msg *msg)
{
struct tcp_connection *c;
if(_tls_pv_con != 0)
return _tls_pv_con;
if(msg->rcv.proto != PROTO_TLS) {
ERR("Transport protocol is not TLS (bug in config)\n");
return 0;
}
c = tcpconn_get(msg->rcv.proto_reserved1, 0, 0, 0,
cfg_get(tls, tls_cfg, con_lifetime));
if(c && c->type != PROTO_TLS) {
ERR("Connection found but is not TLS\n");
tcpconn_put(c);
return 0;
}
return c;
}
```
We think that checking for `PROTO_WSS` might solve the issue.
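For illustration, a minimal sketch of what such a change could look like (our assumption, not a tested patch; the second check may or may not need the same widening, depending on how WSS connections store their type):
```c
/* Hypothetical sketch of the suggested change, not a tested patch:
 * accept WSS (SIP over secure WebSocket) in addition to plain TLS. */
struct tcp_connection *get_cur_connection(struct sip_msg *msg)
{
	struct tcp_connection *c;
	if(_tls_pv_con != 0)
		return _tls_pv_con;
	/* this is the check that produces the reported error for WSS */
	if(msg->rcv.proto != PROTO_TLS && msg->rcv.proto != PROTO_WSS) {
		ERR("Transport protocol is not TLS or WSS (bug in config)\n");
		return 0;
	}
	c = tcpconn_get(msg->rcv.proto_reserved1, 0, 0, 0,
			cfg_get(tls, tls_cfg, con_lifetime));
	/* speculative: widen the connection-type check the same way */
	if(c && c->type != PROTO_TLS && c->type != PROTO_WSS) {
		ERR("Connection found but is not TLS or WSS\n");
		tcpconn_put(c);
		return 0;
	}
	return c;
}
```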
--
https://github.com/kamailio/kamailio/issues/4167
Opening this generic issue to track the problems encountered when trying to switch deb package generation to CMake:
--
https://github.com/kamailio/kamailio/issues/4053
### Description
We are now using Kamailio on the edge as a presence proxy to scale presence. We have 3 internal Kamailio instances handling the actual subscriptions and publishing. This works well; however, once every 24-36 hours we experience a crash on the border proxy instance. We did not experience this when the border proxy itself acted as a presence server (i.e., there was no proxying of SUBSCRIBE).
### Troubleshooting
#### Reproduction
I believe this happens when a customer's network randomly experiences issues where it does not receive our replies to SUBSCRIBEs (and keeps resending the SUBSCRIBE). Logs are below.
#### Debugging Data
Here's the output, with any sensitive info in the payload replaced.
```
GNU gdb (Debian 13.1-3) 13.1
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/kamailio...
Reading symbols from /usr/lib/debug/.build-id/ce/c19d0ed1a928e4e25a84ae20d77e9c904a4892.debug...
warning: Can't open file /dev/zero (deleted) during file-backed mapping note processing
[New LWP 26]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `kamailio -DD -E -m 8000 -M 512 -f /etc/kamailio/kamailio.cfg -P /var/run/kamail'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 via_builder (len=len@entry=0x7ffed7e3ecb8, msg=msg@entry=0x7ff9529548b0, send_info=send_info@entry=0x7ff764d20bb0, branch=branch@entry=0x7ffed7e3ecc0, extra_params=0x0,
hp=hp@entry=0x7ffed7e3ebb0) at core/msg_translator.c:2910
2910 core/msg_translator.c: No such file or directory.
(gdb) bt full
#0 via_builder (len=len@entry=0x7ffed7e3ecb8, msg=msg@entry=0x7ff9529548b0, send_info=send_info@entry=0x7ff764d20bb0, branch=branch@entry=0x7ffed7e3ecc0, extra_params=0x0,
hp=hp@entry=0x7ffed7e3ebb0) at core/msg_translator.c:2910
via_len = <optimized out>
extra_len = <optimized out>
line_buf = <optimized out>
max_len = <optimized out>
via_prefix_len = <optimized out>
address_str = <optimized out>
port_str = 0x0
send_sock = 0x0
comp_len = <optimized out>
comp_name_len = <optimized out>
port = <optimized out>
proto = <optimized out>
ip = {af = 536870912, len = 0, u = {addrl = {534820896, 1586192}, addr32 = {534820896, 0, 1586192, 0}, addr16 = {47136, 8160, 0, 0, 13328, 24, 0, 0},
addr = " \270\340\037\000\000\000\000\0204\030\000\000\000\000"}}
from = 0x0
local_addr = {s = {sa_family = 16, sa_data = "#\000\000\000\000\000\020\000\000\000\000\000\000"}, sin = {sin_family = 16, sin_port = 35, sin_addr = {s_addr = 0},
sin_zero = "\020\000\000\000\000\000\000"}, sin6 = {sin6_family = 16, sin6_port = 35, sin6_flowinfo = 0, sin6_addr = {__in6_u = {
__u6_addr8 = "\020\000\000\000\000\000\000\000\003\000\000\000\000\000\000", __u6_addr16 = {16, 0, 0, 0, 3, 0, 0, 0}, __u6_addr32 = {16, 0, 3, 0}}},
sin6_scope_id = 3000082688}, sas = {ss_family = 16,
__ss_padding = "#\000\000\000\000\000\020\000\000\000\000\000\000\000\003\000\000\000\000\000\000\000\000\241\321\262cg\357`@\000\000\000\000\000\000\000X\f-\177!V\000\000\020\016\217R\371\177\000\000\001\000\000\000!V\000\000*\000\000\000\000\000\000\000\001\000\000\000\376\177\000\000\004\360;\177!V\000\000\250,A\177!V\000\000\000\355\343\327\376\177\000\000\000\241\321\262cg\357`h\256\006R\371\177\000", __ss_align = 94701866020241}}
con = 0x0
rxavp = 0x0
xname = {s = 0x0, len = -2141140176}
__func__ = "via_builder"
#1 0x000056217f155030 in create_via_hf (len=len@entry=0x7ffed7e3ecb8, msg=msg@entry=0x7ff9529548b0, send_info=send_info@entry=0x7ff764d20bb0, branch=branch@entry=0x7ffed7e3ecc0)
at core/msg_translator.c:3232
via = <optimized out>
extra_params = {s = 0x0, len = 0}
hp = {host = 0x56217f4fb380 <default_global_address>, port = 0x56217f4fb370 <default_global_port>}
sbuf = "\331\023\000\000\001", '\000' <repeats 18 times>
--Type <RET> for more, q to quit, c to continue without paging--info locals
slen = <optimized out>
xparams = <optimized out>
id_buf = <optimized out>
id_len = 0
__func__ = "create_via_hf"
__llevel = <optimized out>
__kld = <optimized out>
#2 0x000056217f1592f7 in build_req_buf_from_sip_req (msg=msg@entry=0x7ff9529548b0, returned_len=returned_len@entry=0x7ffed7e3eeac, send_info=send_info@entry=0x7ff764d20bb0,
mode=mode@entry=128) at core/msg_translator.c:2096
len = 581
new_len = <optimized out>
received_len = 0
rport_len = 0
uri_len = 0
via_len = 0
body_delta = 0
line_buf = 0x0
received_buf = 0x0
rport_buf = 0x0
new_buf = 0x0
buf = 0x56217f5866c0 <buf> "SUBSCRIBE sip:74@sip.domain.co SIP/2.0\r\nVia: SIP/2.0/UDP 12.171.207.82:25744;branch=z9hG4bK1805670803;rport\r\nFrom: \"Heather Myers\" <sip:1086448@sip.domain.co>;tag=1805518213\r\nTo: <sip:74@sip.domain.co>\r\nCa"...
path_buf = {s = 0x0, len = 0}
offset = 0
s_offset = 0
size = <optimized out>
via_anchor = 0x7ff952951300
via_lump = <optimized out>
via_rm = <optimized out>
via_insert_param = 0x0
path_anchor = <optimized out>
path_lump = <optimized out>
branch = {s = 0x7ff952954ed8 "z9hG4bKc7ed.5819b4295fe83f981a2fc19b7f739d7e.0", len = 46}
flags = 262273
--Type <RET> for more, q to quit, c to continue without paging--list
udp_mtu = <optimized out>
di = {send_sock = 0x0, to = {s = {sa_family = 18608, sa_data = "\225R\371\177\000\000\002\000\023\331\n4\001\r"}, sin = {sin_family = 18608, sin_port = 21141, sin_addr = {
s_addr = 32761}, sin_zero = "\002\000\023\331\n4\001\r"}, sin6 = {sin6_family = 18608, sin6_port = 21141, sin6_flowinfo = 32761, sin6_addr = {__in6_u = {
__u6_addr8 = "\002\000\023\331\n4\001\r\000\000\000\000\000\000\000", __u6_addr16 = {2, 55571, 13322, 3329, 0, 0, 0, 0}, __u6_addr32 = {3641901058, 218182666, 0, 0}}},
sin6_scope_id = 0}, sas = {ss_family = 18608, __ss_padding = "\225R\371\177\000\000\002\000\023\331\n4\001\r", '\000' <repeats 103 times>, __ss_align = 0}}, id = 0,
send_flags = {f = 0, blst_imask = 0}, proto = 0 '\000', proto_pad0 = 0 '\000', proto_pad1 = 0}
ret = <optimized out>
__func__ = "build_req_buf_from_sip_req"
error00 = <optimized out>
#3 0x00007ff951fbe90d in prepare_new_uac (t=t@entry=0x7ff764d20880, i_req=i_req@entry=0x7ff9529548b0, branch=branch@entry=0, uri=<optimized out>, uri@entry=0x7ff9529548e8,
path=<optimized out>, path@entry=0x7ff952954f60, next_hop=<optimized out>, fsocket=0x7ff9528f0e10, snd_flags=..., fproto=<optimized out>, flags=<optimized out>,
instance=<optimized out>, ruid=<optimized out>, location_ua=<optimized out>) at ./src/modules/tm/t_fwd.c:482
shbuf = 0x0
add_rm_backup = <optimized out>
body_lumps_backup = <optimized out>
parsed_uri_bak = {user = {
s = 0x56217f5866ce <buf+14> "74(a)sip.domain.co SIP/2.0\r\nVia: SIP/2.0/UDP 12.171.207.82:25744;branch=z9hG4bK1805670803;rport\r\nFrom: \"Heather Myers\" <sip:1086448@sip.domain.co>;tag=1805518213\r\nTo: <sip:74@sip.domain.co>\r\nCall-ID: 0_18056"..., len = 2}, passwd = {s = 0x0, len = 0}, host = {
s = 0x56217f5866d1 <buf+17> "sip.domain.co SIP/2.0\r\nVia: SIP/2.0/UDP 12.171.207.82:25744;branch=z9hG4bK1805670803;rport\r\nFrom: \"Heather Myers\" <sip:1086448@sip.domain.co>;tag=1805518213\r\nTo: <sip:74@sip.domain.co>\r\nCall-ID: 0_18056078"..., len = 11}, port = {s = 0x0, len = 0}, params = {s = 0x0, len = 0}, sip_params = {s = 0x0, len = 0}, headers = {
s = 0x0, len = 0}, port_no = 0, proto = 0, type = SIP_URI_T, flags = 0, transport = {s = 0x0, len = 0}, ttl = {s = 0x0, len = 0}, user_param = {s = 0x0, len = 0}, maddr = {
s = 0x0, len = 0}, method = {s = 0x0, len = 0}, lr = {s = 0x0, len = 0}, r2 = {s = 0x0, len = 0}, gr = {s = 0x0, len = 0}, transport_val = {s = 0x0, len = 0}, ttl_val = {
s = 0x0, len = 0}, user_param_val = {s = 0x0, len = 0}, maddr_val = {s = 0x0, len = 0}, method_val = {s = 0x0, len = 0}, lr_val = {s = 0x0, len = 0}, r2_val = {s = 0x0,
len = 0}, gr_val = {s = 0x0, len = 0}}
ret = -1
len = 32761
parsed_uri_ok_bak = <optimized out>
free_new_uri = 1
msg_uri_bak = {s = 0x0, len = 0}
dst_uri_bak = {s = 0x7ff9529525f0 "sip:10.52.1.13:5081", len = 19}
dst_uri_backed_up = 1
path_bak = {s = 0x0, len = 0}
free_path = 1
instance_bak = {s = 0x0, len = 0}
```
#### Log Messages
```
ERROR 2025-01-10T06:00:31.847348563Z [resource.labels.containerName: kamailio] 8(28) CRITICAL: {1 1 SUBSCRIBE 0_1907143137(a)192.168.100.28} tm [timer.h:188]: _set_fr_retr(): already added: 0x7ff763e62250 , tl=0x7ff763e62270!!!
ERROR 2025-01-10T06:00:31.847388483Z [resource.labels.containerName: kamailio] 8(28) CRITICAL: {1 1 SUBSCRIBE 0_1907143137(a)192.168.100.28} tm [t_fwd.c:1613]: t_send_branch(): BUG: retransmission already started for: 0x7ff763e62250
ERROR 2025-01-10T06:00:31.847394809Z [resource.labels.containerName: kamailio] 8(28) ERROR: {1 1 SUBSCRIBE 0_1907143137(a)192.168.100.28} sl [sl_funcs.c:428]: sl_reply_error(): stateless error reply used: No error (0/SL)
ERROR 2025-01-10T06:00:31.847400738Z [resource.labels.containerName: kamailio] 8(28) BUG: {1 1 SUBSCRIBE 0_1907143137(a)192.168.100.28} tm [t_lookup.c:2052]: t_unref(): REQ_ERR DELAYED should have been caught much earlier for 0x7ff763e61f70: 24 (hex 18)
ERROR 2025-01-10T06:00:47.346744773Z [resource.labels.containerName: kamailio] 45(65) ERROR: <core> [core/tcp_main.c:4818]: tcpconn_main_timeout(): connect 64.63.142.90:12190 failed (timeout)
ERROR 2025-01-10T06:00:58.274915089Z [resource.labels.containerName: kamailio] 45(65) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 20
ERROR 2025-01-10T06:01:02.715789076Z [resource.labels.containerName: kamailio] 0(1) ALERT: <core> [main.c:805]: handle_sigs(): child process 26 exited by a signal 11
ERROR 2025-01-10T06:01:02.715827333Z [resource.labels.containerName: kamailio] 0(1) ALERT: <core> [main.c:809]: handle_sigs(): core was generated
ERROR 2025-01-10T06:01:02.718815287Z [resource.labels.containerName: kamailio] 45(65) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 25
ERROR 2025-01-10T06:01:02.720985493Z [resource.labels.containerName: kamailio] 45(65) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 23
ERROR 2025-01-10T06:01:02.725450552Z [resource.labels.containerName: kamailio] 45(65) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 30
```
### Possible Solutions
I can at least say that when this segfault happens, these log messages are always present right before the crash. However, I also see instances of these logs without a crash immediately following.
On the proxy, this is what we do in the WITHINDLG route to send SUBSCRIBEs to the presence route:
```
if KSR.is_SUBSCRIBE() and KSR.is_myself_ruri() then
-- in-dialog subscribe requests
ksr_route_presence();
KSR.x.exit();
end
```
And then this is our ksr_route_presence():
```
function ksr_route_presence()
if not KSR.is_SUBSCRIBE() then return 1; end
local fuser = KSR.kx.get_fuser();
if fuser=='' then
KSR.xlog.xinfo('Subscription attempt without username from IP ' .. KSR.kx.get_srcip() .. ' - Rejecting\n');
KSR.sl.sl_send_reply(404, 'Not Found');
KSR.x.exit();
end
KSR.pv.sets('$avp(s:tenant)', string.match(fuser, '-(.*)'));
local setId = '6' .. KSR.kx.get_def('DEFAULT_DISPATCH_SET');
local modeId = '7';
if KSR.dispatcher.ds_select_dst(setId, modeId)<0 then
KSR.sl.send_reply(404, 'No destination');
KSR.x.exit();
end
KSR.corex.set_send_socket(KSR.kx.get_def('PRIVATE_LISTEN_IP') .. ':' .. KSR.kx.get_def('INTERNAL_PORT'));
KSR.tm.t_on_failure('ksr_route_rtf_dispatch');
ksr_route_relay();
end
```
### Additional Information
* **Kamailio Version** - output of `kamailio -v`
```
version: kamailio 5.8.4 (x86_64/linux)
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS 1024, MAX_RECV_BUFFER_SIZE 262144, MAX_SEND_BUFFER_SIZE 262144, MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: unknown
compiled with gcc 12.2.0
```
* **Operating System**:
```
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```
```
Linux gke-us-south1-external-sip-800ca69e-qyb3 5.15.0-1048-gke #53-Ubuntu SMP Tue Nov 28 00:39:01 UTC 2023 x86_64 GNU/Linux
```
--
https://github.com/kamailio/kamailio/issues/4102
### Description
I have run into an issue when sending diameter requests using the ims_diameter_server module. Shared memory keeps increasing and is not freed until Kamailio is restarted. This eventually leads to all shared memory being used, after which no more can be allocated.
This only happens when the request is sent synchronously. When it is sent asynchronously, shared memory is freed as far as I can see.
The following functions have been used for sending requests: `diameter_request([peer], appid, commandcode, message)` and `diameter_request_async([peer], appid, commandcode, message)`.
### Troubleshooting
Sent a bunch of diameter requests and verified that the shared memory allocated by the following functions is not freed: `do_receive(600)`, `AAATranslateMessage(500)`, `AAACreateAVP(137)`.
#### Reproduction
It is possible to reproduce by sending synchronous diameter requests and checking the shared memory usage.
Example of a diameter request block:
```
route[CHECK_LOCATION] {
diameter_request("hss.ims.test.local", "16777217", "306",
"[ " +
"{ \"avpCode\":260, \"vendorId\":0, \"Flags\":64,
\"list\":[ { \"avpCode\":266, \"vendorId\":0,
\"Flags\":64, \"int32\":10415 }, { \"avpCode\":258,
\"vendorId\":0, \"Flags\":64, \"int32\":16777217 } ] },
" +
"{ \"avpCode\":277, \"vendorId\":0, \"Flags\":64,
\"int32\":1 }, " +
"{ \"avpCode\":283, \"vendorId\":0, \"Flags\":64,
\"string\":\"ims.test.local\"}, " +
"{ \"avpCode\":706, \"vendorId\":10415,
\"Flags\":192, \"int32\":0 }, " +
"{ \"avpCode\":700, \"vendorId\":10415,
\"Flags\":192, \"list\": [ { \"avpCode\":601,
\"vendorId\":10415, \"Flags\":192,
\"string\":\"tel:" + $fU + "\" }]}, " +
"{ \"avpCode\":707, \"vendorId\":10415,
\"Flags\":192, \"int32\":0 }, " +
"{ \"avpCode\":703, \"vendorId\":10415,
\"Flags\":192, \"int32\":14 } ]");
xlog("L_INFO", "Sent Diameter request");
}
```
Stats:
- cdp:replies_received = 343310
- shmem:fragments = 142
- shmem:free_size = 1221489992
- shmem:max_used_size = 948242168
- shmem:real_used_size = 925993656
- shmem:total_size = 2147483648
- shmem:used_size = 558205656

Output of `kamcmd mod.stats cdp shm`:
- do_receive(600): 256245808
- AAATranslateMessage(500): 64420712
- AAACreateAVP(137): 218100320
- Total: 538774000
### Additional Information
* **Kamailio Version** - output of `kamailio -v`
```
version: kamailio 5.7.4 (x86_64/linux) dc393e
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS 1024, MAX_RECV_BUFFER_SIZE 262144, MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: dc393e
compiled on 13:56:27 Feb 27 2024 with gcc 8.5.0
```
* **Operating System**:
```
Linux bmatkamas1 4.18.0-553.16.1.el8_10.x86_64 #1 SMP Thu Aug 1 04:16:12 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux release 8.10 (Ootpa)
```
--
https://github.com/kamailio/kamailio/issues/4116
#### Pre-Submission Checklist
- [x] Commit message has the format required by CONTRIBUTING guide
- [x] Commits are split per component (core, individual modules, libs, utils, ...)
- [x] Each component has a single commit (if not, squash them into one commit)
- [x] No commits to README files for modules (changes must be done to docbook files
in `doc/` subfolder, the README file is autogenerated)
#### Type Of Change
- [x] Small bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds new functionality)
- [ ] Breaking change (fix or feature that would change existing functionality)
#### Checklist:
- [ ] PR should be backported to stable branches
- [x] Tested changes locally
- [ ] Related to issue #XXXX (replace XXXX with an open issue number)
#### Description
We've been running the ims_charging module in production for quite some time, without any issues.
A couple of days ago the whole instance froze, but I was able to get a dump of the stuck processes before restarting. The traffic volume has gradually increased over time, so it is likely related to that.
I then made a summary of the different types of stuck processes: which locks they are holding, and which locks they are waiting for:
```
holding lock:
AAASessionsLock
waiting for lock:
lock_get(peer_list_lock);
#1 0x00007fe70eba8cde in futex_get (lock=0x7fe6e4a5c680) at ../../core/futexlock.h:108
#2 0x00007fe70ebaaca2 in get_peer_by_fqdn (fqdn=0x7fe6e4a5ba30) at peermanager.c:259
#3 0x00007fe70ebb64b7 in get_first_connected_route (cdp_session=0x7fe6e5cc5350, r=0x7fe6e4a5ba30, app_id=4, vendor_id=10415) at routing.c:115
#4 0x00007fe70ebb9a37 in get_routing_peer (cdp_session=0x7fe6e5cc5350, m=0x7fe6e5447ab0) at routing.c:293
#5 0x00007fe70ebcaf8c in AAASendMessage (message=0x7fe6e5447ab0, callback_f=0x7fe707a9e323 <resume_on_initial_ccr>, callback_param=0x7fe6e5297110) at diameter_comm.c:139
#6 0x00007fe707a9d3e0 in Ro_Send_CCR (msg=0x7fe70f011100, dlg=0x7fe6e5c571f0, dir=0, reservation_units=30, incoming_trunk_id=0x7ffe50efb060, outgoing_trunk_id=0x7ffe50efb050, pani=0x7ffe50efaee0, action=0x7fe70efa4730, tindex=10484, tlabel=677172593) at ims_ro.c:1511
#7 0x00007fe707a8285d in ki_ro_ccr (msg=0x7fe70f011100, s_route_name=0x7ffe50efb080, s_direction=0x7ffe50efb070, reservation_units=30, s_incoming_trunk_id=0x7ffe50efb060, s_outgoing_trunk_id=0x7ffe50efb050) at ims_charging_mod.c:742
#8 0x00007fe707a7bf01 in w_ro_ccr (msg=0x7fe70f011100, c_route_name=0x7fe70ef8b8d0 "\220\311\371\016\347\177", c_direction=0x7fe70ef8b980 "p\240\371\016\347\177", reservation_units=30, c_incoming_trunk_id=0x7fe70ef8ba30 "p\241\371\016\347\177", c_outgoing_trunk_id=0x7fe70ef8bae0 "\360\241\371\016\347\177") at ims_charging_mod.c:507
#9 0x00000000004858d8 in do_action (h=0x7ffe50efb970, a=0x7fe70ef99e20, msg=0x7fe70f011100) at core/action.c:1144
#10 0x00000000004928d6 in run_actions (h=0x7ffe50efb970, a=0x7fe70ef99e20, msg=0x7fe70f011100) at core/action.c:1618
#11 0x0000000000492f52 in run_actions_safe (h=0x7ffe50eff1e0, a=0x7fe70ef99e20, msg=0x7fe70f011100) at core/action.c:1681
#12 0x0000000000450156 in rval_get_long (h=0x7ffe50eff1e0, msg=0x7fe70f011100, i=0x7ffe50efbec8, rv=0x7fe70ef9c1d8, cache=0x0) at core/rvalue.c:973
#13 0x0000000000454d24 in rval_expr_eval_long (h=0x7ffe50eff1e0, msg=0x7fe70f011100, res=0x7ffe50efbec8, rve=0x7fe70ef9c1d0) at core/rvalue.c:1854
#14 0x0000000000454d52 in rval_expr_eval_long (h=0x7ffe50eff1e0, msg=0x7fe70f011100, res=0x7ffe50efc448, rve=0x7fe70ef9b920) at core/rvalue.c:1864
#15 0x00000000004850ce in do_action (h=0x7ffe50eff1e0, a=0x7fe70ef9b070, msg=0x7fe70f011100) at core/action.c:1097
--
holding lock:
AAASessionsLock
waiting for lock:
lock_get(peer_list_lock);
#1 0x00007fe70eba8d4f in futex_get (lock=0x7fe6e4a5c680) at ../../core/futexlock.h:121
#2 0x00007fe70ebaaca2 in get_peer_by_fqdn (fqdn=0x7ffe50efab90) at peermanager.c:259
#3 0x00007fe70ebb8e89 in get_routing_peer (cdp_session=0x7fe6e5ab6910, m=0x7fe6e5435be0) at routing.c:252
#4 0x00007fe70ebcaf8c in AAASendMessage (message=0x7fe6e5435be0, callback_f=0x7fe707a95edc <resume_on_termination_ccr>, callback_param=0x0) at diameter_comm.c:139
#5 0x00007fe707a95b02 in send_ccr_stop_with_param (ro_session=0x7fe6e5ab65e0, code=0, reason=0x7ffe50efb060) at ims_ro.c:1181
#6 0x00007fe707a72ff7 in dlg_terminated (dlg=0x7fe6e623d7a0, type=64, termcode=0, reason=0x7fe707ab72b3 "normal call clearing", _params=0x7fe707f67280 <params>) at dialog.c:249
#7 0x00007fe707a6a729 in dlg_callback_received (dlg=0x7fe6e623d7a0, type=64, _params=0x7fe707f67280 <params>) at dialog.c:25
#8 0x00007fe707d341b9 in run_dlg_callbacks (type=64, dlg=0x7fe6e623d7a0, req=0x7fe70f011100, rpl=0x0, dir=1, dlg_data=0x0) at dlg_cb.c:271
#9 0x00007fe707cf4db4 in dlg_terminated (req=0x7fe70f011100, dlg=0x7fe6e623d7a0, dir=1) at dlg_handlers.c:413
#10 0x00007fe707cfddeb in dlg_onroute (req=0x7fe70f011100, route_params=0x7ffe50efb6d0, param=0x0) at dlg_handlers.c:1097
#11 0x00007fe70ad285f6 in run_rr_callbacks (req=0x7fe70f011100, rr_param=0x7ffe50efb7c0) at rr_cb.c:96
#12 0x00007fe70ad3ae92 in after_loose (_m=0x7fe70f011100, preloaded=0) at loose.c:1021
#13 0x00007fe70ad3b5ce in loose_route_mode (_m=0x7fe70f011100, _mode=0) at loose.c:1056
#14 0x00007fe70ad3f74f in w_loose_route (msg=0x7fe70f011100, p1=0x0, p2=0x0) at rr_mod.c:273
#15 0x00000000004855ff in do_action (h=0x7ffe50efc390, a=0x7fe70efe0d40, msg=0x7fe70f011100) at core/action.c:1121
--
holding lock:
lock_get(peer_list_lock);
waiting for lock:
lock_get(p->lock);
#1 0x00007fe70eba8d4f in futex_get (lock=0x7fe6e4a5cbd0) at ../../core/futexlock.h:121
#2 0x00007fe70ebab0ae in peer_timer (now=1742807320, ptr=0x0) at peermanager.c:286
#3 0x00007fe70ebd0f39 in timer_loop () at timer.c:116
#4 0x00007fe70ebd21b2 in timer_process (returns=0) at timer.c:216
#5 0x00007fe70eb8ccf8 in diameter_peer_start (blocking=0) at diameter_peer.c:350
#6 0x00007fe70eb7cbb2 in cdp_child_init (rank=0) at cdp_mod.c:272
--
holding lock:
lock_get(p->lock)
waiting for lock:
AAASessionsLock
#1 0x00007fe70ebf2597 in futex_get (lock=0x7fe6e4a5d490) at ../../core/futexlock.h:108
#2 0x00007fe70ebf26f1 in AAASessionsLock (hash=0) at session.c:79
#3 0x00007fe70ebf5e6e in cdp_get_session (id=...) at session.c:316
#4 0x00007fe70eba6892 in Snd_Message (p=0x7fe6e4a5c880, msg=0x7fe6e63998d0) at peerstatemachine.c:1237
#5 0x00007fe70eba003e in sm_process (p=0x7fe6e4a5c880, event=Send_Message, msg=0x7fe6e63998d0, peer_locked=0, sock=0) at peerstatemachine.c:429
#6 0x00007fe70ebcbdc6 in AAASendMessage (message=0x7fe6e63998d0, callback_f=0x7fe707a8f208 <resume_on_interim_ccr>, callback_param=0x7fe6e6190a90) at diameter_comm.c:166
#7 0x00007fe707a8edb3 in send_ccr_interim (ro_session=0x7fe6e5399160, used=60, reserve=30) at ims_ro.c:847
#8 0x00007fe707a68bd6 in ro_session_ontimeout (tl=0x7fe6e5399200) at ro_timer.c:513
#9 0x00007fe707a63078 in ro_timer_routine (ticks=114862426, attr=0x0) at ro_timer.c:279
#10 0x00000000004fd33e in compat_old_handler (ti=1837798827, tl=0x7fe6e4cf6260, data=0x7fe6e4cf6260) at core/timer.c:980
#11 0x00000000004fde7a in slow_timer_main () at core/timer.c:1103
#12 0x000000000042e4e7 in main_loop () at main.c:1911
#13 0x000000000043876c in main (argc=10, argv=0x7ffe50f001c8) at main.c:3236
```
In `get_first_connected_route()` in `routing.c` of the cdp module there are two places where `get_peer_by_fqdn()` is called. One of them unlocks and re-locks the session list before and after (with a comment about not holding two locks at a time), while the other does not.
I am currently testing two versions of a fix. The first simply does the same relock around the other `get_peer_by_fqdn()` call, but this PR takes an approach that may fix more latent issues. `sm_process()` also has some strange handling via `Rcv_Process()` after the peer lock is released, which seems to be about the same thing; my deadlock is in the `Snd_Message()` equivalent.
I have now removed this queueing behaviour and instead re-lock the peer, to (hopefully) get the same locking order as the other operations (a rough sketch of the pattern follows below).
The problem with this kind of issue is how rarely it occurs. I just wanted to share my findings so that others with more knowledge of the cdp module can comment, make suggestions, and hopefully do some extra testing.
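To illustrate, a rough sketch of the unlock/relock pattern around `get_peer_by_fqdn()` (simplified and hypothetical; the names follow the cdp module, but this is not the literal diff of this PR):
```c
/* Hypothetical sketch of the relock pattern, not the literal patch:
 * drop the session lock before get_peer_by_fqdn() (which acquires
 * peer_list_lock internally) and re-acquire it afterwards, so the
 * session lock and the peer list lock are never held at the same time. */
static peer *lookup_peer_relocked(cdp_session_t *cdp_session, str *fqdn)
{
	peer *p;

	if(cdp_session)
		AAASessionsUnlock(cdp_session->hash); /* release the session lock first */

	p = get_peer_by_fqdn(fqdn); /* takes and releases peer_list_lock */

	if(cdp_session)
		AAASessionsLock(cdp_session->hash); /* re-acquire before continuing */

	return p;
}
```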
You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/4191
-- Commit Summary --
* cdp: restructure locking order to prevent rare deadlock
-- File Changes --
M src/modules/cdp/peerstatemachine.c (25)
M src/modules/cdp/routing.c (6)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/4191.patchhttps://github.com/kamailio/kamailio/pull/4191.diff
--
### Description
We are using Kamailio 5.7.4 on Debian 12 (from http://deb.kamailio.org/kamailio57) with rtpengine as an edge proxy for our clients. The instance terminates SIP/TLS (with client certificates) and forwards the SIP traffic to internal systems.
After some days we are getting errors like this:
`tls_complete_init(): tls: ssl bug #1491 workaround: not enough memory for safe operation: shm=7318616 threshold1=8912896`
First we thought Kamailio just didn't have enough memory, so we doubled it.
But after some days the log message (and the user issues) occurred again.
So we monitored the shmem statistics and found that used and max_used are constantly growing until they reach the limit.
As I mentioned, we are using client certificates, so we are also using the CRL feature.
We have a systemd timer which fetches the CRL every hour and runs `kamcmd tls.reload` when finished.
Our tls.cfg looks like this:
```
[server:default]
method = TLSv1.2+
private_key = /etc/letsencrypt/live/hostname.de/privkey.pem
certificate = /etc/letsencrypt/live/hostname.de/fullchain.pem
ca_list = /etc/kamailio/ca_list.pem
ca_path = /etc/kamailio/ca_list.pem
crl = /etc/kamailio/combined.crl.pem
verify_certificate = yes
require_certificate = yes
[client:default]
verify_certificate = yes
require_certificate = yes
```
After testing a bit, we found that every time tls.reload is executed Kamailio consumes a bit more memory, which eventually leads to all the memory being consumed, and in turn to issues for our users.
See the following example:
```
[0][root@edgar-dev:~]# while true ; do /usr/sbin/kamcmd tls.reload ; /usr/sbin/kamcmd core.shmmem ; sleep 1 ; done
Ok. TLS configuration reloaded.
{
total: 268435456
free: 223001520
used: 41352552
real_used: 45433936
max_used: 45445968
fragments: 73
}
Ok. TLS configuration reloaded.
{
total: 268435456
free: 222377960
used: 41975592
real_used: 46057496
max_used: 46069232
fragments: 78
}
Ok. TLS configuration reloaded.
{
total: 268435456
free: 221748664
used: 42604992
real_used: 46686792
max_used: 46698080
fragments: 77
}
Ok. TLS configuration reloaded.
{
total: 268435456
free: 221110832
used: 43242408
real_used: 47324624
max_used: 47335608
fragments: 81
}
^C
[130][root@edgar-dev:~]#
```
### Troubleshooting
#### Reproduction
Every time tls.reload is called, the memory consumption grows.
#### Debugging Data
If you let me know what would be interesting for tracking this down, I am happy to provide logs/debugging data!
#### Log Messages
If you let me know what would be interesting for tracking this down, I am happy to provide logs/debugging data!
#### SIP Traffic
SIP doesn't seem to be relevant here.
### Possible Solutions
Calling tls.reload less often, or restarting Kamailio before the memory is exhausted ;)
### Additional Information
```
version: kamailio 5.7.4 (x86_64/linux)
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS 1024, MAX_RECV_BUFFER_SIZE 262144, MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: unknown
compiled with gcc 12.2.0
```
* **Operating System**:
```
* Debian GNU/Linux 12 (bookworm)
* Linux edgar-dev 6.1.0-20-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64 GNU/Linux
```
--
https://github.com/kamailio/kamailio/issues/3823