Hi all,
I was reviewing the `ims_ipsec_pcscf` code and noticed that this module creates a pool of sockets using different ports (ipsec_max_connections). I'm unclear on the necessity of this approach. Can't we simply create one server listener and one client listener to handle all UE connections? If this is feasible, is there still a need to create the pool at startup?
I'm curious if this is due to an architectural limitation or if the IPSec module can be modified to replace the current implementation with a more efficient one.
Thank you for your insights.
Best regards,
Hossein
Rtpengine support
I am re-posting, as I noticed that no HTML or inline images are showing
on the SR-Users web page
(https://lists.kamailio.org/mailman3/hyperkitty/list/sr-users@lists.kamailio…).
The Wireshark screencap that demonstrates the issue is uploaded at:
https://www.signalogic.com/images/rtpengine_wireshark_capture_timestamp_jum…
All other info remains the same.
Thanks, Jeff
----- Forwarded message from Jeff Brower <jbrower(a)signalogic.com> -----
Date: Fri, 17 May 2024 17:10:10 +0000
From: Jeff Brower <jbrower(a)signalogic.com>
Subject: Fwd: [SR-Users] rtpengine timestamp jumps
To: sr-users(a)lists.kamailio.org
Reposting this. If there is an issue with HTML format and/or wireshark
screen cap and I need to upload that separately somewhere else, please
let me know. Thanks.
-Jeff
----- Forwarded message from Jeff Brower <jbrower(a)signalogic.com> -----
Date: Thu, 09 May 2024 05:32:27 +0000
From: Jeff Brower <jbrower(a)signalogic.com>
Subject: [SR-Users] rtpengine timestamp jumps
To: sr-users(a)lists.kamailio.org
Hi rtpengine experts,
We have some customers processing long multi-party call pcaps using
mediaMin who are reporting large amounts of packets with timestamp
jumps but no packet loss (for instance 10% of packets over a 1 hr 45
min call). For example, in the Wireshark excerpt shown below, packets
6 and 8 sent by rtpengine show a timestamp increment of 640, but
sequence number increment of 1:
[screencap link at
https://www.signalogic.com/images/wireshark_capture_timestamp_jump.png]
In the mediaMin output packet log we typically see sections similar to:
:
:
Seq num 98584 timestamp = 3902252372, rtp pyld len = 33 media-R
Seq num 98585 timestamp = 3902252692, rtp pyld len = 33 media
Seq num 98586 timestamp = 3902253012, rtp pyld len = 33 media-R
Seq num 98587 timestamp = 3902253332, rtp pyld len = 33 media
Seq num 98588 timestamp = 3902253652, rtp pyld len = 33 media-R
Seq num 98589 timestamp = 3902253972, rtp pyld len = 33 media
Seq num 98590 timestamp = 3902254292, rtp pyld len = 33 media
Seq num 98591 timestamp = 3902254612, rtp pyld len = 33 media-R
Seq num 98592 timestamp = 3902254932, rtp pyld len = 33 media
Seq num 98593 timestamp = 3902255252, rtp pyld len = 33 media
Seq num 98594 timestamp = 3902255572, rtp pyld len = 33 media-R
Seq num 98595 timestamp = 3902255892, rtp pyld len = 33 media
Seq num 98596 timestamp = 3902256212, rtp pyld len = 33 media
Seq num 98597 timestamp = 3902256532, rtp pyld len = 33 media
Seq num 98598 timestamp = 3902256852, rtp pyld len = 33 media-R
Seq num 98599 timestamp = 3902257172, rtp pyld len = 33 media
Seq num 98600 timestamp = 3902257492, rtp pyld len = 33 media-R
Seq num 98601 timestamp = 3902257812, rtp pyld len = 33 media
Seq num 98602 timestamp = 3902258132, rtp pyld len = 33 media
Seq num 98603 timestamp = 3902258452, rtp pyld len = 33 media
Seq num 98604 timestamp = 3902258772, rtp pyld len = 33 media-R
Seq num 98605 timestamp = 3902259092, rtp pyld len = 33 media
:
:
where media-R packets are timestamp gap repairs (i.e. frame loss
concealment). The behavior tends to be bursty, but once it gets going
it goes for a while and seems relatively consistent.
Is this expected behavior for rtpengine? If so, is rtpengine in turn
dealing with some type of "slow packet rate" issue from a remote
sender?
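In case it helps anyone inspect their own captures: the pattern described above (sequence numbers advancing by 1 while the timestamp jumps by more than the nominal step) can be flagged with a short script. This is only an illustrative sketch; the function name `find_ts_jumps` and the 320-unit per-packet step are assumptions taken from the log excerpt above, not part of any rtpengine or mediaMin API.

```python
# Hypothetical sketch: scan (seq, timestamp) pairs from a packet log and flag
# timestamp jumps that occur without any sequence-number gap (i.e. no packet
# loss). The nominal increment of 320 is taken from the mediaMin log excerpt
# above, where timestamps advance by 320 per packet.

NOMINAL_TS_INC = 320  # assumed per-packet timestamp step, from the log

def find_ts_jumps(packets, ts_inc=NOMINAL_TS_INC):
    """packets: iterable of (seq, timestamp) tuples in arrival order.
    Returns a list of (seq, ts_delta) for packets whose timestamp delta
    differs from ts_inc even though the sequence number advanced by 1."""
    jumps = []
    prev = None
    for seq, ts in packets:
        if prev is not None:
            seq_delta = seq - prev[0]
            ts_delta = ts - prev[1]
            if seq_delta == 1 and ts_delta != ts_inc:
                jumps.append((seq, ts_delta))
        prev = (seq, ts)
    return jumps

# Example mirroring the Wireshark excerpt: one packet with a 640 jump
log = [(100, 0), (101, 320), (102, 960), (103, 1280)]
print(find_ts_jumps(log))  # [(102, 640)]
```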
Thanks, Jeff
----- End forwarded message -----
----- End forwarded message -----
We've noticed an issue with UDP endpoints and the usrloc module not sending keepalive packets to them for a long time. Eventually, usrloc will begin doing it, but sometimes it's anywhere between 10 minutes and multiple hours before the endpoint starts receiving an OPTIONS packet.
TCP and TLS endpoints do not have this issue - they begin receiving OPTIONS once registration + ka_timeout has elapsed.
Has anyone else seen this? It can be very easily seen with a pcap on a device registering via UDP.
Hi everyone,
I have a kamailio configured to send invites from different phone numbers registered in kamailio to different endpoints.
The configuration that I have right now is a big if chain with the phone numbers that then sends the invite using dispatcher module.
Example:
If (phone number in invite = phone number X) { dispatcher sends to endpoint X}
If (phone number in invite = phone number Y) { dispatcher sends to endpoint Y}
If (phone number in invite = phone number Z) { dispatcher sends to endpoint Z}
I'm not sure if this is the correct way of doing this, or if there is a module that can do it while letting me configure the mapping in a database instead of in the Kamailio configuration.
I started doing it like this in our test environment while I was learning how to use Kamailio, and I'd like to know if this is the right way to do it or if I should use another method.
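For what it's worth, one common pattern is to keep the number-to-endpoint mapping in a database-backed hash table and use it to pick a dispatcher destination set, instead of an if chain. The sketch below only illustrates the idea; the table name `numroute`, the htable parameters, and the db_url values are assumptions, not a tested configuration:

```cfg
loadmodule "dispatcher.so"
loadmodule "htable.so"

# destination sets (one per endpoint) live in the 'dispatcher' DB table
modparam("dispatcher", "db_url", "mysql://user:pass@localhost/kamailio")
# hypothetical 'numroute' DB table mapping a number to its dispatcher set ID
modparam("htable", "db_url", "mysql://user:pass@localhost/kamailio")
modparam("htable", "htable", "numroute=>size=8;dbtable=numroute;")

request_route {
    if (is_method("INVITE")) {
        # look up the dispatcher set ID stored for the called number
        if ($sht(numroute=>$rU) != $null) {
            if (!ds_select_dst("$sht(numroute=>$rU)", "4")) {
                send_reply("404", "No destination");
                exit;
            }
            t_relay();
        }
    }
}
```

Adding a new number then becomes a row in the `numroute` table (and, if needed, a new row in `dispatcher`), with no config reload logic beyond the modules' own RPC reload commands.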
Thank you in advance
Hi,
I recently updated my testing env to 5.8.1 and since then I’ve been receiving a dns_cache.c warning, “record not alone”.
I found out that it was added in a fairly recent commit: https://github.com/kamailio/kamailio/commit/d8a35b3b6c837b36779e232b65fce61…
That’s probably why I haven’t seen it before.
As far as I know, I have no other problem with dns or anything else than the warning appearing in the logs 😊
I can’t quite work out what the warning is about.
It clearly refers to DNS queries, and with debug logging I found out that it may be related to an IP address that has several domains pointing to it (and which this Kamailio queries via one of those domains). Could it be something with the PTR record?
The warning appears once every hour, so probably when some TTL expires; in the meantime the cache seems fine with the situation.
And as a background, I haven’t done any specific dns-configurations in my config, so settings should be pretty much in defaults.
Any advice on what I should fix here to make the warning disappear?
-Pyry
Hi,
Hope to find you well,
I have a question about the 'db_flatstore' module in Kamailio. Here's my
script for log rotation:
# rename the current logs so flatstore can reopen fresh ones
for f in acc_cdrs_*.log; do
mv -- "$f" "${f%.log}.temp"
done
# tell Kamailio's db_flatstore to close the old files and open new ones
docker exec -it pcscf kamcmd flatstore.k_rotate
# remove only the renamed files, not any other *.temp in the directory
rm -- acc_cdrs_*.temp
I want to know whether the kamcmd command is blocking, so that the files
can safely be removed as soon as it returns, or whether I should wait for
the rotation to complete before removing the renamed files.
Regards,
Farzaneh
Dear IMS enthusiasts :-)
Does anyone have experience with Kamailio IMS (CSCF) setups where users with different domain names can be registered at one and the same S-CSCF?
Is this feasible at all?
I mean one IMS serving users <user>@ims1.example.com, <user>@ims2.example.com, <user>@ims3.example.com, ... on one and the same S-CSCF, I-CSCF and P-CSCF.
Thanks,
Christoph
Hi there,
We are encountering consistent segfaults after restarting our Kamailio instance while it is receiving traffic, specifically with Kamailio 5.7.4. We believe this issue did not occur with version 5.7.2, so it seems to have been introduced in either 5.7.3 or 5.7.4.
Due to team bandwidth constraints and the potential impact on production traffic, we don't want to spend time trying to reproduce the issue, so we have decided to downgrade to 5.6.4, which we confirmed to be stable. (5.7.2 would probably be stable too, but we didn't try it.)
Unfortunately, our logging was only set to WARNING level, and we did not capture a core dump, so we cannot provide additional details beyond the following logs:
This was with tcp_reuse_ports=yes:
2024-05-17T15:42:55.582475541Z Listening on
2024-05-17T15:42:55.582512370Z [redacted]
2024-05-17T15:42:55.582538161Z tls: 10.X.X.X:5061 advertise Y.Y.Y:5061
2024-05-17T15:42:55.582543750Z Aliases:
2024-05-17T15:42:55.582549081Z tls: [redacted]:5061
2024-05-17T15:42:55.582574890Z
2024-05-17T15:42:55.587876630Z 0(1) WARNING: tls [tls_init.c:978]: tls_h_mod_init_f(): openssl bug #1491 (crash/mem leaks on low memory) workaround enabled (on low memory tls operations will fail preemptively) with free memory thresholds 18874368 and 9437184 bytes
2024-05-17T15:42:55.703927049Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 23
2024-05-17T15:42:55.703972029Z 0(1) ALERT: <core> [main.c:791]: handle_sigs(): child process 15 exited by a signal 11
2024-05-17T15:42:55.703978409Z 0(1) ALERT: <core> [main.c:795]: handle_sigs(): core was generated
2024-05-17T15:42:55.705049839Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 17
2024-05-17T15:42:55.705074209Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 21
2024-05-17T15:42:55.705081209Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 22
2024-05-17T15:42:55.705085879Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 20
2024-05-17T15:42:55.705090319Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 18
2024-05-17T15:42:55.705094649Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 19
2024-05-17T15:42:55.705098879Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 16
2024-05-17T15:42:55.705207399Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 15
2024-05-17T15:42:55.705459439Z 35(41) CRITICAL: <core> [core/pass_fd.c:281]: receive_fd(): EOF on 27
Without tcp_reuse_ports=yes, the segfault was always preceded by the following line if any existing TLS connections were stuck in TIME_WAIT:
2024-05-16T19:18:51.654447639Z 9(14) WARNING: {1 1 INVITE XXX(a)0.0.0.0} <core> [core/tcp_main.c:1301]: find_listening_sock_info(): binding to source address 10.X.X.X:5061 failed: Address already in use [98]
2024-05-16T19:18:51.746994728Z 0(1) ALERT: <core> [main.c:791]: handle_sigs(): child process 14 exited by a signal 11
When the server wasn't handling any traffic, the issue didn't occur even in 5.7.4.
Does anyone have any insights or suggestions on how to address this issue?
Kind regards
Stefan