When Kamailio is unable to ping its RTPEngine instances, it crashes.
We determined the issue was related to the ping failures; after resolving an AWS security group (SG) issue so that the pings succeeded, the crashes went away.
Use the rtpengine module with an RDS database backend (a minimal sketch of the module setup follows the config below).
Load rtpengine with ~20 destinations that you do not expect a response from.
Use rtimer to call rtpengine.reload via jsonrpcs:
modparam("rtimer", "timer", "name=rtpengine;interval=60;mode=0;")
modparam("rtimer", "exec", "timer=rtpengine;route=RELOAD_RTPENGINE")
route[RELOAD_RTPENGINE] {
    xinfo("RTPEngine database reloading\n");
    jsonrpc_exec('{"jsonrpc": "2.0", "method": "rtpengine.reload", "id": 1}');
}
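For context, a minimal sketch of the module setup behind the reproduction, assuming the default rtpengine table schema (setid, url, weight, disabled); the RDS endpoint, credentials and database name are placeholders:

loadmodule "db_mysql.so"
loadmodule "jsonrpcs.so"
loadmodule "rtimer.so"
loadmodule "rtpengine.so"

# destinations are loaded from the database (RDS) instead of static rtpengine_sock entries;
# the db_url host and credentials below are placeholders
modparam("rtpengine", "db_url", "mysql://kamailio:xxxx@rds-endpoint.example.com/kamailio")
modparam("rtpengine", "table_name", "rtpengine")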
Got a core dump, although I deleted it quickly (not thinking) because the dumps were running the instances out of disk. bt full did not give much to step through, although it clearly pointed to rtpengine.
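If it reproduces again, something along these lines should capture a usable core without the dumps exhausting the root disk (the core directory is only an example):

ulimit -c unlimited
# write cores to a dedicated volume so they do not fill the root filesystem
echo "/var/cores/core.%e.%p" > /proc/sys/kernel/core_pattern
# after the next crash:
gdb /usr/sbin/kamailio /var/cores/core.kamailio.<pid>
(gdb) bt full
(gdb) info locals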
send_rtpp_command(): timeout waiting reply for command "ping" from RTPEngine <udp:10.0.X.X>
TBD / ensure ping failures do not occur under heavy load (see the parameter sketch below).
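One avenue to explore (not verified as a fix) is making the rtpengine module more tolerant of slow or missing ping replies via its existing timeout parameters; the values below are only illustrative:

# allow more time for a ping/command reply before declaring a timeout (milliseconds)
modparam("rtpengine", "rtpengine_tout_ms", 2000)
# number of retransmissions before giving up on a node
modparam("rtpengine", "rtpengine_retr", 2)
# how long a non-responding node stays disabled before being retried (seconds)
modparam("rtpengine", "rtpengine_disable_tout", 60)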
kamailio -v
version: kamailio 5.8.2 (x86_64/linux)
flags: USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MMAP, PKG_MALLOC, MEM_JOIN_FREE, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLOCKLIST, HAVE_RESOLV_RES, TLS_PTHREAD_MUTEX_SHARED
ADAPTIVE_WAIT_LOOPS 1024, MAX_RECV_BUFFER_SIZE 262144, MAX_SEND_BUFFER_SIZE 262144, MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: unknown
compiled with gcc 10.2.1
uname -a
Linux 5.10.0-27-cloud-amd64 #1 SMP Debian 5.10.205-2 (2023-12-31) x86_64 GNU/Linux
cat /proc/version
Linux version 5.10.0-27-cloud-amd64 (debian-kernel@lists.debian.org) (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP Debian 5.10.205-2 (2023-12-31)
lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye