[SR-Users] : high shared memory usage

Laurent Schweizer laurent.schweizer at peoplefone.com
Wed Oct 24 15:54:09 CEST 2018


One more piece of info on the shared memory usage:

        tcpconn_new(957): -1274304104



kamcmd mod.stats all shm
Module: core
{
        sip_msg_shm_clone(495): 196272
        msg_lump_cloner(984): 55648
        create_avp(175): 424
        tcpconn_new(957): -1274304104
        build_req_buf_from_sip_req(2162): 42704
        counters_prefork_init(207): 51200
        cfg_clone_str(130): 160
        cfg_shmize(217): 984
        init_pt(113): 8
        init_pt(108): 8
        init_pt(107): 7000
        cfg_parse_str(906): 80
        register_timer(1012): 360
        cfg_register_ctx(47): 96
        init_tcp(4649): 8192
        init_tcp(4643): 32768
        init_tcp(4635): 8
        init_tcp(4628): 8
        init_tcp(4621): 8
        init_tcp(4615): 8
        init_tcp(4603): 8
        init_avps(90): 8
        init_avps(89): 8
        init_dst_blacklist(437): 16384
        init_dst_blacklist(430): 8
        timer_alloc(515): 96
        init_dns_cache(369): 8
        init_dns_cache(361): 16384
        init_dns_cache(354): 16
        init_dns_cache(345): 8
        init_timer(284): 8
        init_timer(283): 16384
        init_timer(282): 8
        init_timer(281): 8
        init_timer(270): 8
        init_timer(238): 8
        init_timer(221): 278544
        init_timer(220): 8
        init_timer(207): 8
        cfg_child_cb_new(830): 64
        sr_cfg_init(361): 8
        sr_cfg_init(354): 8
        sr_cfg_init(347): 8
        sr_cfg_init(335): 8
        sr_cfg_init(323): 8
        shm_core_lock_init(153): 8
        Total: -1273580144
}
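
The negative numbers for tcpconn_new(957) and for the Total look to me like a signed 32-bit counter that wrapped around once the bytes allocated from that call site went past 2^31. A minimal sketch of that reading (my assumption; if the counter wrapped more than once, the real figure is even higher):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* value printed by "kamcmd mod.stats all shm" for tcpconn_new(957) */
        int32_t printed = -1274304104;

        /* reinterpret it as an unsigned 32-bit byte count,
           assuming the counter wrapped exactly once */
        uint32_t bytes = (uint32_t)printed;

        printf("%" PRIu32 " bytes ~= %.2f GiB\n",
               bytes, bytes / (1024.0 * 1024 * 1024));
        /* prints: 3020663192 bytes ~= 2.81 GiB */
        return 0;
    }

If that reading is right, the TCP/TLS connection structures alone account for roughly 2.8 GiB or more of the shared memory.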

From: sr-users <sr-users-bounces at lists.kamailio.org> On behalf of Laurent Schweizer
Sent: Wednesday, 24 October 2018 15:24
To: sr-users at lists.kamailio.org
Subject: [SR-Users] : high shared memory usage

Dear all,

I have 2 proxies running, and there is a very big difference in memory usage between them; moreover, we have more connected users on the proxy with the lower memory usage.

We have this issue on the first proxy; today we had to increase the shared memory, as it was no longer possible to set up new TCP connections.

We use the same version of the proxy and the script is nearly the same, with one difference: on the proxy with the issue we increased the maximum number of TLS connections to 20K.

Any idea?
Is TLS consuming more memory than TCP or UDP?
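
My rough back-of-the-envelope guess for the TLS side, assuming something like 50 KB of shared memory per TLS connection (OpenSSL context plus read/write buffers; an assumed ballpark, not a measured number):

    #include <stdio.h>

    int main(void) {
        long long max_tls_conns = 20000;        /* the 20K limit mentioned above */
        long long shm_per_conn  = 50 * 1024;    /* ~50 KB each: assumed, not measured */

        printf("~%lld MB of shm if all slots are used\n",
               max_tls_conns * shm_per_conn / (1024 * 1024));
        /* prints: ~976 MB of shm if all slots are used */
        return 0;
    }

So TLS should indeed cost noticeably more shared memory than plain TCP or UDP, but by itself that would not explain ~15 GB of usage.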

BR

Laurent

First proxy (with high memory usage):


[cycle #: 3; if constant make sure server lives]
Kamailio Runtime Details:
kamailio 5.0.6 (x86_64/linux)
    now:  Wed Oct 24 15:18:55 2018
    up_since: Wed Oct 24 13:21:12 2018
    uptime: 7063

Transaction Statistics:
    tmx:UAS_transactions = 56240            tmx:active_transactions = 4
    tmx:UAC_transactions = 0                tmx:inuse_transactions = 18

Stateless Server Statistics:
    sl:sent_replies = 134317        sl:sent_err_replies = 0

UsrLoc Statistics:
    usrloc:location_ch-contacts = 1463      usrloc:location_ch-users = 1181
    usrloc:location_ch-expires = 1076       usrloc:registered_users = 1181

Core Statistics:
    core:rcv_requests = 214220      core:rcv_replies = 179145
    core:fwd_requests = 5495        core:fwd_replies = 73

Shared Memory Statistics:
    shmem:fragments = 529                   shmem:real_used_size = 15862690496
    shmem:free_size = 36566109504           shmem:total_size = 52428800000
    shmem:max_used_size = 16198927224       shmem:used_size = 15854961104
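
In more familiar units (simple byte-to-GiB conversion), that is roughly:

        total_size      52428800000 B  ~ 48.8 GiB
        real_used_size  15862690496 B  ~ 14.8 GiB
        free_size       36566109504 B  ~ 34.1 GiB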



cat /proc/net/netstat | cut -d" " -f12-17 ; cat /proc/net/sockstat
TW TWRecycled TWKilled PAWSPassive PAWSActive PAWSEstab
339118 0 684 3195 0 4648
InBcastOctets OutBcastOctets InCsumErrors InNoECTPkts InECT1Pkts InECT0Pkts
14620680 0 0 19142166496 9683 133135
sockets: used 1951
TCP: inuse 1029 orphan 0 tw 2 alloc 1036 mem 15
UDP: inuse 540 mem 446
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0



-------------------------------------------


Second proxy (memory is OK):

[cycle #: 2; if constant make sure server lives]
Kamailio Runtime Details:
kamailio 5.0.6 (x86_64/linux)
    now:  Wed Oct 24 15:19:39 2018
    up_since: Mon Jul  2 00:31:48 2018
    uptime: 9902871

Transaction Statistics:
    tmx:UAS_transactions = 86712048         tmx:active_transactions = 92
    tmx:UAC_transactions = 0                tmx:inuse_transactions = 178

Stateless Server Statistics:
    sl:sent_replies = 641635175     sl:sent_err_replies = 0

UsrLoc Statistics:
    usrloc:location-contacts = 3971         usrloc:location-users = 3845
    usrloc:location-expires = 286685        usrloc:registered_users = 3845

Core Statistics:
    core:rcv_requests = 752644955           core:rcv_replies = 1181197213
    core:fwd_requests = 14482059            core:fwd_replies = 206909

Shared Memory Statistics:
    shmem:fragments = 28437                 shmem:real_used_size = 56447456
    shmem:free_size = 2091036192            shmem:total_size = 2147483648
    shmem:max_used_size = 82099768          shmem:used_size = 43354160
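
The same conversion for the second proxy: real_used_size 56447456 B is about 54 MiB out of a total_size of 2147483648 B = 2 GiB.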


cat /proc/net/netstat | cut -d" " -f12-17 ; cat /proc/net/sockstat
TW TWRecycled TWKilled PAWSPassive PAWSActive PAWSEstab
273625 0 0 0 0 3524
InBcastOctets OutBcastOctets InCsumErrors InNoECTPkts InECT1Pkts InECT0Pkts
2010 0 1 179031265416 81564 21432766
sockets: used 4964
TCP: inuse 576 orphan 0 tw 3 alloc 681 mem 105
UDP: inuse 3908 mem 3117
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0