<p>Thank you <a class="user-mention" data-hovercard-type="user" data-hovercard-url="/hovercards?user_id=3033729" data-octo-click="hovercard-link-click" data-octo-dimensions="link_type:self" href="https://github.com/charlesrchance">@charlesrchance</a>, I ran some tests with this setup:<br>
kamailio.cfg (relevant lines):</p>
<pre><code>fork=yes
children=1
tcp_connection_lifetime=3605
pv_buffer_size=8192

# ----- dmq params -----
modparam("dmq", "server_address", DMQ_SERVER_ADDRESS)
modparam("dmq", "notification_address", DMQ_NOTIFICATION_ADDRESS)
modparam("dmq", "multi_notify", 1)
modparam("dmq", "num_workers", 1)
modparam("dmq", "ping_interval", 15)
modparam("dmq", "worker_usleep", 1000)

# ----- htable params -----
modparam("htable", "enable_dmq", 1)
modparam("htable", "dmq_init_sync", 1)
modparam("htable", "htable", "ht=>size=16;dmqreplicate=1;autoexpire=10800;")               # Keep track of concurrent channels for accounts. Should be same as dialog
modparam("htable", "htable", "ht1=>size=16;dmqreplicate=1;autoexpire=10800;")               # Keep track of concurrent channels for accounts. Should be same as dialog
modparam("htable", "htable", "ht2=>size=16;dmqreplicate=1;autoexpire=10800;")               # Keep track of concurrent channels for accounts. Should be same as dialog
modparam("htable", "htable", "ht3=>size=16;dmqreplicate=1;autoexpire=10800;")               # Keep track of concurrent channels for accounts. Should be same as dialog

#!define ONEK "really 1 k chars, believe me :)"

event_route[htable:mod-init] {
  $var(name) = POD_NAME + "\n";
  xlog("L_ALERT", "$var(name)");
  if(POD_NAME == "kama-0") {
    $var(count) = 0;
    while($var(count) < 99) {
      $sht(ht=>$var(count)) = ONEK;
      $sht(ht1=>$var(count)) = ONEK;
      $sht(ht2=>$var(count)) = ONEK;
      $sht(ht3=>$var(count)) = ONEK;
      $var(count) = $var(count)+1;
    }
  }
}

request_route {
  if ($rm == "KDMQ"){
    dmq_handle_message();
  }
  exit;
}
</code></pre>
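<p>For a sense of scale, here is a rough back-of-the-envelope estimate (my own, not from the issue) of the data each joining node pulls during <code>dmq_init_sync</code>, assuming the ONEK value really is ~1024 bytes and ignoring JSON framing and KDMQ/SIP transport overhead, which will inflate the real numbers:</p>

```python
# Rough sizing of the htable payload synced on dmq_init_sync.
# Assumes each ONEK value is ~1024 bytes; JSON and transport
# overhead are ignored, so real traffic will be somewhat larger.
TABLES = 4        # ht, ht1, ht2, ht3
ENTRIES = 99      # keys 0..98 seeded in event_route[htable:mod-init]
VALUE_BYTES = 1024

per_node = TABLES * ENTRIES * VALUE_BYTES
print(per_node)                    # bytes pulled by one joining node -> 405504
print(per_node * 1000 // 2**20)    # ~MiB served by kama-0 for ~1000 nodes -> 386
```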
<p>Started kama-0, which now holds 4 htables of ~99 KB each (99 entries of ~1 KB)<br>
Started 10 Kubernetes pods and launched kamailio 100 times on each pod, with a 3-second timeout per run<br>
So roughly 1000 kamailio instances tried to pull these htables from kama-0<br>
I didn't see any dangerous CPU spike, and the loop doesn't happen anymore.</p>
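<p>The per-pod launcher isn't shown above; it was presumably something like the loop below. The kamailio invocation is a stand-in (path and flags are guesses); <code>sleep</code> substitutes for kamailio here so the snippet is self-contained and the <code>timeout</code> behaviour is visible:</p>

```shell
#!/bin/sh
# Hypothetical per-pod load loop: start kamailio, kill it after the
# timeout, repeat. The real command would be something like:
#   timeout 3 kamailio -DD -E -f /etc/kamailio/kamailio.cfg
# 'sleep 5' stands in for kamailio; the actual test used 100 iterations.
for i in 1 2 3; do
  timeout 1 sleep 5
  echo "run $i exited with $?"   # 124 means killed by timeout
done
```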
<p>There's something I'm worried about, though: the memory usage of the DMQ worker (measured from top), which usually stays around 0.1%, is now stable at 1.4% and doesn't go down again.</p>
<p>I fear there's a memory leak somewhere, but I'm not sure where. While debugging the loop issue I had some doubts about how the json_t structures are freed, but that could just be my limited knowledge of the code. Can you give us any hints on what would help you understand this issue better?</p>
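<p>One way to tell a true leak from a one-time plateau (e.g. allocator caching after the sync burst) is to poll the DMQ worker's RSS over time and see whether it keeps climbing. A minimal sketch, assuming you read <code>/proc/&lt;pid&gt;/status</code> on Linux; the PID and sample text below are made up:</p>

```python
import re

def vmrss_kb(status_text):
    """Return the VmRSS value (kB) from /proc/<pid>/status contents, or None."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

# In practice, poll the real worker process periodically, e.g.:
#   with open(f"/proc/{dmq_worker_pid}/status") as f:
#       print(vmrss_kb(f.read()))
sample = "Name:\tkamailio\nVmPeak:\t   90000 kB\nVmRSS:\t   14336 kB\n"
print(vmrss_kb(sample))  # -> 14336
```

<p>If your Kamailio version provides the <code>pkg.stats</code> RPC, <code>kamcmd pkg.stats</code> also reports per-process private (pkg) memory, which helps distinguish a pkg leak in the worker from shared-memory growth.</p>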
<p>Thanks</p>
