[SR-Users] Evapi dispatcher is consuming all the private memory

Daniel-Constantin Mierla miconda at gmail.com
Tue Aug 20 12:07:05 CEST 2019


Hello,

Thanks for testing and feedback! I backported the patches to branch 5.2.

Cheers,
Daniel

On 16.08.19 18:11, José Seabra wrote:
> Hello Daniel,
>
> Those patches work ok.
> Thank you for your support.
>
> Cheers
> José Seabra
>
> Daniel-Constantin Mierla <miconda at gmail.com> wrote on Friday, 16/08/2019 at 13:26:
>
>     Yes, those are the patches.
>
>     Cheers, Daniel
>
>     On 16.08.19 14:08, José Seabra wrote:
>>     Yes, I need to backport them. Are these the correct patches?
>>
>>       * https://github.com/kamailio/kamailio/commit/4a904b5fe90253d0d5517cf1a5cdca3695e59ec3.patch
>>       * https://github.com/kamailio/kamailio/commit/d7bd8a187546f132547788c47ff9a3947be49957.patch
>>       * https://github.com/kamailio/kamailio/commit/769dce2fad66efdbc6cce6951bc60b60bf453a19.patch
>>
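>>     I would apply them on top of a local 5.2 checkout roughly like
>>     this (just a sketch; the curl/git am invocation is illustrative):
>>
>>         git checkout 5.2
>>         curl -sL https://github.com/kamailio/kamailio/commit/4a904b5fe90253d0d5517cf1a5cdca3695e59ec3.patch | git am
>>         curl -sL https://github.com/kamailio/kamailio/commit/d7bd8a187546f132547788c47ff9a3947be49957.patch | git am
>>         curl -sL https://github.com/kamailio/kamailio/commit/769dce2fad66efdbc6cce6951bc60b60bf453a19.patch | git am
>>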
>>     Thank you
>>     Regards
>>     José
>>
>>     Daniel-Constantin Mierla <miconda at gmail.com> wrote on Friday, 16/08/2019 at 12:36:
>>
>>         Can you try with the latest master? I pushed some patches
>>         there for a proper fix.
>>
>>         If you need to backport, the relevant patches are the ones
>>         applied to the files core/fmsg.{c,h} and to the evapi module,
>>         because I pushed some other patches not related to this issue.
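>>
>>         For example, something along these lines should list the
>>         candidate commits on master (paths assuming the 5.x src/
>>         layout):
>>
>>             git log --oneline origin/master -- src/core/fmsg.c src/core/fmsg.h src/modules/evapi/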
>>
>>         Cheers,
>>         Daniel
>>
>>         On 16.08.19 12:03, José Seabra wrote:
>>>         Updating my last email with the output of the core dump
>>>         generated by Kamailio.
>>>
>>>         Please check out the following link for the core dump
>>>         output: https://pastebin.com/gsyPc0eg
>>>
>>>         Let me know if you need further information
>>>
>>>         Thank you
>>>         José 
>>>
>>>         José Seabra <joseseabra4 at gmail.com> wrote on Friday, 16/08/2019 at 10:37:
>>>
>>>             Hi Daniel,
>>>             After applying your patch I noticed the following situations:
>>>
>>>             When I restart my external application, Kamailio prints
>>>             the following logs:
>>>
>>>                 edge-sip-proxy[13581]: INFO: ESP_LOG: 123: [evapi:connection-closed] EVAPI 127.0.0.1:52906 connection has been closed
>>>                 edge-sip-proxy[13581]: CRITICAL: ESP_LOG: 123: <core> [core/mem/q_malloc.c:514]: qm_free(): BUG: freeing already freed pointer (0x7f0ab9914a88), called from core: core/parser/parse_via.c: free_via_list(2738), first free core: core/parser/parse_via.c: free_via_list(2738) - ignoring
>>>                 edge-sip-proxy[13581]: CRITICAL: ESP_LOG: 123: <core> [core/mem/q_malloc.c:514]: qm_free(): BUG: freeing already freed pointer (0x7f0ab99103c0), called from core: core/parser/hf.c: free_hdr_field_lst(217), first free core: core/parser/hf.c: free_hdr_field_lst(217) - ignoring
>>>                 edge-sip-proxy[13581]: INFO: ESP_LOG: 123: evapi [evapi_dispatch.c:381]: evapi_recv_client(): client closing connection - pos [0] addr [127.0.0.1:52906]
>>>                 edge-sip-proxy[13581]: INFO: ESP_LOG: 123: [evapi:connection-new] New EVAPI connection from 127.0.0.1:52980
>>>                 edge-sip-proxy[13581]: CRITICAL: ESP_LOG: 123: <core> [core/mem/q_malloc.c:514]: qm_free(): BUG: freeing already freed pointer (0x7f0ab9914a88), called from core: core/parser/parse_via.c: free_via_list(2738), first free core: core/parser/parse_via.c: free_via_list(2738) - ignoring
>>>                 edge-sip-proxy[13581]: CRITICAL: ESP_LOG: 123: <core> [core/mem/q_malloc.c:514]: qm_free(): BUG: freeing already freed pointer (0x7f0ab99103c0), called from core: core/parser/hf.c: free_hdr_field_lst(217), first free core: core/parser/hf.c: free_hdr_field_lst(217) - ignoring
>>>
>>>             Then, when Kamailio starts sending/receiving events
>>>             to/from the external application, it crashes and prints
>>>             the following logs:
>>>
>>>                 edge-sip-proxy[13571]: INFO: ESP_LOG: 7-32554 at 10.225.121.206: FUZECI1-x1006 registered
>>>                 edge-sip-proxy[13581]: INFO: ESP_LOG: 123: [evapi:connection-received] EVAPI msg received -> {"aor":"sip:FUZECI1-x1000 at uc.fuze.domain","contact":"<sip:FUZECI1-x1000 at 10.225.121.206:5060;transport=UDP>;expires=172800"}
>>>                 edge-sip-proxy[13581]: CRITICAL: ESP_LOG: 123: <core> [core/parser/hf.c:200]: clean_hdr_field(): BUG: unknown header type 1885958972
>>>                 kernel: traps: edge-sip-proxy[13581] general protection ip:6a8dac sp:7ffdebb1a8b0 error:0 in kamailio[400000+460000]
>>>                 edge-sip-proxy[13589]: CRITICAL: <core> [core/pass_fd.c:277]: receive_fd(): EOF on 27
>>>                 edge-sip-proxy[13567]: ALERT: <core> [main.c:756]: handle_sigs(): child process 13581 exited by a signal 11
>>>                 edge-sip-proxy[13567]: ALERT: <core> [main.c:759]: handle_sigs(): core was not generated
>>>                 edge-sip-proxy[13567]: INFO: <core> [main.c:781]: handle_sigs(): terminating due to SIGCHLD
>>>                 edge-sip-proxy[13573]: INFO: ESP_LOG: 317f64550726afacb315390fd37ece05: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13568]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13576]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13571]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13577]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13579]: INFO: ESP_LOG: 123: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13572]: INFO: ESP_LOG: 0dcb5707418c4142-13572 at 10.225.121.128: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13575]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13582]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13583]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13578]: INFO: ESP_LOG: 123: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13580]: INFO: ESP_LOG: 123: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13586]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13584]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13585]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13570]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13587]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13569]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13588]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13574]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13589]: INFO: <core> [main.c:836]: sig_usr(): signal 15 received
>>>                 edge-sip-proxy[13567]: INFO: <core> [core/sctp_core.c:53]: sctp_core_destroy(): SCTP API not initialized
>>>
>>>
>>>             Best Regards
>>>             José Seabra
>>>
>>>             Daniel-Constantin Mierla <miconda at gmail.com> wrote on Thursday, 15/08/2019 at 07:36:
>>>
>>>                 Hello,
>>>
>>>                 can you try with the master branch? The issue should
>>>                 be fixed by this commit:
>>>
>>>                   * https://github.com/kamailio/kamailio/commit/d36a4b00c6becd42cfcec4fc7eb398371e1cc812
>>>
>>>                 If it works ok, then I will backport to the stable branches.
>>>
>>>                 Cheers,
>>>                 Daniel
>>>
>>>                 On 14.08.19 12:19, José Seabra wrote:
>>>>                 Hello,
>>>>                 Please find the logs at the following link:
>>>>
>>>>                   * https://pastebin.com/h2UNHwYC
>>>>
>>>>                 Let me know if you need something else.
>>>>
>>>>                 Cheers
>>>>                 José Seabra
>>>>
>>>>                 Daniel-Constantin Mierla <miconda at gmail.com> wrote on Wednesday, 14/08/2019 at 10:59:
>>>>
>>>>                     Hello,
>>>>
>>>>                     that's the memory status, printing details for each
>>>>                     allocated chunk. I could spot a lot of chunks from
>>>>                     the parser and the get-header-field functions, but to
>>>>                     be sure there is nothing else, can you set the global
>>>>                     parameter:
>>>>
>>>>                     mem_summary=12
>>>>
>>>>                     and then reproduce again? The logs printed to syslog
>>>>                     should be smaller, giving a summary of the allocated
>>>>                     chunks - how many times and what size for each
>>>>                     location in the code.
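>>>>
>>>>                     For reference, the relevant globals would look roughly
>>>>                     like this in kamailio.cfg (a minimal sketch; the debug
>>>>                     and memlog values are only illustrative):
>>>>
>>>>                         # illustrative snippet of kamailio.cfg globals
>>>>                         debug=2
>>>>                         memlog=1          # keep memlog lower than debug so the dumps are logged
>>>>                         mem_summary=12    # dump the summary of allocated chunks at shutdown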
>>>>
>>>>                     Cheers,
>>>>                     Daniel
>>>>
>>>>                     On 14.08.19 11:51, José Seabra wrote:
>>>>>                     Hello Daniel,
>>>>>                     Please find attached the zip file with the
>>>>>                     pkg_summary output.
>>>>>
>>>>>                     Let me know if you need something else.
>>>>>
>>>>>                     Thank you for your support
>>>>>
>>>>>                     Cheers
>>>>>                     José Seabra
>>>>>
>>>>>                     Daniel-Constantin Mierla <miconda at gmail.com> wrote on Wednesday, 14/08/2019 at 10:03:
>>>>>
>>>>>                         Hello,
>>>>>
>>>>>                         can you enable the pkg summary and generate it
>>>>>                         once the out of memory happens?
>>>>>
>>>>>                         Set memlog to a lower value than debug and, when
>>>>>                         the out of memory happens, run the RPC command
>>>>>                         documented here:
>>>>>
>>>>>                         https://www.kamailio.org/docs/modules/devel/modules/corex.html#corex.rpc.pkg_summary
>>>>>
>>>>>                         Then send some traffic to that process and
>>>>>                         watch the syslog.
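>>>>>
>>>>>                         For example, something like this should trigger
>>>>>                         the dump for the EvAPI dispatcher (assuming the
>>>>>                         RPC takes a pid attribute as per the docs above;
>>>>>                         the pid is the one from your pkg.stats output):
>>>>>
>>>>>                             kamcmd corex.pkg_summary pid 20565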
>>>>>
>>>>>                         Alternatively, enable the memory summary via the
>>>>>                         core parameter and stop Kamailio when the errors
>>>>>                         happen, but you will get the summary for all
>>>>>                         processes, so it might be harder to extract the
>>>>>                         log messages related to the evapi process.
>>>>>
>>>>>                         Cheers,
>>>>>                         Daniel
>>>>>
>>>>>                         On 14.08.19 10:37, José Seabra wrote:
>>>>>>                         Hello there,
>>>>>>                         I'm using Kamailio 5.2.1 with the evapi module
>>>>>>                         to send and receive events from an external
>>>>>>                         application.
>>>>>>                          
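>>>>>>                         The setup is roughly along these lines (a
>>>>>>                         minimal sketch only; the bind address, worker
>>>>>>                         count and route bodies are illustrative, not
>>>>>>                         the exact config):
>>>>>>
>>>>>>                             loadmodule "evapi.so"
>>>>>>                             modparam("evapi", "workers", 1)
>>>>>>                             modparam("evapi", "bind_addr", "127.0.0.1:8448")
>>>>>>
>>>>>>                             # events pushed by the external application
>>>>>>                             event_route[evapi:message-received] {
>>>>>>                                 xlog("L_INFO", "[evapi:connection-received] EVAPI msg received -> $evapi(msg)\n");
>>>>>>                             }
>>>>>>
>>>>>>                             # events pushed towards the external application,
>>>>>>                             # e.g. from the request routing logic:
>>>>>>                             # evapi_relay("{\"aor\":\"$tu\"}");
>>>>>>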
>>>>>>                         During my stress tests against Kamailio I have
>>>>>>                         been noticing that at some point Kamailio runs
>>>>>>                         out of memory:
>>>>>>
>>>>>>                         edge-sip-proxy[20565]: ERROR: ESP_LOG: <null>: <core> [core/mem/q_malloc.c:291]: qm_find_free(): qm_find_free(0x7fb8730b6010, 64); Free fragment not found!
>>>>>>                         edge-sip-proxy[20565]: ERROR: ESP_LOG: <null>: <core> [core/mem/q_malloc.c:425]: qm_malloc(): qm_malloc(0x7fb8730b6010, 64) called from core: core/parser/msg_parser.c: parse_headers(325), module: core; Free fragment not found!
>>>>>>                         edge-sip-proxy[20565]: ERROR: ESP_LOG: <null>: <core> [core/parser/msg_parser.c:328]: parse_headers(): memory allocation error
>>>>>>                         edge-sip-proxy[20565]: ERROR: ESP_LOG: <null>: pv [pv_core.c:704]: pv_get_callid(): cannot parse Call-Id header
>>>>>>                         edge-sip-proxy[20565]: INFO: ESP_LOG: <null>: [evapi:connection-received] EVAPI msg received -> {"aor":"sip:FUZECI1-x1516 at uc.fuze.domain","contact":"<sip:FUZECI1-x1516 at 10.225.121.206:5060;transport=UDP>"}
>>>>>>                         edge-sip-proxy[20565]: ERROR: ESP_LOG: <null>: <core> [core/mem/q_malloc.c:291]: qm_find_free(): qm_find_free(0x7fb8730b6010, 64); Free fragment not found!
>>>>>>                         edge-sip-proxy[20565]: ERROR: ESP_LOG: <null>: <core> [core/mem/q_malloc.c:425]: qm_malloc(): qm_malloc(0x7fb8730b6010, 64) called from core: core/parser/msg_parser.c: parse_headers(325), module: core; Free fragment not found!
>>>>>>
>>>>>>
>>>>>>                         The process with id 20565 is running out of
>>>>>>                         memory, as you can see below:
>>>>>>
>>>>>>
>>>>>>                         output of kamcmd pkg.stats:
>>>>>>
>>>>>>                         entry: 14
>>>>>>                         pid:
>>>>>>                         rank: -2
>>>>>>                         used: 6058696
>>>>>>                         free: 824
>>>>>>                         real_used: 8387784
>>>>>>                         total_size: 8388608
>>>>>>                         total_frags: 30
>>>>>>
>>>>>>
>>>>>>                         This process ID corresponds to the EvAPI
>>>>>>                         Dispatcher:
>>>>>>
>>>>>>
>>>>>>
>>>>>>                         kamcmd ps | grep 20565
>>>>>>
>>>>>>                         20565   EvAPI Dispatcher
>>>>>>
>>>>>>
>>>>>>                         After stopping the SIPp instance that was
>>>>>>                         generating traffic to Kamailio, the evapi
>>>>>>                         process still has all the memory consumed.
>>>>>>
>>>>>>                         Can it be some memory leak in the evapi module?
>>>>>>
>>>>>>                         Best Regards
>>>>>>                         José Seabra
>>>>>>                         -- 
>>>>>>                         Regards
>>>>>>                         José Seabra
>>>>>>
>>>>>>                         _______________________________________________
>>>>>>                         Kamailio (SER) - Users Mailing List
>>>>>>                         sr-users at lists.kamailio.org
>>>>>>                         https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users
>>>>>
>>>>>                         -- 
>>>>>                         Daniel-Constantin Mierla -- www.asipto.com
>>>>>                         www.twitter.com/miconda -- www.linkedin.com/in/miconda
>>>>>
>>>>>
>>>>>
>>>>>                     -- 
>>>>>                     Regards
>>>>>                     José Seabra
>>>>
>>>>                     -- 
>>>>                     Daniel-Constantin Mierla -- www.asipto.com
>>>>                     www.twitter.com/miconda -- www.linkedin.com/in/miconda
>>>>
>>>>
>>>>
>>>>                 -- 
>>>>                 Regards
>>>>                 José Seabra
>>>
>>>                 -- 
>>>                 Daniel-Constantin Mierla -- www.asipto.com
>>>                 www.twitter.com/miconda -- www.linkedin.com/in/miconda
>>>
>>>
>>>
>>>             -- 
>>>             Regards
>>>             José Seabra
>>>
>>>
>>>
>>>         -- 
>>>         Regards
>>>         José Seabra
>>
>>         -- 
>>         Daniel-Constantin Mierla -- www.asipto.com
>>         www.twitter.com/miconda -- www.linkedin.com/in/miconda
>>
>>
>>
>>     -- 
>>     Regards
>>     José Seabra
>
>     -- 
>     Daniel-Constantin Mierla -- www.asipto.com
>     www.twitter.com/miconda -- www.linkedin.com/in/miconda
>
>
>
> -- 
> Regards
> José Seabra

-- 
Daniel-Constantin Mierla -- www.asipto.com
www.twitter.com/miconda -- www.linkedin.com/in/miconda
