I didn't set any global parameter, just the defaults.
Yes, I can reproduce it consistently: I send an INVITE and then a CANCEL
while Kamailio is still trying the destinations (serial forking to non-existent IPs).
After I send the CANCEL, Kamailio crashes.
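That failover setup can be approximated with a minimal config along these lines (a sketch only; the route name and the RFC 5737 test addresses are illustrative, not taken from the actual config):

```cfg
loadmodule "tm.so"

request_route {
    if (is_method("CANCEL")) {
        # relay the CANCEL against the pending INVITE transaction
        if (t_check_trans()) {
            t_relay();
        }
        exit;
    }
    if (is_method("INVITE")) {
        t_on_failure("SERIAL");
        $du = "sip:192.0.2.10:5060";   # first destination, unreachable
        t_relay();
        exit;
    }
}

failure_route[SERIAL] {
    if (t_is_canceled()) {
        exit;
    }
    $du = "sip:192.0.2.11:5060";       # next unreachable destination
    t_relay();
}
```

Sending the CANCEL while the serial fork is still working through the unreachable branches is what triggers the crash here.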
Below is the syslog from the MEMDBG=1 build:
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22191]: WARNING: tm
[t_lookup.c:1564]: t_unref(): WARNING: script writer didn't release
transaction
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22189]: : <core>
[mem/q_malloc.c:468]: qm_free(): BUG: qm_free: freeing already freed
pointer (0x7f35a60e01e0), called from tm: h_table.c: free_cell(157), first
free tm: h_table.c: free_cell(157) - aborting
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22209]: : <core>
[pass_fd.c:293]: receive_fd(): ERROR: receive_fd: EOF on 13
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22186]: ALERT: <core>
[main.c:788]: handle_sigs(): child process 22189 exited by a signal 6
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22186]: ALERT: <core>
[main.c:791]: handle_sigs(): core was generated
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22186]: ERROR: ctl
[ctl.c:379]: mod_destroy(): ERROR: ctl: could not delete unix socket
/tmp/kamailio_ctl: Operation not permitted (1)
Dec 9 11:12:14 kelvin /usr/local/sbin/kamailio[22186]: : <core>
[mem/q_malloc.c:468]: qm_free(): BUG: qm_free: freeing already freed
pointer (0x7f35a60e01e0), called from tm: h_table.c: free_cell(157), first
free tm: h_table.c: free_cell(157)
Kelvin Chua
On Mon, Dec 9, 2013 at 11:54 PM, Daniel-Constantin Mierla <miconda(a)gmail.com> wrote:
> Can you compile with MEMDBG=1 in Makefile.defs and try again? It might be a
> memory-overwrite issue somewhere. Look in the syslog for memory-related
> messages.
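The rebuild being asked for amounts to roughly the following (a sketch; MEMDBG can equally be edited into Makefile.defs, and paths depend on the installation):

```shell
cd /usr/local/src/kamailio
make cleanall
make MEMDBG=1 cfg    # records the flag in the generated build config
make all
make install
```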
>
> A few more details needed ... do you have the mem_join global parameter set?
> Is the situation reproducible, or does it happen sporadically?
>
> Cheers,
> Daniel
>
>
> On 09/12/13 16:45, Kelvin Chua wrote:
>
> Tried the latest branch 4.0; it also segfaults with a similar backtrace.
>
> Kelvin Chua
>
>
> On Mon, Dec 9, 2013 at 7:13 PM, Daniel-Constantin Mierla <miconda(a)gmail.com>
> wrote:
>
>> Hello,
>>
>> Can you try the latest branch 4.0? There were some fixes for a similar case.
>> A 4.0.5 release is coming soon.
>>
>> Cheers,
>> Daniel
>>
>>
>> On 09/12/13 10:35, Kelvin Chua wrote:
>>
>> The backtrace follows:
>>
>> #0 qm_detach_free (frag=0x7f91a4656170, qm=<optimized out>) at
>> mem/q_malloc.c:269
>> prev = 0x7f91a4656170
>> next = 0x0
>> #1 qm_malloc (qm=0x7f91a429b000, size=3840) at mem/q_malloc.c:386
>> f = 0x7f91a4656170
>> hash = -1536859792
>> #2 0x000000000050977f in shm_malloc (size=3832) at
>> parser/../mem/shm_mem.h:262
>> p = <optimized out>
>> #3 sip_msg_shm_clone (org_msg=0x7f91ae9d7630,
>> sip_msg_len=0x7fff56a2dde0, clone_lumps=0) at sip_msg_clone.c:502
>> len = 3832
>> hdr = <optimized out>
>> new_hdr = <optimized out>
>> last_hdr = <optimized out>
>> via = <optimized out>
>> prm = <optimized out>
>> to_prm = <optimized out>
>> new_to_prm = <optimized out>
>> new_msg = <optimized out>
>> p = <optimized out>
>> __FUNCTION__ = "sip_msg_shm_clone"
>> #4 0x00007f91ac989062 in build_cell (p_msg=0x7f91ae9d7630) at
>> h_table.c:372
>> new_cell = 0x7f91a4656180
>> sip_msg_len = <optimized out>
>> old = <optimized out>
>> cbs = <optimized out>
>> cbs_tmp = <optimized out>
>> xold = <optimized out>
>> #5 0x00007f91ac9b5fc6 in new_t (p_msg=0x7f91ae9d7630) at t_lookup.c:1357
>> new_cell = <optimized out>
>> #6 t_newtran (p_msg=0x7f91ae9d7630) at t_lookup.c:1497
>> lret = <optimized out>
>> my_err = <optimized out>
>> canceled = <optimized out>
>> __FUNCTION__ = "t_newtran"
>> #7 0x00007f91ac9a99e0 in t_forward_cancel (p_msg=0x7f91ae9d7630,
>> proxy=0x0, proto=0, tran=0x7fff56a2e2c0) at t_fwd.c:1607
>> t_invite = <optimized out>
>> t = 0x0
>> ret = <optimized out>
>> new_tran = <optimized out>
>> dst = {send_sock = 0x63371f, to = {s = {sa_family = 54224,
>> sa_data = "\003\255\221\177\000\000\020\300\220\256\221\177\000"}, sin =
>> {sin_family = 54224, sin_port = 44291, sin_addr = {s_addr = 32657},
>> sin_zero = "\020\300\220\256\221\177\000"}, sin6 = {sin6_family = 54224,
>> sin6_port = 44291, sin6_flowinfo = 32657, sin6_addr = {__in6_u =
>> {__u6_addr8 = "\020\300\220\256\221\177\000\000\070\261W\000\000\000\000",
>> __u6_addr16 = {49168, 44688, 32657, 0, 45368, 87, 0, 0}, __u6_addr32 =
>> {2928721936, 32657, 5747000, 0}}}, sin6_scope_id = 0}}, id = 0, proto = 96
>> '`', send_flags = {f = 225 '\341', blst_imask = 162 '\242'}}
>> host = {s = 0x0, len = 0}
>> port = <optimized out>
>> __FUNCTION__ = "t_forward_cancel"
>> #8 0x00007f91ac99ae15 in t_relay_to (p_msg=0x7f91ae9d7630, proxy=0x0,
>> proto=0, replicate=0) at t_funcs.c:264
>> ret = 0
>> new_tran = <optimized out>
>> t = <optimized out>
>> dst = {send_sock = 0x0, to = {s = {sa_family = 0, sa_data =
>> '\000' <repeats 13 times>}, sin = {sin_family = 0, sin_port = 0,
>> sin_addr = {s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"},
>> sin6 = {sin6_family = 0, sin6_port = 0, sin6_flowinfo = 0, sin6_addr =
>> {__in6_u = {__u6_addr8 = '\000' <repeats 15 times>, __u6_addr16 =
>> {0, 0, 0, 0, 0, 0, 0, 0}, __u6_addr32 = {0, 0, 0, 0}}}, sin6_scope_id =
>> 0}}, id = 0, proto = 64 '@', send_flags = {f = 243 '\363',
>> blst_imask = 162 '\242'}}
>> port = <optimized out>
>> host = {s = 0x7fff56a2eab8 "0[*\244\221\177", len = -1398876638}
>> __FUNCTION__ = "t_relay_to"
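A `bt full` trace like the one above comes from loading the core dump into gdb, roughly as follows (the core file path depends on the system's core_pattern setting):

```
# run from the directory where kamailio dumped core
gdb /usr/local/sbin/kamailio ./core
(gdb) bt full        # backtrace with local variables, as shown above
(gdb) frame 3        # inspect a specific frame, e.g. sip_msg_shm_clone
```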
>>
>>
>>
>> Kelvin Chua
>>
>>
>> _______________________________________________
>> SIP Express Router (SER) and Kamailio (OpenSER) - sr-users mailing list
>> sr-users(a)lists.sip-router.org
>> http://lists.sip-router.org/cgi-bin/mailman/listinfo/sr-users
>>
>>
>> --
>> Daniel-Constantin Mierla - http://www.asipto.com
>> http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
>>
>>
>>
>>
>
> --
> Daniel-Constantin Mierla - http://www.asipto.com
> http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
>
>