[Serusers] Mediaproxy consuming RAM resources.
Java Rockx
javarockx at gmail.com
Fri Mar 11 20:08:52 CET 2005
Ricardo,
I think the release notes for mediaproxy-1.2 say that some resource
leaks have been fixed, so perhaps you can try version 1.2. I have
also put a couple of notes inline below.
Regards,
Paul
On Fri, 11 Mar 2005 16:01:59 -0400, Ricardo Martinez
<rmartinez at redvoiss.net> wrote:
> Hello list.
> I was looking at the processes on my SER sip-proxy with mediaproxy running.
> Every time I look, the percentage of used RAM has increased. It seems that
> for some reason mediaproxy is not releasing the RAM it has used. You
> can see this in the following lines:
>
>   PID USER PRI NI SIZE  RSS SHARE STAT %CPU %MEM  TIME CPU COMMAND
>  1952 root  15  0 284M 266M  1628 S     0.0 26.4 19:22   0 mediaproxy.py
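
If you want to confirm the leak (and to check whether 1.2 behaves
better) you can log the resident set size of the process over time.
Below is a small sketch of mine; rss_watch.py is not part of
mediaproxy, just a hypothetical throwaway helper that reads /proc on
Linux:

    #!/usr/bin/env python
    # rss_watch.py - hypothetical helper, not part of mediaproxy:
    # print the resident set size of a process once a minute so you
    # can see whether it keeps growing.
    import sys
    import time

    def rss_kb(pid):
        # /proc/<pid>/status contains a line like "VmRSS:  272384 kB"
        for line in open('/proc/%d/status' % pid):
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
        return 0

    if __name__ == '__main__':
        pid = int(sys.argv[1])          # e.g. 1952, the PID from top
        while 1:
            sys.stdout.write('%s %d kB\n'
                             % (time.strftime('%H:%M:%S'), rss_kb(pid)))
            sys.stdout.flush()
            time.sleep(60)

Run it as "./rss_watch.py 1952 > rss.log" and let it sit for a few
hours; a steadily growing number would confirm the leak.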
>
> Can someone tell me if this is a bug in mediaproxy or maybe a
> misconfiguration in my ser.cfg?
>
> I also want to ask a question about tearing down sessions with
> mediaproxy. In my ser.cfg I have:
> if (loose_route()) {
>     if (method=="BYE" || method=="CANCEL") {
>         # BYE or CANCEL received --> end the media session
>         log(1, "NAT: BYE or CANCEL received --> ending media session\n");
>         end_media_session();
>         setflag(1);
>     };
>     t_relay();
>     break;
> };
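
One note here (this is an assumption on my part, I have not checked it
against the rest of your config): a CANCEL for the initial INVITE
carries no Route set, so loose_route() normally never matches it and
this branch is in practice only reached for BYEs. Calls that are
cancelled before being answered would then leave their mediaproxy
session hanging until the idle timeout, which could explain entries
like the "inactive ... 0/0/0" one below. A sketch of what I mean, to
go in the main route block where CANCEL is handled (adjust to your
setup):

    if (method=="CANCEL") {
        # CANCEL has no Route set, so it never takes the
        # loose_route() branch above; end the media session here
        log(1, "NAT: CANCEL received --> ending media session\n");
        end_media_session();
    };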
>
> Is this ok? I think that in some cases (I'm still unable to determine
> which ones) SER does not tear down a mediaproxy session. For example,
> you can see these two sessions below; both of them were ended, but they
> still appear to be active. (I omitted the source, via and destination
> IPs in the output of session.py.)
>
> Status       Duration  Codec    Type   Traffic
> ------------------------------------------------------------
> hold (2430)    40'40"  G729     Audio  11.53k/29.58k/18.45k
> inactive        0'02"  Unknown  Audio  0/0/0
>
> Can someone tell me what's happening?
> I'm running:
> [root mediaproxy]# ./mediaproxy.py --version
> mediaproxy.py 1.0
>
> Thanks in advance.
> Regards,
>
> Ricardo M.
>
> _______________________________________________
> Serusers mailing list
> serusers at lists.iptel.org
> http://lists.iptel.org/mailman/listinfo/serusers
>