[Devel] Re: [Serdev] OpenSER/SER with OSP performance test results

Bogdan-Andrei Iancu bogdan at voice-system.ro
Tue May 8 13:57:28 CEST 2007


Hi Di-Shi,

In particular, I'm still struggling with the PDD chart, for the same
reasons as Olaf - I do not understand the exponential increase of the
PDD value for OpenSER 1.2 at 220 cps.
What puzzles me is that all the response times included in the PDF are
lower for openser than for ser, yet the chart shows the opposite -
strictly referring to the 220 cps case.
 
Based on the explanations, I understand that PDD is the delay between
the INVITE and the final response (200 OK), including all the
sequential tries, right?
So, my question is - how is the PDD computed? I see no reference to it
in the PDF. Is it computed or is it measured?
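To make the question concrete, here is a small sketch (my own
illustration, not part of the test setup) of the PDD implied by the
randomized serial-forking scenario described in the quoted message
below: three silent destinations that each burn a full 2-second
fr_timer, one device that replies 404 immediately, and one good SIPp
server.

```python
from itertools import permutations

FR_TIMER = 2.0  # seconds; the fr_timer value used in the test
# Destination set from the test: 3 silent (timeout) devices, one 404,
# one good SIPp server, tried serially in random order.
DESTINATIONS = ('timeout', 'timeout', 'timeout', 'reject', 'good')

def pdd(order):
    """Post dial delay for one attempt order: each silent destination
    costs a full fr_timer; a 404 reply is assumed to fail over
    immediately, adding no delay."""
    delay = 0.0
    for dest in order:
        if dest == 'good':
            return delay        # answered here; PDD stops accumulating
        if dest == 'timeout':
            delay += FR_TIMER   # wait out fr_timer before failing over
        # 'reject' (404): immediate failover, no added delay
    return delay

# Enumerate all 5! = 120 orderings to get worst-case and mean PDD.
worst = max(pdd(p) for p in permutations(DESTINATIONS))
mean = sum(pdd(p) for p in permutations(DESTINATIONS)) / 120
print(worst, mean)  # 6.0 3.0
```

With three 2-second timeouts, the worst random ordering yields 6
seconds, which matches the 6 sec figure discussed in the quoted
message; the mean over all orderings is 3 seconds.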

Regards,
Bogdan

di-shi at transnexus.com wrote:
> Hi Olaf and Jiri,
>
> Thank you for your comments about the test results.
>
> As we had mentioned, this test was designed to understand the
> performance of OpenSER/SER in production environments. Some of the
> factors were random. The PDD is an indirect measure and not very
> precise, but there are some details that may be useful for
> understanding the PDD graph.
> For every call,
> 1. The source is a SIPp client.
> 2. OpenSER/SER receives the INVITE message from the SIPp client and 
> requests
> routing info from a set of OSP servers.
> 3. There are 5 destinations configured on the OSP servers.
>     a. 3 devices that are unavailable for different reasons. All will
> cause an OpenSER/SER fr_timer timeout, which we set to 2 sec.
>     b. 1 device that rejects the call and replies 404.
>     c. 1 good device: a SIPp server.
>     OpenSER/SER gets these 5 destinations in random order. In the
> worst case, OpenSER/SER tries the SIPp server as the last destination,
> so the PDD should be 6 sec. For the OpenSER 1.1/1.2 tests, it is clear
> that the PDD depends on the load, which is reasonable. For SER, it can
> be explained by the PDD being just a little longer than 6 sec in the
> worst case. The 6 sec threshold is not a good value; it should have
> been set to 6.1 sec. Unfortunately, we did not realize this until we
> had finished the test for SER.
>
> The OpenSER/SER configurations we used in the test can be found under
> module/osp/etc/sample-osp-openser.cfg and
> module/osp/etc/sample-osp-ser.cfg. We only changed the fr_timer to
> 2 sec and set the OSP server IPs and the local device IP.
>
> Thanks
>
> Di-Shi Sun.
>
>
> ----- Original Message -----
> From: "Jiri Kuthan" <jiri at iptel.org>
> To: "Olaf Bergmann" <Olaf.Bergmann at freenet-ag.de>; "Di-Shi Sun"
> <di-shi at transnexus.com>
> Cc: <devel at openser.org>; <serdev at lists.iptel.org>
> Sent: Friday, May 04, 2007 6:29 PM
> Subject: Re: [Serdev] OpenSER/SER with OSP performance test results
>
>
> > At 09:16 04/05/2007, Olaf Bergmann wrote:
> > >Di-Shi Sun wrote:
> > >> Hi All,
> > >>
> > >> We have performed a benchmark test on OpenSER V1.1, V1.2 and SER 
> 2.0 to
> > >> understand and compare the performance of the three releases in a
> > >> simulated production environment.
> > >
> > >Nice, thanks for this interesting piece of work.
> > >
> > >> Summary of the test results:
> > >> ============================
> > >> * The performance of OpenSER V1.2 and SER 2.0 are not materially
> > >> different, however, there are two minor differences.
> > >>   - SER V2.0 requires less memory.
> > >>   - OpenSER V1.2 has less post dial delay.
> > >
> > >Could you please comment on the PDD graph? For my understanding, the
> > >6+ seconds are caused by your failure scenarios? I wonder why the
> > >SER graph seems to be constant while the OpenSER looks like 
> exponential?
> >
> > I have been struggling with the measurement too (actually, I'm even
> > missing a PDD definition in the document). In a private conversation
> > with the authors I learned that the test scenario is actually about
> > randomized-order forking, with some of the destinations being
> > unavailable. That explains why SER has a constant failure rate, but
> > it does not explain why openser does better initially (perhaps
> > blacklisting is turned on by default in openser?) and then goes
> > exponential.
> >
> > A few more results would be good in this context too (a graph
> > showing the actual delay, as opposed to the percentage exceeding a
> > threshold -- the latter is fine for the 'big picture' but hard to
> > disaggregate when tracing what's actually going on).
> >
> > Another thing that came out of a private chat with the authors is
> > that the SER measurements were made in SER's debugging mode (which
> > is set by default in CVS). SER is compiled with PKG_MALLOC and
> > DBG_QM_MALLOC, while openser is not (it uses F_MALLOC).
> >
> > Otherwise, for the sake of completeness, I would like to see the
> > (open)ser config files attached (among other things, they would
> > reveal whether black-listing is turned on, as that should have a
> > dramatic impact on the results), described call-flows, and a
> > described scenario (I mean details about this randomized-order
> > forking).
> >
> > Apart from this piece of critique, I think it is a great
> > contribution, and with some work along the lines I have suggested it
> > will be an excellent document.
> >
> ------------------------------------------------------------------------
>
> _______________________________________________
> Devel mailing list
> Devel at openser.org
> http://openser.org/cgi-bin/mailman/listinfo/devel
>   
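For reference, the single timer change described in the quoted message
would look roughly like this in the sample configs (a sketch only; in
OpenSER/SER 1.x the tm module's fr_timer is expressed in seconds, and
the exact placement within module/osp/etc/sample-osp-openser.cfg may
differ):

```cfg
# Sketch (not the actual test config): shorten the tm module's
# final-response timer so unreachable destinations fail over after
# 2 sec instead of the default.
modparam("tm", "fr_timer", 2)
```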