[Devel] Re: [Serdev] OpenSER/SER with OSP performance test results

Jiri Kuthan jiri at iptel.org
Mon May 7 12:20:00 CEST 2007


Hi Di-Shi,

Thanks for the clarification. Would you mind including this in the document
so that its readers find it conveniently and are not puzzled like myself :-) ?
Ideally with re-measurements incorporating the feedback we provided (debug mode
and blacklisting)? Plus, would you include a precise definition of PDD
(from the client's emission of the request until the first response >100 is received)?
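For concreteness, the definition I have in mind could be sketched like this (a minimal illustration, not taken from the actual test harness; the function name, timestamps, and response codes are all hypothetical):

```python
# Sketch of the PDD definition proposed above: the time from the client's
# emission of the INVITE until the first provisional response with a
# status code greater than 100 (100 Trying is only hop-by-hop).

def post_dial_delay(invite_sent_at, responses):
    """responses: list of (timestamp, status_code) pairs."""
    for ts, code in sorted(responses):
        if 100 < code < 200:           # first provisional response > 100
            return ts - invite_sent_at
    return None                        # no qualifying response seen

# Example: INVITE at t=0.0; the 100 Trying at 0.05 is ignored,
# the 180 Ringing at 6.2 defines the PDD.
pdd = post_dial_delay(0.0, [(0.05, 100), (6.2, 180), (6.4, 200)])
print(pdd)  # 6.2
```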

I mean, it is really very useful work; it is just that, for best value, the document
should be more complete. If there is something we can help with, we will gladly do so
-- performance and performance stability have indeed been our focus for quite
some years :-)

-jiri

At 04:43 07/05/2007, di-shi at transnexus.com wrote:
>Hi Olaf and Jiri,
>
>Thank you for your comments about the test results.
>
>As we mentioned, this test was designed to understand the performance of
>OpenSER/SER in production environments. Some of the factors were randomized. The
>PDD is an indirect measure and not very precise, but some details may
>be useful for understanding the PDD graph.
>For every call,
>1. The source is a SIPp client.
>2. OpenSER/SER receives the INVITE message from the SIPp client and requests
>routing info from a set of OSP servers.
>3. There are 5 destinations configured on the OSP servers.
>    a. 3 devices unavailable for different reasons. Each causes an
>OpenSER/SER fr_timer timeout, which we set to 2 sec.
>    b. 1 device rejects the call and replies 404.
>    c. 1 good device. SIPp server.
>    OpenSER/SER gets these 5 destinations in random order. In the worst
>case, OpenSER/SER tries the SIPp server as the last destination, and the PDD
>should be 6 sec. For the OpenSER 1.1/1.2 tests, it is clear that the PDD depends
>on the load, which is reasonable. For SER, the results can be explained by the
>PDD being just a little longer than 6 sec in the worst case. The 6 sec threshold
>is not a good value; it should have been set to 6.1 sec. Unfortunately, we did
>not realize this until we had finished the test for SER.
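If I read the scenario above correctly, the worst-case PDD follows from simple arithmetic (a sketch under stated assumptions: three fr_timer timeouts of 2 sec each, a near-instantaneous 404 rejection, and the good SIPp server tried last):

```python
# Worst-case PDD for the scenario described above, assuming the randomized
# order places the good SIPp server last and the 404 reply adds no delay.
FR_TIMER = 2.0        # sec, as configured in the test
TIMEOUT_DESTS = 3     # unavailable devices, each costs one fr_timer
REJECT_DELAY = 0.0    # the 404 rejection is assumed near-instantaneous

worst_case_pdd = TIMEOUT_DESTS * FR_TIMER + REJECT_DELAY
print(worst_case_pdd)  # 6.0 sec -- which is why a 6 sec reporting
                       # threshold barely misses slightly slower calls
```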
>
>The OpenSER/SER configurations we used in the test can be found
>under module/osp/etc/sample-osp-openser.cfg and
>module/osp/etc/sample-osp-ser.cfg. We only changed fr_timer to 2 sec
>and set the OSP server IPs and the local device IP.
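For readers unfamiliar with those sample files, the change described above amounts to a one-line tm module parameter (a hedged sketch of an openser.cfg excerpt, not quoted from the actual files; in these 1.x-era releases the timer is expressed in seconds):

```
# excerpt only -- the single timer change made for the test
modparam("tm", "fr_timer", 2)    # INVITE response timeout, 2 sec
```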
>
>Thanks
>
>Di-Shi Sun.
>
>
>----- Original Message ----- 
>From: "Jiri Kuthan" <jiri at iptel.org>
>To: "Olaf Bergmann" <Olaf.Bergmann at freenet-ag.de>; "Di-Shi Sun" <di-shi at transnexus.com>
>Cc: <devel at openser.org>; <serdev at lists.iptel.org>
>Sent: Friday, May 04, 2007 6:29 PM
>Subject: Re: [Serdev] OpenSER/SER with OSP performance test results
>
>
>> At 09:16 04/05/2007, Olaf Bergmann wrote:
>> >Di-Shi Sun wrote:
>> >> Hi All,
>> >>
>> >> We have performed a benchmark test on OpenSER V1.1, V1.2 and SER 2.0 to
>> >> understand and compare the performance of the three releases in a
>> >> simulated production environment.
>> >
>> >Nice, thanks for this interesting piece of work.
>> >
>> >> Summary of the test results:
>> >> ============================
>> >> * The performance of OpenSER V1.2 and SER 2.0 are not materially
>> >> different, however, there are two minor differences.
>> >>   - SER V2.0 requires less memory.
>> >>   - OpenSER V1.2 has less post dial delay.
>> >
>> >Could you please comment on the PDD graph? As I understand it, the
>> >6+ seconds are caused by your failure scenarios? I wonder why the
>> >SER graph seems constant while the OpenSER one looks exponential?
>>
>> I have been struggling with the measurement too (actually, I am even
>> missing a PDD definition in the document). In a private conversation with
>> the authors I learned that the test scenario actually involves randomized-
>> order forking, with some of the destinations being unavailable. That
>> explains why SER has a constant failure rate, but it does not explain why
>> openser does better initially (perhaps blacklisting is turned on
>> by default in openser?) and then grows exponentially.
>>
>> A few more results would be good in this context too (a graph showing
>> the actual delay, as opposed to the percentage exceeding a threshold --
>> which is fine for the 'big picture' but hard to disaggregate for tracing
>> what's actually going on).
>>
>> Another thing that came out of a private chat with the authors is that the
>> ser measurements were made in SER's debugging mode (which is set by default
>> in CVS): SER is compiled with PKG_MALLOC and DBG_QM_MALLOC, while
>> openser is compiled without them (F_MALLOC).
>>
>> Otherwise, for the sake of completeness, I would appreciate the (open)ser
>> config files being attached (among other things, they would reveal whether
>> blacklisting is turned on, as that should have a dramatic impact on the
>> results), along with described call-flows and a described scenario (I mean
>> details about this randomized-order forking).
>>
>> Apart from this piece of critique, I think it is a great contribution and
>> with some work along the lines I have suggested it will be an excellent
>> document.
>>
>> -jiri
>>
>>
>>
>> --
>> Jiri Kuthan            http://iptel.org/~jiri/
>>
>>
>>



--
Jiri Kuthan            http://iptel.org/~jiri/



