Hello,
On 12/15/06 21:59, Jiri Kuthan wrote:
> At 00:33 28/11/2006, Vaclav Kubart wrote:
>> On Mon, Nov 27, 2006 at 11:16:01PM +0200,
>> Daniel-Constantin Mierla wrote:
>>> So which are right: these results, the previous ones, or the next
>>> ones? You tested something you are very familiar with (ser) against
>>> something you do not know how to configure properly. There are
>>> internals that differ a lot and can have a significant performance
>>> impact.
>
> Hi Daniel,
> just to make sure your concerns about the accuracy of the measurements
> do not remain unattended -- has the version-mismatch concern been
> addressed for you? Are there still internals-related aspects that have
> not been accounted for and could make the benchmarking results
> inaccurate?
as you should know, there are many internal parameters that can make the
results differ completely (e.g., number of processes, size of memory,
size of hash tables, and so on). A comparison is accurate only when
every such parameter is matched, which is hard to do and which I have no
time for -- so my concerns still stand. Using the same config does not
mean all internal parameters have the same values (see the sketch
below). Also, as has been shown before, misbehaviors or bugs in the
software can inflate (or deflate) the measured performance to levels
that look impressive but are useless, or worse.
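To make that concrete, here is a minimal sketch of the kind of knobs I
mean. It follows openser.cfg conventions, but exact parameter names,
defaults, and availability vary between ser and openser versions, so
treat the values below as illustrative placeholders, not a tuned setup:

    # illustrative openser.cfg fragment -- values are placeholders
    children=8                 # worker processes per listening socket

    loadmodule "tm.so"
    loadmodule "usrloc.so"

    # location-table hash size; in versions where this is exposed, it
    # is typically given as a power-of-two exponent
    modparam("usrloc", "hash_size", 12)

    # the shared-memory pool is sized at startup, not in the config:
    #   openser -m 512 -f /etc/openser/openser.cfg

Two proxies started from the "same" config can thus still differ on all
of these, plus internals (memory allocator, timer design) that no config
file exposes at all.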
We will do performance tests for OpenSER only once we reach the testing
phase of the current development cycle. Since that moment is not far off
(see the roadmap), it makes no sense to spend time until then testing a
moving target. The tests will cover only OpenSER, because I expect the
right knowledge to be applied there, thus avoiding misleading users
(intentionally or not).
Daniel