Right, these tests were not objective; I don't know of any truly objective
tests, but I tried to do them. Please, does anybody have a suggestion
for what I should set up (config, compilation parameters) to get more
objective tests?
thanks,
Vaclav
P.S. Resending message because it was marked as spam, sorry.
On Tue, Nov 28, 2006 at 12:28:12AM +0100, Andrei Pelinescu-Onciul wrote:
On Nov 27, 2006 at 23:16, Daniel-Constantin Mierla
<daniel(a)voice-system.ro> wrote:
As I can see, you are getting better and better with openser; maybe you
can get some training so you will be able to configure and tune it
properly to fit your needs and get the appropriate results (googling
will reveal some people doing trainings for openser). So which results
are right: these, the previous ones, or the next ones? You tested
something which (I suppose) you are very familiar with (ser) against
something that you do not know how to configure properly. There are some
internals that differ a lot and may have quite a large performance
impact.
Such as?
I really think that you should try to chill out a little.
Instead of being happy that someone ran some benchmarks (which you
should probably do more often), you keep trying to attack our tests,
but you fail to point out what's wrong with them (you can see all the
configs and the compile parameters used).
As a matter of fact there was an error, but it involved ser: ser.cvs
was compiled with malloc debugging (-DDBG_QM_MALLOC and without
-DF_MALLOC). So the ser pre-release is even faster.
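For reference, the relevant area of ser's Makefile.defs looks roughly
like this (a sketch only; the real file contains many more defines
around these two):

# sketch of the allocator toggles in Makefile.defs
#DEFS+= -DDBG_QM_MALLOC   # debugging allocator: extra bookkeeping, slow
DEFS+= -DF_MALLOC         # fast allocator, the one to benchmark with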
These were only some benchmarks; you can fix the performance problems in
a new release, or you can ignore them and concentrate on stuff you find
more important.
If you dislike being benchmarked so much, then put something like
"please don't publish benchmark results for openser" in the README and/
or on the web page.
Just after you sent this mail, I saw a commit to usrloc which solves
some problems I pointed out in my previous email in this thread (so,
yes, we are concerned about performance, and it seems you now need to
catch up in some areas). The difference now is that ser's usrloc hash
table size is 2^14 (16384) by default, while in openser it is 2^9 (512).
So I guess some tests you run now will be faster for ser, but to change
the hash size in ser you have to recompile, as opposed to openser,
where you can do it via a module parameter, as sketched below.
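For example, something along these lines in openser.cfg (a hypothetical
fragment: the exact parameter name, and whether it takes the table size
itself or the 2^k exponent, are assumptions -- check the usrloc module
README for your version):

loadmodule "/usr/local/lib/openser/modules/usrloc.so"
# hypothetical: ask usrloc for a 2^14-entry hash table, no recompile
modparam("usrloc", "hash_size", 14)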
To answer in the same spirit in which you wrote all your mails in this
thread: if this was not a cosmetic fix (you've just replaced the hash
with the hash from the core) and you were really concerned about
performance, then why haven't you changed the hash table size too?
512 is far too small (I won't give any performance numbers, I don't
want to make you even angrier :-)).
Why would you want to change the hash size from the config? Do you
really know somebody who wanted/needed to do this? If you use a variable
for the hash size, the compiler will not be able to optimize the modulo
operation (x % hash_size) and will have to implement it using slow
DIVs. With a 2^k constant, the whole modulo operation is optimized
to & (2^k - 1). A DIV takes a minimum of 56 cycles on a P4 and 16 on an
AMD64 (and that is if the operands are in _registers_, which will
probably not be the case for the variable containing the hash size). An
"and" takes only 1 cycle (and the operand is an immediate constant).
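To make this concrete, here is a minimal sketch (hypothetical code, not
taken from ser or openser) of the two cases the compiler sees:

/* sketch only -- not ser/openser code */
#define HASH_SIZE (1 << 14)        /* 2^14 = 16384, known at compile time */

unsigned int hash_size = 1 << 9;   /* runtime value, e.g. set from config */

unsigned int slot_const(unsigned int h)
{
	/* strength-reduced by the compiler to: h & (HASH_SIZE - 1) */
	return h % HASH_SIZE;
}

unsigned int slot_var(unsigned int h)
{
	/* hash_size is not known at compile time => a real DIV,
	 * plus a load of hash_size from memory */
	return h % hash_size;
}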
Look also at the rest of the hash function: it uses only XORs, shifts
and additions, operations that all execute in 1 cycle, so for normal
short URIs the DIV operation alone will take a very significant share of
the total amount of time spent hashing.
Note also that there is another difference: we use a different hash
function in ser, one that gives very good distributions for numbers too
(e.g. 12345(a)foo.bar), not only for strings. A generic hash of that
style is sketched below.
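(Sketch only; this is NOT ser's actual hash function, just an
illustration of the xor/shift/add style being discussed:)

/* illustrative string hash: shifts, adds and xors only -- no DIV
 * anywhere until the final bucket selection */
unsigned int str_hash(const char *s, unsigned int len)
{
	unsigned int h = 0;
	unsigned int i;

	for (i = 0; i < len; i++)
		h ^= (h << 5) + (h >> 2) + (unsigned char)s[i];
	/* bucket selection: cheap only because the size is 2^k */
	return h & ((1 << 14) - 1);
}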
So please, for future tests, try to make sure that the corresponding
parameters have the same values: number of processes, memory, hash
table sizes, and so on. If you are not able to do fair tests, it is
better to leave it to some impartial entity. With the description of
your tests, a lot of parameters and variables are hidden and do not
reflect the real capacity of the applications (I can say this now at
least for openser).
We never claimed this was the most optimized setup possible. As I
pointed out, ser was the one which was handicapped.
I would, however, appreciate it if you would share some of the secret
openser tuning that would have changed the test results. Speaking of the
hashes: it's not our fault if you chose the wrong values for them. Hash
size is always a compromise between memory usage and speed (and usually
some testing is necessary to find the best values); openser was neither
exactly a "low memory" user nor fast.
Note: I have no idea whether you really chose bad hash sizes for tm
(since this is what you seem to imply), but if you kept the values from
0.9.4 you should be safe.
And just to remind you: as proved, you got very good performance but
wrong processing. I will ask you and the users: do you prefer to have
*high performance* tied to *invalid processing* of the requests?
http://openser.org/pipermail/users/2006-November/007843.html
Well, maybe for some folks it is a point of pride to say *ser offers
very fast _invalid_ functionality* or *ser can scale to millions of
_offline_ users*.
Let me thank you once again for the bug report. However, let me point
out that what you call invalid functionality is a minor variable-timer
problem that, in the worst case, will extend a final response timer in
ser 0.9.x (if variable timers are used).
You owe it to the users to say that the performance figures were
obtained with *buggy SIP transaction processing*, and to correct the
web pages posted at the link above.
Do you mean in openser? Sorry, all this time I thought you were in
denial; I hadn't expected such a straightforward confession :-)
Actually, I replied to this mail because of some accusations that 'we
spread the rumor "ser is dead"'. Nowhere can you find such a statement
from our side. Some investigation showed that the phrase in question is:
"Soon after, /iptelorg.com Gmbh/ was sold to /Tekelec/, which had no
intention to continue the development of the public project."
from the openser history
(http://www.openser.org/index.php?option=com_content&task=view&id=40&Itemid=61).
Indeed, you're not saying directly that ser is dead; you're only
spreading FUD.
This is quite different from "ser is dead", and many of you know the
statement made in the "openser history" is true. I cannot name some
private channels I have, but there are publicly spoken ones. I paste
from https://mail.internet2.edu/wws/arc/sip.edu/2006-10/msg00003.html:
<snip>
Christian says that he was surprised at the continued development of
SER after speaking with people from Tekelec, as they mentioned that
they were not as interested in continuing SER as an open source
project and were more interested in integrating SER into their IMS
offerings. He asks about the future of SER as an open source
project. Jiri feels that there should be no issues, and the main
contributors are still making contributions. He's hesitant to speak on
the behalf of the company, but feels that based on past performance
nothing should be changing.
</snip>
I would politely ask those persons not to twist phrases without giving
good proof.
Users can now see who is trying to distort reality, and how.
Cheers,
Daniel
Andrei, amazed at how much anger a few simple benchmarks can produce
[...]