On Dec 23, 2024, at 3:14 pm, Ben Kaufman <bkaufman(a)bcmone.com> wrote:
The difference in performance is substantial. Your characterization of the performance
of the two modules as near-equal is incorrect. Both in theory and in practice, it is
the better of the two options.
If you confine the scope of your evaluation to Kamailio itself, then of course it's
substantial; you've deputised the event mux/polling workload into kernel space. While
that makes it invisible, it doesn't obviate the clock cycles. I'm talking about
formal performance.
This is like saying that Node can handle a tremendous amount of requests with a single
process. I suppose it can, but only because it's able to farm out the work of
monitoring sockets for data to the OS, and erase it from the ledger, if you like, of
userspace costs. They still go somewhere, and that somewhere is constrained by available
resources and dimensioning.
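To make the Node analogy concrete, here is a minimal Python sketch (illustrative only,
and nothing to do with Kamailio's actual implementation) of what "farming out socket
monitoring to the OS" means: one process registers many sockets with the kernel's
readiness facility (epoll/kqueue, via the selectors module) and sleeps in a single
syscall until the kernel reports which descriptors have data. The userspace process
looks nearly idle, but the bookkeeping has simply moved into the kernel:

```python
# Illustrative sketch: one process monitoring many sockets by delegating
# readiness polling to the kernel (epoll/kqueue under the hood).
import selectors
import socket

sel = selectors.DefaultSelector()

# Simulate many peers with socketpairs; in a real server these would be
# accepted connections.
pairs = [socket.socketpair() for _ in range(100)]
for server_side, _ in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

# Write to a handful of peers; the kernel, not userspace, tracks which
# descriptors became readable.
for _, client_side in pairs[:5]:
    client_side.sendall(b"ping")

# One syscall wakes us with exactly the ready sockets.
ready = sel.select(timeout=1)
print(len(ready))  # 5: only the sockets with pending data

for server_side, client_side in pairs:
    server_side.close()
    client_side.close()
sel.close()
```

The per-descriptor polling work has not vanished; it is now spent in kernel space,
which is exactly the "invisible but not free" point above.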
Assuming you've used this repo and assets to make your case:
https://github.com/whosgonna/kamailio_http_async
this is not a fair or reasonable comparison. In one case, you're using an external
polling loop, and in the other, you're blocking your worker processes by definition. I
doubt you could get 3 CPS through that config if the async shvar is set to 0.
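To put a rough number on that claim: with blocking workers, sustainable throughput is
bounded by the number of workers divided by the per-call service time. A back-of-envelope
calculation, assuming the two worker processes ("-n 2", as in the top output further
down) and the one-second HTTP responder from that repo:

```python
# Back-of-envelope bound: blocking workers can sustain at most
# workers / service_time calls per second.
workers = 2           # assumed: "-n 2" worker processes, per the top output
service_time_s = 1.0  # the metronomic 1-second HTTP responder
max_cps = workers / service_time_s
print(max_cps)  # 2.0 -- consistent with 3 CPS being out of reach
```

Any calls arriving faster than that queue behind busy workers and eventually time out,
regardless of how fast the CPU is.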
Furthermore, the idea of an HTTP service that responds like a metronome in 1 sec, over
local sockets and without any of the overhead of connection setup over a real-world
network, is so contrived as to be tautological. You've neutered the synchronous
approach to the maximum possible extent, while testing the asynchronous one in highly
idealised conditions. In fact, I tried your repo and followed your instructions on a
hex-core server with 16 GB of RAM (2 GB of SHM allocated to Kamailio), and was able to
get about 1600 CPS -- more than twice the OP's ask -- before seeing any retransmissions.
The comparison is tautological in the same way as saying that being rich, young and
healthy is better than being old, ill and poor. I couldn't agree more.
If you empower the synchronous http_client approach with a comparable degree of
parallelism, i.e. a large pool of worker processes with minimum package memory, you'll
get comparable throughput. I agree that there are memory limits around that in
Kamailio's concurrency model, and that was never in dispute. But you're going to
pay it somewhere either way:
Mem: 16175052K used, 217632K free, 266764K shrd, 219644K buff, 10489916K cached
CPU: 22% usr 15% sys 0% nic 58% idle 0% io 0% irq 3% sirq
Load average: 0.46 0.54 0.45 7/555 32
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
14 1 root R 2076m 12% 3 14% kamailio -dDDE -de -n 2 -m 2048
10 1 root R 2073m 12% 1 3% kamailio -dDDE -de -n 2 -m 2048
9 1 root S 2073m 12% 3 3% kamailio -dDDE -de -n 2 -m 2048
12 1 root S 2073m 12% 0 0% kamailio -dDDE -de -n 2 -m 2048
1 0 root S 2073m 12% 4 0% kamailio -dDDE -de -n 2 -m 2048
17 1 root S 2073m 12% 4 0% kamailio -dDDE -de -n 2 -m 2048
13 1 root S 2073m 12% 1 0% kamailio -dDDE -de -n 2 -m 2048
16 1 root S 2073m 12% 3 0% kamailio -dDDE -de -n 2 -m 2048
11 1 root S 2073m 12% 5 0% kamailio -dDDE -de -n 2 -m 2048
7 1 root S 2073m 12% 5 0% kamailio -dDDE -de -n 2 -m 2048
8 1 root S 2073m 12% 2 0% kamailio -dDDE -de -n 2 -m 2048
15 1 root S 2073m 12% 4 0% kamailio -dDDE -de -n 2 -m 2048
18 1 root S 2073m 12% 2 0% kamailio -dDDE -de -n 2 -m 2048
26 0 root S 1696 0% 1 0% ash
32 26 root R 1624 0% 0 0% top -d 1
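The flip side of that arithmetic, sketched below: matching the ~1600 CPS async figure
with blocking calls requires one in-flight worker per concurrent call (Little's law,
L = lambda * W), and every worker carries a per-process memory footprint. The 4 MiB
per-worker figure here is purely an assumed illustration, not a measured Kamailio value:

```python
# "You pay it somewhere": matching async throughput with blocking
# workers means one in-flight worker per concurrent call.
target_cps = 1600        # the async throughput observed above
service_time_s = 1.0     # the 1-second HTTP responder
workers_needed = int(target_cps * service_time_s)  # Little's law: L = lambda * W

per_worker_mib = 4       # assumed minimal per-process pkg footprint, illustrative
print(workers_needed, "workers,", workers_needed * per_worker_mib, "MiB")
```

Whether the cost lands as kernel polling work, shared memory for suspended transactions,
or a fleet of worker processes, it lands.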
Async is not magic. This is why we say it moves the problem around.
-- Alex
--
Alex Balashov
Principal Consultant
Evariste Systems LLC
Web: https://evaristesys.com
Tel: +1-706-510-6800