[sr-dev] git:andrei/raw_sock: core: basic raw socket support functions
marius zbihlei
marius.zbihlei at 1and1.ro
Fri Jun 18 11:42:40 CEST 2010
Andrei Pelinescu-Onciul wrote:
> On Jun 14, 2010 at 15:08, marius zbihlei <marius.zbihlei at 1and1.ro> wrote:
>
>> Andrei Pelinescu-Onciul wrote:
>>
>>> On Jun 14, 2010 at 12:37, marius zbihlei <marius.zbihlei at 1and1.ro> wrote:
>>>
>>>
>> Hello,
>>
>> I want to test using some small SIP replies (so I am not sure whether
>> fragmentation comes into play). Of course the fragmentation code should
>> also be tested for performance. SER integration is another issue: do
>> you plan to use a global parameter to switch between the normal sendto()
>> function and raw sockets?
>>
>
> Yes, a global param for raw sockets, another for mtu (at least for now)
> and in the future an option to use raw sockets also for listening
> (they might be useful for a transparent proxy / load balancer
> implementation and who knows, maybe we get a nice surprise testing
> performance with them).
>
> Andrei
>
>
Hello Andrei,
Just performed a couple of tests (I was busy myself), but I think I have
some interesting results. I tested with 25 UAC/UAS pairs per test
server, each pair generating 500 calls/s, for a total of 12,500 calls/s.
The test servers (each running 25 sipp instances as UAC and 25 as UAS on
different ports) were 2 quad core Xeon machines in the same LAN
(Gigabit Ethernet between them). SER was doing a simple forward() based
on the R-URI of the request, with 8 worker processes.
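For completeness, the test setup boiled down to something like the sketch below (hypothetical, from memory, not the exact file; udp_raw is the parameter name used in this thread):

```cfg
# sketch of the test config (not the exact file used)
children=8     # 8 worker processes, as described above

# global switch discussed in this thread:
# 0 = normal sendto(), 1 = raw-socket sending
udp_raw=1

route {
    # stateless forward to the destination in the Request-URI
    forward();
}
```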
1. SER on a quad core Xeon, kernel 2.6.26.
a. I enabled just one test server, for a total of 12,500 calls/s.
In this case the CPU usage was worse with plain UDP sockets
(udp_raw=0); the median values were:
"usr", "sys", "idl", "wai", "hiq", "siq"
13.584, 15.030, 50.713, 0.0, 2.950, 17.723
For raw sockets (udp_raw=1) the values were:
"usr", "sys", "idl", "wai", "hiq", "siq"
10.396, 4.950, 76.238, 0.0, 2.970, 5.446
So the biggest difference is in software irq servicing time (the siq
column) and in sys. The comparable usr CPU is a little odd; I expected
it to be higher in raw socket mode.
b. I enabled both test machines, for a total of 25,000 calls/s.
In this case the CPU usage was almost identical, but mostly because the
sipp instances could not sustain 500 reqs/s in UDP mode. I limited sipp
to 20,000 calls per UAC/UAS pair. With raw sockets it took an average
of 55 s (closer to the ideal value of 40 s), but in UDP mode it took
almost 88 s to send the 20,000 calls. The system load was the same
(27% idle).
2. SER on a dual quad core Xeon, kernel 2.6.32
I have done only some basic runs, but the results are not consistent
with the ones from the other SER machine: siq time is the same and the
rate is steady at 500 calls/s, but user CPU is higher in raw socket mode.
I dug around a bit and came across two interesting patches in 2.6.29:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=645ca708f936b2fbeb79e52d7823e3eb2c0905f8
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=271b72c7fa82c2c7a795bc16896149933110672d
The release notes are here:
http://kernelnewbies.org/Linux_2_6_29#head-612c6b882f705935cc804d4af0b383167a2f789f
As time allows I will rerun some tests and provide graphs if necessary.
Marius