Hello,

I just realized that I had the dispatcher configured to use a hash over the Call-ID. That means that after recvfrom() there is extra processing to find the Call-ID header in the message and compute the hash before forward() is called. The more processing there is, the more cases in which the 200 could arrive before the 180. I just changed it to round robin and the number of occurrences decreased a lot, but it is still there: if I send a burst of 1000 messages, about 5 of them leave out of order every time.
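
In case it is useful, the change boils down to the ds_select_dst() algorithm. This is only a simplified sketch of that part of the config (the set id and the surrounding routing logic are placeholders, not my full request_route):

    loadmodule "dispatcher.so"
    modparam("dispatcher", "list_file", "/etc/kamailio/dispatcher.list")

    request_route {
        # before: ds_select_dst("1", "0");   # algorithm 0 = hash over Call-ID
        if (!ds_select_dst("1", "4")) {      # algorithm 4 = round-robin
            exit;
        }
        forward();   # stateless forward to the destination set by the dispatcher
    }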

Best regards,

Luis



On 4/9/20 1:48 PM, Luis Rojas G. wrote:
Hello,

I have a lot of experience developing multithreaded applications, and I don't find it unlikely at all that a process loses the CPU just after recvfrom(). It is just as probable as losing it just before, or while writing to a cache, or just before or after sendto(). If there are many messages going through, some of them will hit this scenario. If I send a burst of 100 messages, I see two or three of them presenting it.

Just forward() with a single process does not give me the capacity. I get almost 1000 caps; beyond that I start getting errors, retransmissions, etc. And that is just one direction. The call also has to go back towards the network (our application is a B2BUA), so I would be down to 500 caps, and that is with a simple scenario, with no reliable responses, re-INVITEs, UPDATEs, etc. I would end up with as many standalone Kamailio processes as the servers I have now.

I really think the simplest approach would be to add a small delay to the 200 OK. Something very small, like 10ms, should be enough. Simple, and it should work. It is what Alex Balashov commented he did for the ACK/re-INVITE case.

I have to figure out how to make async_ms_sleep() work in reply_route().
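
What I have in mind is roughly the sketch below. It is untested, and the part I still have to confirm is precisely whether async_ms_sleep() can suspend processing in a reply route at all (it may be restricted to request routes), so take it only as the shape of the idea; the 10ms value is arbitrary:

    loadmodule "async.so"
    modparam("async", "workers", 2)

    onreply_route {
        if (is_method("INVITE") && status == "200") {
            # hold the 200 OK briefly so a 180 already in flight is not overtaken
            async_ms_sleep(10);
        }
    }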

Thanks for all the comments and ideas

Best regards,

Luis



On 4/9/20 12:17 PM, Daniel-Constantin Mierla wrote:

Hello,

Then the overtaking happens between reading from the socket and getting to parse the Call-ID value -- the CPU is lost by the first reader right after recvfrom() and the second process gets enough CPU time to go further ahead. I haven't encountered this case myself; as I said previously, it is very unlikely, but still possible. I added the route_locks_size parameter because in the past I had cases where the processing of some messages took longer executing the config (e.g., due to authentication, accounting, ...) and I needed to be sure they were processed in the order they entered config execution.
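
For reference, it is a core parameter, set with something like the line below (the value is only an example, size it for your traffic):

    # serialize config execution for messages sharing the same Call-ID
    route_locks_size=2048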

Then the option is to see whether a single process with stateless sending out (using forward()) gives you the capacity, if you don't do any other complex processing. Or, if you do more complex processing, use a dispatcher process forwarding to localhost, or in a similar manner try to use mqueue+rtimer for dispatching via shared memory queues. A sketch of the first variant follows below.
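
The single-process stateless variant is basically just this (a sketch; the next-hop address is a placeholder):

    children=1   # a single UDP worker keeps the forwarding order

    request_route {
        $du = "sip:192.0.2.10:5060";   # placeholder next hop
        forward();                     # stateless forward, replies follow Via
    }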

Of course, it is open source, so there is also the C coding way: add a synchronization mechanism that protects against parallel execution of the code from recvfrom() until the Call-ID lock is acquired.

Cheers,
Daniel


-- 
Luis Rojas
Software Architect
Sixbell
Los Leones 1200
Providencia
Santiago, Chile
Phone: (+56-2) 22001288
mailto:luis.rojas@sixbell.com
http://www.sixbell.com