[sr-dev] Migration of Open IMS Core to sip-router
Dragos Vingarzan
dragos.vingarzan at gmail.com
Wed Jul 29 14:58:44 CEST 2009
I see... so it seems quite complicated to add all the required locks and
to redesign process_no and my_pid() for not much of a benefit. I had not
seen this before.
Well, if this is final and the conclusion is that the restrictions on
dynamically forked processes should stay in place, then I'll start
redesigning my module. It's not a huge deal, but right now the code is
much clearer, easier to manage and also potentially faster if each
Diameter TCP connection has its own process. But this is not a must, and
one universal acceptor/receiver forked at the beginning could do all the
ops, much like the TCP structure from ser, right? Were there any
performance issues due to bottlenecks or something like that?
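Roughly what I mean, as a quick sketch (all names here are made up for
illustration, this is not the actual cdp or ser code): a single receiver
process, forked once at startup, multiplexing every Diameter TCP
connection with poll(), so nothing is forked or dropped at runtime and
process_no never changes:

#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_PEERS 64

static struct pollfd peer_fds[MAX_PEERS];
static int n_fds;

/* accept a new Diameter peer connection and watch it for input */
static void accept_new_peer(int listen_fd)
{
	int fd;

	if (n_fds >= MAX_PEERS)
		return;
	fd = accept(listen_fd, NULL, NULL);
	if (fd < 0)
		return;
	peer_fds[n_fds].fd = fd;
	peer_fds[n_fds].events = POLLIN;
	n_fds++;
}

/* read one chunk from a peer; real code would parse and dispatch the
 * Diameter message here */
static void handle_peer_data(int i)
{
	char buf[4096];
	ssize_t n = read(peer_fds[i].fd, buf, sizeof(buf));

	if (n <= 0) {
		close(peer_fds[i].fd);
		peer_fds[i] = peer_fds[--n_fds]; /* drop the dead peer */
	}
}

/* main loop of the single receiver process, forked once at startup */
void diameter_receiver_loop(int listen_fd)
{
	int i;

	memset(peer_fds, 0, sizeof(peer_fds));
	peer_fds[0].fd = listen_fd;
	peer_fds[0].events = POLLIN;
	n_fds = 1;

	for (;;) {
		if (poll(peer_fds, n_fds, -1) <= 0)
			continue;
		if (peer_fds[0].revents & POLLIN)
			accept_new_peer(listen_fd);
		/* walk backwards so removing a peer does not skip entries */
		for (i = n_fds - 1; i >= 1; i--)
			if (peer_fds[i].revents & (POLLIN | POLLHUP))
				handle_peer_data(i);
	}
}

The trade-off compared to my current design is that all peer state lives
in this one process instead of being isolated per fork.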
(I'll probably still keep my original design as well, for when the
CDiameterPeer is used standalone, outside of ser...)
Cheers,
-Dragos
Andrei Pelinescu-Onciul wrote:
> On Jul 23, 2009 at 19:59, Dragos Vingarzan <dragos.vingarzan at gmail.com> wrote:
> [...]
>
>> - sip-router_pt.diff
>> - added a drop_my_process() function - in the cdp module (Diameter)
>> we do have dynamic processes, which fork and exit separately from the
>> ser ones, so we need this to clean up. Without it, such usage would not
>> be possible, as the process table would fill up and new forks would
>> then be denied
>>
>
> That's very problematic. It breaks process_no, my_pid() and the
> assumption that the process number does not change.
> These assumptions are used when doing statistics (e.g. tm): a shared
> mem array is created with one "entry" for each process. Each process
> updates its own entry (e.g. tm_stats[process_no].s.t_created++ )
> without needing any locking or atomic ops (which scale very badly on
> multi-CPUs due to cacheline ping-pong).
> The same assumptions are used in the shm malloc ng (only testing
> prototypes for now in ll_malloc) and might be used in the future for
> implementing an RCU-like mechanism.
>
>
> Andrei
>
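For context, the per-process statistics pattern described above works
roughly like this (a sketch with made-up names, modeled on the
tm_stats example; only shm_malloc() and process_no are from ser's core):

#include <string.h>
#include "mem/shm_mem.h"  /* ser core: shm_malloc() */
#include "pt.h"           /* ser core: process_no   */

/* illustrative per-process counters (hypothetical struct) */
struct proc_stats {
	unsigned long t_created;
	unsigned long t_freed;
};

static struct proc_stats *stats_tbl; /* one entry per process, in shm */
static int stats_procs;

/* before forking: one slot for every process that will ever exist */
int stats_init(int process_count)
{
	stats_procs = process_count;
	stats_tbl = shm_malloc(stats_procs * sizeof(*stats_tbl));
	if (stats_tbl == 0)
		return -1;
	memset(stats_tbl, 0, stats_procs * sizeof(*stats_tbl));
	return 0;
}

/* each process writes only its own slot, so no lock or atomic op is
 * needed; e.g. STATS_INC(t_created); in the worker process.
 * This is only safe while process_no is unique and never changes. */
#define STATS_INC(field) (stats_tbl[process_no].field++)

/* readers just sum over all slots (slightly stale totals are fine) */
unsigned long stats_total_created(void)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < stats_procs; i++)
		sum += stats_tbl[i].t_created;
	return sum;
}

This is exactly what breaks if a process can drop its slot and a later
fork reuses the index: two processes could end up writing the same entry
with no locking to protect it.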