[Serdev] SER's core design features (process model/parser/lumps/script) - was: So who/what is SER for, anyway?

Greger V. Teigre greger at teigre.com
Mon Jan 29 20:38:28 UTC 2007



Dragos Vingarzan wrote:
> Greger V. Teigre wrote:
>   
>>>    - believe it or not, I might want to write a module that needs more
>>> processes to be efficient. SER is no longer "just a SIP proxy".
>>>       
>> Hm, sounds a bit complicated to me. You mean that you need a process
>> model where you can create more processes from a module? I don't really
>> see the use case and in what way the current model is hindering you.
>> Could you please elaborate?
>>     
>>>    - yes, I know about the FIFO, RPC and whatever other interfaces, but
>>> is anyone fully satisfied with that?
>>>       
>> You are mixing FIFO and RPC into the discussion of the process model.
>> I'm a bit confused?! Maybe an example (or explanation of the needs of
>> the openimscore project) would help.
>>
>>     
> There are cases when you need to integrate SER with other servers, like
> the diameter case. The I/S-CSCF "speak" both SIP and Diameter. Many
> CSCFs based on SER choose to use a separate Diameter peer that
> communicates somehow with SER. I took another approach and integrated
> the new protocol into SER. Anyway, I would've re-used most of SER's
> functionality, so it was easier.
>   
Maybe not the right time to say this, but honestly, that sounds like a 
bad design decision. Monolithic applications with all sorts of 
interfaces increase complexity and the cost of maintenance, and 
exponentially increase the number of potential bugs, etc. etc.  Maybe 
there are things I don't see here, but with the appropriate interfaces, 
I cannot really see why you would want Diameter built into the ser binary?!
> When you do this kind of integration/extensions, the big problem is that
> you need to push some events to SER from outside and you need them
> pushed efficiently. Having them all processed through a FIFO, RPC or
> timer process is not the best idea. So I am forking processes from
> modules. The problem at the beginning was that the processes were not 
> (and still aren't) clearly defined - what should be done when you fork.
> Code was duplicated in many places for the same thing. This is just one
> example of the lack of design, compounded by hacks over time, that
> bothered me so much that I changed it. But there are so many others that
> I hope will be changed.
>   
But if you create Diameter separately, you can even run Diameter on a 
separate physical server, do load balancing across SER servers (if they 
replicate), you can create a queue, you can handle DoS attacks on 
Diameter, you can maintain Diameter with a separate team of developers 
who know nothing of SER, etc etc ...

> And I think that Diameter is not the only protocol that makes sense to
> integrate. And not only clear-cut protocols need to be integrated.
> Maybe state replication could be done more efficiently with a dedicated
> process, for example.
>   
Yes, internal functions in SER may be candidates. I'm not saying that 
the process model should be protected; in fact, I don't know much about 
the process model and want to keep it that way. I just want to make sure 
that SER will be maintainable in the future and continue to do the 
things it has done for people up to now (and maybe do some new stuff :-)
g-)
>> I couldn't agree more. Martin has also pointed out that the module
>> interface needs a revamp.  I think maybe such a revamp could form the
>> core of more adaptable SER. There are too many things that each module
>> has to take care of, and the API(s) available to the modules are not
>> well documented. The code also has some history: some modules use
>> old functions that should have been deprecated but have not been updated.  I
>> agree with you, a clean interface with a clearly defined API would make
>> it a lot easier to develop modules for SER.
>>
>>     
> What about introducing this kind of new gen modules in parallel with the
> old ones? You are right about the history, so it would take some time
> until all the ol' reliable ones would be ported.
>   
>> lumps for implementation. The performance degradation would probably not
>> be too much?!
>>
>>     
> for sure less than having to loop the messages when you want to actually
> see what others changed.
>
> I was thinking about keeping the original message buffer, yet
> introducing new pointers for new headers. It can be detected if one
> header is the original by its pointer, but of course the whole message
>> access needs to be wrapped in an API. We lose due to the API overhead,
> but memory requirements are still at the lowest given the circumstances.
>
> -Dragos
>
>
>   

