[sr-dev] IRC meeting log May 16, 2013

Vicente Hernando vhernando at systemonenoc.com
Fri May 17 10:56:28 CEST 2013


Just to save Daniel some work, attached is yesterday's IRC log.

Kind regards,
Vicente.
-------------- next part --------------
*******************************************************************************
IRC MEETING LOG: MAY 16, 2013  at #Kamailio on irc.freenode.org
*******************************************************************************

*** #Kamailio was created on Wednesday 2008/07/30 03:46:08 PM

***************** MEETING STARTS *****************************************

<miconda> hi everyone!
<miconda> preparing to start in 1-2 min, to allow some people getting in...
<oej> :-)
<linuxmaniac> :-)  [16:01]
<miconda> meanwhile, check the agenda:
		  https://www.kamailio.org/wiki/devel/irc-meetings/2013a
<miconda> and add there or propose here if you have something new to discuss
<miconda> ok ? so I guess we can start now
<miconda> first one - any critical issue with the latest stable series?
<miconda> (I will try to follow the topics from the wiki, with the last one
		  being the roadmap to next major release v4.1.0)  [16:04]
<miconda> we did a minor release 4.0.1 recently, from that moment not many
		  changes, iirc
<miconda> perhaps we will do 4.0.2 after few more weeks
<miconda> any comments? or I am disconnected :-)  [16:06]
<pdunkley> No comments from me
<linuxmaniac> non
<gmaruzz> lurking
<hughw> I'm here! - I've just added a possible feature to the agenda
<miconda> no need to reply every time for no comments, I was just probing for
		  this one...  [16:07]
<miconda> hughw: ok
<linuxmaniac> hughw: nice one
<miconda> ok, so moving to next  [16:08]
<miconda> compile time enabled features from core to be moved as module
<miconda> sctp is the one left from the list of stun and dnssec
<miconda> any other you are aware of?
<pdunkley> I don't know of anything else.
<oej> Neither do I

************** COMPILE-TIME OPTIONS VS CONFIGURATION FILE OPTIONS *************

<pdunkley> Are there any compile-time options that should be configuration
		   file options?  [16:10]
<miconda> regarding sctp: i looked a bit at it, I thought it would be easy to make
		  an api just for the functions related to libsctp
<oej> There's some old flags for TLS and IPv6 in the build system, but I think
	  the meaning got lost years ago...
<miconda> but it is not that simple
<miconda> I will try another approach
<miconda> moving all as module, including the core parameters for sctp and
		  make an api for the core
<miconda> as opposed to an api for sctp options set via libsctp  [16:11]
<miconda> IPv6 can still be disabled
<miconda> at compile time
<oej> Yes, but I realize it's not depending on external libs
<pdunkley> oej: perhaps removal of old flags and consideration over whether
		   any should be runtime in configuration file should be done during
		   any code re-structuring?
<pdunkley> Presumably the Makefiles will need a lot of changing then anyway?
																	    [16:12]
<miconda> but making it a runtime option might not be useful (we need to force
		  people use ipv6, so we compile it by default, if they don't want
		  ipv6, they need to work a bit)

************************* TLS COMPILE SETTINGS ******************************

<miconda> for TLS, I don't have proper knowledge at this moment  [16:13]
<miconda> have to check the makefiles, but iirc, Andrei left an option to link
		  the tls module statically to the core
<miconda> might be that one ...
<oej> I don't know if all the TLS flags are valid since we made it a
	  module. Is it possible to compile TLS without the module?  [16:14]
<miconda> afaik, there is no tls code in the core
<miconda> but core may be able to link the files from the module when this
		  core-tls option is enabled  [16:15]
<oej> Well, as I said in a private conversation, I think it's time for some
	  TLS work later.
<oej> :-)
<oej> Mainly, I want to be able to verify the connection properties on
	  outbound TLS connections BEFORE we send the first message.
<oej> If I'm not happy with the cert I should be able to close connection and
	  fail.  [16:16]
<pdunkley> Didn't you promise me a beer for something related to that ;-)
<oej> I think I already owe you a few beers, so let's add one for that...
<miconda> guess beer will come if this will be added
<miconda> ... now waiting for the commit notification
<oej> We need beer-bountys.  [16:17]
<pdunkley> I just suggested how it might be done.  oej said he had an idea as
		   to who might do it...
<miconda> oej: yes, some work to refresh a bit tls options and flexibility
		  won't harm at all

***************** OPENSSL VS GNUTLS ISSUE **********************************

<linuxmaniac> BTW: can you remind me what the problem is about openssl vs
			  gnutls
<oej> The difference is that OpenSSL is poorly documented. And so is gnuTLS
	  too?  [16:18]
<oej> Not much of a difference ;-)
<miconda> linuxmaniac: implementation, Andrei Pelinescu-Onciul went with
		  openssl back in 2004 or so
<miconda> but if someone wants to do a gnutls alternative, it is ok  [16:19]
<miconda> might be easier now as it is just a module, thus it needs to
		  implement an interface
<oej> OpenSSL licensing has issues with GNU GPL if I remember correctly. But
	  GNUtls had more issues with Tekelec commercial side I guess.
<oej> Are all parts that use TLS using the module?  [16:20]
<oej> What if LDAP or Postgres use TLS?
<pdunkley> There are other modules that use OpenSSL too.
<miconda> in the past it was in the core, an alternative would have been a
		  bigger problem
<linuxmaniac> so maybe we can create a gnutls module?
<oej> That could lead to SSL initialization issues. Kevin solved that with a
	  libwrapper in Asterisk.
<miconda> no, each module links by its own
<pdunkley> outbound, websocket, and stun all link to it to make use of some of
		   the encryption and hashing functions.
<pdunkley> Not going to be easy to use OpenSSL or GnuTLS and still have all
		   the features available in both.  [16:21]
<oej> So each module initializes. As long as they are in different processes I
	  don't think that's a problem. But if a core process uses SIP/TLS and
	  PGSql with TLS we're in trouble.
<miconda> oej: not sure tekelec ever used tls, such transport layer hasn't
		  penetrated carriers :-)
<pdunkley> Used to be that you had to link core with OpenSSL for STUN.
<pdunkley> Shouldn't need to at all now that STUN is in its own module.
<oej> Well, we have no bug reports on OpenSSL init problems, so let's let that
	  sleep.  [16:22]
<oej> linuxmaniac: What would then be the benefit of GNUtls?
<miconda> oej: there were some problems in the past
<oej> Ok, waking up.
<miconda> we (I :-) ) solved it somehow
<miconda> need to check the logs
<linuxmaniac> oej: be able to package a kamailio-tls for debian :-P  [16:23]
<oej> In Asterisk we had problems with 3rd party libraries using OpenSSL at
	  the same time as the core process.
<miconda> i think we initialize tls first time, via mod_register()
<oej> linuxmaniac: Sorry, haven't followed that thread. Why is it not possible
	  today?
<miconda> then the rest of the modules
<pdunkley> linuxmaniac, oej: and package outbound, websocket, and stun for
		   Debian too
<linuxmaniac> pdunkley: sure  [16:24]
<oej> We should make the interfaces that these modules use available through
	  the TLS module.
<pdunkley> oej: it's a bit of a heavy-weight solution putting interfaces into
		   TLS.  These other modules just use a couple of hashing and
		   encryption functions.  [16:25]
<oej> Like?  [16:26]
<pdunkley> Especially if you want to make those interfaces generic enough that
		   other libraries could be used.  You'll end up with all sorts of
		   extra data copying and munging to get it to fit.
<miconda> in short: gpl claims openssl license is not compatible, and needs an
		  exception from developers
<pdunkley> Just stuff like HMAC-SHA1  [16:27]
<miconda> what is not clear for me: the exception must be given by all
		  developers of kamailio, or just by developers of modules linking to
		  openssl
<oej> Yes, we did that in Asterisk. I believe those modules are part of deb
	  packages
<pdunkley> Literally a few single function calls without any related state in
		   OpenSSL.
<miconda> if only developers of the modules, then should be easy
<miconda> at least pdunkley can do it for his modules
<oej> I think it just has to be in the project docs
<oej> I've never given a personal statement in regards of this in Asterisk.
																	    [16:28]
<linuxmaniac> http://lists.debian.org/debian-legal/2004/05/msg00595.html
<oej> It's in our LICENSE file for Asterisk  [16:29]
<oej> "Specific permission is also granted to link Asterisk with OpenSSL,
	  OpenH323
<oej> and/or the UW IMAP Toolkit and distribute the resulting binary files."
<linuxmaniac> I have to ask for advice on debian-legal  [16:30]
<vicenteH> if a module or library is under GPL everything which links to it
		   should be GPL. 
<oej> I think we just have to mail sr-dev and ask if any of the developers
	  object to adding a text like this (or the one in the mail) to our
	  LICENSE
<vicenteH> that is why they invented LGPL
<vicenteH> I mean GPL or GPL compatible; anything linking in should be GPL.  [16:31]
<miconda> oej: we can try that
<oej> The mail you linked to is very clear, I don't think we have to ask.
<oej> We just ask everyone on sr-dev if it's ok, add a LICENSE.OpenSSL to the
	  git repo and the tar.gz files
<linuxmaniac> this mail is from 2004
<oej> I suggest we make a decision here to move forward with this process.
																	    [16:32]
<oej> Yes, but we also have Asterisk that did this and is accepted by Debian.
<linuxmaniac> oej: ok
<pdunkley> Given that many other modules link to lots of other libraries are
		   there other examples of this that should be handled at the same
		   time?
<oej> Have we done a license review lately?  [16:33]
<oej> There was a library I asked about a number of times on the list, but got
	  no answer. I don't remember what it was now.
<oej> Libunistring should be no problem.
<oej> There was a change in memcache  [16:34]
<oej> Chairman:
<pdunkley> Just thinking that, instead of asking on the list about OpenSSL,
		   there should be one email covering all module/license related
		   issues.  Get it sorted in one go.
<oej> The only one we are aware of is OpenSSL
<miconda> yes, memcache is using a different lib now, from what I could
		  understand from Henning's commits
<pdunkley> There's a number of recent modules doing json and other things too.
<miconda> ok .. so we can discuss on sr-dev what to do
<miconda> review and ask devs
<oej> Let's try to make a wiki page with all third party libraries and their
	  licenses and package names in centos and debian  [16:36]
<miconda> oej: yes, we should do that
<oej> If I understand right we have two proposals:
<oej> 1. Start a process of adding an OpenSSL exception to our LICENSE  [16:37]
<oej> 2. Create a wiki page to list all possible 3rd party library
	  dependencies - name, URL, license and various package names
<oej> DId I forget anything proposed?
<pdunkley> As part of 2 make sure there are no other exceptions needed in
		   LICENSE  [16:38]
<oej> Right!
<oej> Or accidents that may not be compatible with our GNU license.
<pdunkley> 3. Create a mechanism to appropriately chastise developers who
		   add/change 3rd party library dependencies and don't update the wiki
																	    [16:39]
<oej> Have them buy a round of beer at next dev meeting.
<oej> I would suggest
<miconda> :-)
<miconda> ok ? next topic?
<oej> 3. Add a process for reviewing license of 3rd party libraries BEFORE you
	  start using it in a module.
<pdunkley> That'll really cramp my hacking at 2am style of working ;-)  [16:40]
<oej> Oh, I'm sure that there are other developers awake in some weird time
	  zone, like Texas.

******* TM MODULE - FAILURE ROUTES ON DELIVERY ERRORS *************************

<miconda> ok -- on to: tm module
<miconda> failure routes on delivery errors  [16:41]
<miconda> practically, related to tcp/tls
<miconda> when the connection does not exist and cannot be created
<miconda> now t_relay() returns error
<miconda> some people want to trigger failure route
<jh__> yes, it would greatly simplify config  [16:42]
<miconda> especially the new one added by hughw - branch failure route
<jh__> yes, at this point that would be most important to me
<miconda> bits to clarify: 1) should t_relay() return true in this case?
																	    [16:43]
<pdunkley> Although, if all the branches fail in that way presumably you'd
		   need failure_route to trigger as well?
<miconda> 2) what return code should be generated?
<oej> Is this also where Kamailio use some non-standard SIP response codes?
																	    [16:44]
<jh__> i would say that return code is 408
<miconda> we patch the standard, if needed :-)
<oej> To add to the soup
<hughw> If a request is parallel forked, but only one of the branches fails to
		make a connection, the entire t_relay() call is still a success.
<oej> What kind of response codes should T_relay give when we connect and
	  can't accept a TLS certificate?
<oej> We probably need to overhaul or add a more detailed response code in
	  addition to the return value.  [16:45]
<jh__> if some parallel branch succeeds then t_relay() succeeds too.
<oej> Right. So a code we can read in branch_failure_route.  [16:46]
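
For reference, a minimal sketch of using the branch failure route hughw added, assuming a hypothetical route name and backup destination; whether a failed TCP/TLS delivery should end up here at all is exactly the open question above.

    request_route {
        ...
        # arm the branch failure route before relaying
        t_on_branch_failure("reconnect");
        t_relay();
    }

    event_route[tm:branch-failure:reconnect] {
        xlog("branch to $du failed, status is $T_reply_code\n");
        if (t_check_status("408")) {
            # hypothetical retry over a backup path
            $du = "sip:backup.example.com;transport=tcp";
            t_relay();
        }
    }
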
<miconda> jh__: usually t_relay() does not wait for branch completion
<eZz> hello
<jh__> if udp destination is unreachable 408 is returned after a while.  in
	   case of tcp 408 should be returned immediately
<jh__> in case of wrong tls cert, i guess some other response code should be
	   invented unless there is a standard one (perhaps not acceptable here)
																	    [16:48]
<pdunkley> 488 relates to SDP.  Probably going to cause problems if it is
		   reused for other things like TLS.
<oej> That's a 408, but a different one. We need to add warning codes
<miconda> 408a?!?  [16:49]
<oej> Warning/reason headers something.
<oej> Like the 488 has a reason for IPv6
<miconda> perhaps we can make a mod param for it ...
<pdunkley> Create a Kamailio class of responses 7xx :-)  Just make sure none
		   of them leak out to the real world.
<oej> I would suggest reason codes
<miconda> 9xx :-)  [16:50]
<oej> 7xx was reserved for geoloc stuff a while ago
<miconda> 7xx is taken by siemens, iirc
<pdunkley> What a bunch of hackers
<jh__> new class for local codes is ok if they don't leak which can be
	   prevented in config
<miconda> ohh, its quite crowded in 7xx room :-)
<miconda> the idea from oej might be good  [16:51]
<miconda> a pv that gives independent codes
<miconda> that can be used for config logic
<oej>
	  http://www.iana.org/assignments/sip-parameters/sip-parameters.xml#sip-parameters-5
<oej> These are additional information to response codes
<oej> There are none for TLS, but there should be
<jh__> yes, for tls failure there definitely should be an ietf assigned code
																	    [16:52]
<oej> There are 380 and 381 for SIPS, but nothing for TLS errors
<oej> I've discussed that at a number of SIPits but it boils down to "so go
	  write a draft".
<oej> I have a number of drafts waiting to be written.  [16:53]
<oej> Anyhow, I think this is the way to go, that we have a PV for warning
	  codes
<oej> We should also be able to add them to responses
<oej> I wonder why all are in the 3xx class
<miconda> oej: sounds reasonable
<oej> "A first digit of "3"  [16:54]
<oej>    indicates warnings specific to SIP. "
<oej> From RFC 3261  [16:55]
<jh__> once the response code is selected, is it difficult to get branch
	   failure route triggered from t_fwd?
<oej> So for really private ones, we should be able to use 9xx
<oej> And don't let them slip outside.
<miconda> anything else for this topic?  [16:57]
<miconda> - sip code configurable
<miconda> - internal warning code via pv
<hughw> If a failure to deliver is detected (either failed TCP or invalid TLS)
		we still need to send a response back to the previous hop. If a
		response goes through branch_failure and failure, it may be sent back
		to the caller.
<pdunkley> If someone creates those routes they should be presumed to know
		   enough to catch and drop stuff that should be kept local (or
		   convert it to something OK to send outside).  [16:58]
<hughw> At the moment we have checks such as t_branch_timeout() and others to
		find out if this was locally generated.
<jh__> now out goes 477 which is not standard
<oej> I think those private response codes needs to go and be replaced by
	  standard + warning code. If there's not a generic warning code, use one
	  in the 9xx class.  [16:59]
<oej> I might need to use that between two Kamailio's to know what's going on,
	  but that's something I will have to code.
<oej> Then we can create a function in tmx called
	  add_warning_code_header(code, "text")  [17:00]
<hughw> That sounds good to me
<oej> I think warning codes are cool :-)
<pdunkley> That'll be why no-one uses them.  [17:01]
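
The add_warning_code_header() function mentioned above does not exist yet; purely to illustrate the idea, a sketch of how a standard response plus a Warning header (399 is the generic "miscellaneous warning" code in RFC 3261) might be emitted, using the hypothetical function name from the discussion.

    failure_route[TLS_FAIL] {
        # hypothetical tmx helper proposed above -- not implemented
        add_warning_code_header("399", "TLS certificate verification failed");
        t_reply("408", "Request Timeout");
    }
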

************ DIALOG VS DIALOG_NG **********************************************

<miconda> ok -- next topic
<miconda> dialog vs dialog_ng
<miconda> not sure any of dialog_ng devs is here?!?!  [17:02]
<miconda> if not, we can take it to sr-dev
<miconda> I wanted to know how far dialog_ng implemented what's on the wiki
		  for the new dialog
<miconda> it's a bit confusing to have two of them, if are not compatible
																	    [17:03]
<oej> agree.
<miconda> i would rather use a different name like ims_dialog if it is
		  strictly built for ims modules
<oej> if dialog_ng wants to be "next generation" they have to work to complete
	  it as a full replacement.
<miconda> oej: yes
<oej> Otherwise, let's rename it to ims_dialog and "backport" stuff to dialog
	  if needed  [17:04]
<miconda> but if it's too ims specific, it might be heavy to use it on casual
		  cases...
<oej> yes, let's challenge them on the mailing list :-)


************** SOURCE TREE RESTRUCTURING **************************************

<miconda> ok
<miconda> next topic: source tree restructuring
<oej> move the core code to /xmpp to hide it from new developers.  [17:05]
<miconda> there were voices about grouping core code in a dedicated dir
<oej> Sorry. Could not resist.
<miconda> :-)
<jh__> i would prefer a shallow structure, i.e. core at root
<miconda> oej: even google is dropping xmpp :-D
<oej> I think it's hard to find text docs and examples since everything is in
	  one big messy dir
<linuxmaniac> +1 for core dir  [17:06]
<oej> I want people to read all the docs we have, make it more visible
<oej> so move to /core or /src - but move all code away from /
<miconda> i would rather move all code in src/
<miconda> with core/ lib/ modules/ and tools/ there
<miconda> tools being parts of utils dir now
<oej> "there" being where?
<miconda> there being src
<miconda> like: src/core/...  [17:07]
<miconda> src/modules/...
<oej> +1 from Sweden
<oej> (made myself Sweden's representative in the kingdom of Kamailio)
<jh__> src/core means more writing or clicking when moving around
<miconda> in tools will be what is strictly related to kamailio operations
		  (kamctl and sercmd now)  [17:08]
<oej> Any other options?
<pdunkley> I like the idea of src/core
<pdunkley> Maybe a few more clicks, but it makes the whole tree much neater.
<jh__> fine, it is not a big  issue to me as long as *.c files don't appear at
	   root  [17:09]
<miconda> it's not going to be an easy task, a lot of #includes have to be
		  updated
<miconda> not to say about Makefiles from core
<jh__> perhaps a script could be written to automate the update
<miconda> with src we can probably move main makefiles in src
<miconda> leaving in the root dir a simple one that invokes commands in src
																	    [17:10]
<oej> Should the include files in / that other modules use be moved to
	  core/include when we're up to cleaning ?
<eZz> btw, I have a question regarding to list of modules and dependencies, is
	  there any easy way to collect them instead of parsing Makefile's ? I'd
	  like to build ncurses-based gui tool to choose a modules instead of cli
<oej> eZz: not on this part of the agenda. Pls stand by  [17:11]
<oej> We'll discuss that later.
<miconda> eZz: perhaps we can add some specs for such needs
<miconda> let's discuss a bit later
<eZz> ok, thanks, will wait
<oej> So where are we?  [17:12]
<oej> Seems like there are no objections about moving all the source to /src
<miconda> we have ./src/*... vs ./core
<oej> and move the .[ch] files in the current root directory to /src/core
<oej> Time to speak up now, fellow developers!  [17:13]
<pdunkley> I like it
<mgw> this is related to the src tree: would there be an advantage to putting
	  modules in their own repos and then use git submodules?
<torrey> but have includes in the same or separate directory?
<oej> mgw: that would be awful when doing developer branches, wouldn't it.
<oej> torrey: That's what I asked - is that a good idea?
<torrey> might make grepping the api a bit easier...  [17:14]
<oej> That's what I thought too. Anyone else having opinions?
<pdunkley> oej: keeping core (and lib) includes that modules use in a separate
		   directory would be nice.
<oej> The lib include files still are in their own directories  [17:15]
<oej> Or?
<pdunkley> Would mean that directory could be added to the include path and
		   just use #include <---> instead of #include "../../---"
<oej> Since we have to change a *lot* of include directives we might as well
	  do this in my opinion
<mgw> oej: would it? I guess it might be hard to keep the submodules pinned
	  correctly.
<pdunkley> This is where sed might come in handy...  Any reg-ex experts
		   around?
<oej> mgw: There are a lot of module interdependencies
<torrey> sounds good, and if they ever move again, fixing will just be a
		 makefile change
<admorten> +1  [17:16]
<pdunkley> And a top-level included Makefile change at that.
<oej> miconda: any opinion on include files?
<miconda> I don't have anything against collecting in a folder
<miconda> but it has to have some hierarchy inside it
<oej> Ok.  [17:17]
<miconda> like mem/ , parser/ ...
<oej> So I guess we need to book an evening or time when we can have a commit
	  war and make this happen.
<oej> Or open a shared branch to play with
<oej> Because things will be broken while this is going on
<miconda> yes, we have to plan that ...  [17:18]
<miconda> I guess we have to start a wiki on this one as well
<miconda> play separately with branches to build migration tools
<oej> good idea
<miconda> i.e., to allow time for pdunkley to learn regexps :-)
<oej> Seems like we have reached a general agreement on the direction on
	  this. We can hash details out on sr-dev  [17:19]
<pdunkley> pdunkley: is already planning a holiday for the time this is
		   happening.
<oej> pdunkley: Oreilly has a great regexp book. I understood most of the
	  first couple of chapters. Then I got lost.
<miconda> ok, let's aim to collect some sed/perl commands that will do most
		  of the job  [17:20]
<miconda> then plan a hacking day/night for it where at least some devs are
		  available
<miconda> where 'some devs' mean more than 1 :-)
<oej> we need pdunkley's holiday planning too.  [17:21]
<pdunkley> I'm travelling a lot over the next few months, but if I am around I
		   will join in.
<miconda> ok ? going to next topic?
<pdunkley> Fine with me
<miconda> ohh .. i'm done, @pdunkley's turn now
<pdunkley> OK
  [17:22]


************** OUTBOUND/GRUU **************************************************

<pdunkley> Outbound/GRUU.  Does anything more need to be done?  If so, does anyone want
		   to do it?
<jh__> i'm happy with the current capabilities except for the missing branch
	   failure stuff we already discussed about  [17:23]
<pdunkley> oej?
<oej> I'm not really up to date.
<oej> I would like to "unhide" gruu's. But that's not a dependency on
	  outbound.
<pdunkley> http://www.kamailio.org/wiki/devel/completing_outbound
<oej> I never like stuff that happens like magic. If a gruu is a r-uri I want
	  to handle that as a gruu in the config script ;-)  [17:24]
<miconda> pdunkley: in single server mode, is there any path?
<pdunkley> I'd like to tidy up the single-server stuff so that received
		   parameters and AVPs aren't needed (unless you want them).  But it
		   isn't something I can prioritise now.
<pdunkley> No Path: in a single server.
<jh__> i don't use path in single server mode
<oej> But that's tidying up. What's needed to make it work (tm)
<miconda> oej: there is a function _is_gruu() to detect such cases
<pdunkley> That's one of the things that needs sorted to do it properly.
																	    [17:25]
<oej> miconda: But how are gruu's resolved into contacts?
<jh__> by lookup()
<miconda> if(is_gruu) lookup(?)
<jh__> yes
<oej> But the gruu's aren't visible in the location table - are they?
<jh__> yes they are  [17:26]
<pdunkley> Worth doing a myself check at the same time as is_gruu() or you end
		   up trying to resolve a non-local gruu.
<miconda> pdunkley: i'm confused about first bullet regarding registrar module
		  at: http://www.kamailio.org/wiki/devel/completing_outbound
<oej> ok. I rest my case until further investigation ;-)
<jh__> yes, the is_gruu test is done only when there are no more entries left in
	   route set after loose_route()
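
A minimal sketch of the check described here, assuming is_gruu() from the siputils module and lookup() from registrar; the myself test pdunkley suggests keeps non-local GRUUs from being resolved locally.

    if (!loose_route()) {
        if (uri == myself && is_gruu()) {
            # resolve a GRUU belonging to this server into its contact
            if (!lookup("location")) {
                sl_send_reply("404", "Not Found");
                exit;
            }
        }
    }
    t_relay();
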
<pdunkley> My thoughts were to fudge an entry into the location Path field and
		   then make registrar spot that the Route: it added was local and
		   effectively loose_route() there and then.
<jh__> my tests are working fine without anything extra in route header uris
																	    [17:28]
<miconda> if there is a path header, registrar will add it as route header
<miconda> but you want to be immediately visible?
<jh__> yes, and the path uri will point to ob proxy  [17:29]
<miconda> registrar will set dst_uri to the first address in path
<miconda> jh__: but isn't this the case for single server, no ob proxy at all?
<jh__> in single server case, there is no path header
<pdunkley> Basically, in single server mode (using outbound instead of
		   received parameters and AVPs) you need some way to get the
		   flow-token into the location table.  [17:30]
<pdunkley> Once that flow token is in the location table you need to use it to
		   set $du, not the Contact: address.  Although the Contact: address
		   still becomes the R-URI after lookup().
<oej> So adding a path to token at localhost is a good call
<pdunkley> oej: yes
<miconda> ok, i'll give it some thoughts  [17:31]
<pdunkley> But to avoid Kamailio wasting effort looping a request back to
		   itself, you want to detect that the Route: added is indeed local,
		   remove it, and set $du after lookup().
<miconda> now we use received to keep the src address
<jh__> if you do that, make it configurable, since i don't like flow tokens
	   in the uris.  as i have explained, they break gruu.
<miconda> might be the right place to store token at localhost
<miconda> received field is used only for natted devices  [17:32]
<pdunkley> I use an edge proxy at the moment, so I've not had to get
		   single-server outbound working properly.
<miconda> ok ? so now is a bit more clear
<miconda> we can move to next topic ...  [17:33]
<jh__> in my test i have two proxies where each acts as ob and registrar
<oej> How does flow tokens break gruu?
<jh__> if contact uri changes, flow leads to dead end
<oej> That's a 431 right  [17:34]
<pdunkley> 430 is flow broken
<oej> Right
<oej> SO branch_failure can take care of that.
<pdunkley> It's basically the thread on dispatch at ietf.org .  Probably best to
		   leave the debate on there.  [17:35]
<jh__> but since route set cannot change during the dialog, there is nothing
	   you can do to fix the flow token
<jh__> ok, let's leave the topic.  as i said, i'm currently fine with what peter has
	   implemented - branch failure  [17:36]
<pdunkley> hughw gets the credit for that one.
<pdunkley> Next topic?  [17:37]
* oej gives hughw lots of credit
<hughw> Thanks :)
<oej> Next topic!
<pdunkley> oej: hughw doesn't drink beer
<oej> Oh, we can fix that too :-)


******************* MSRP ******************************************************

<pdunkley> MSRP: http://www.kamailio.org/wiki/devel/completing_msrp  [17:38]
<pdunkley> msrp
<pdunkley> Couple of things missing from the current MSRP Relay implementation
<pdunkley> MSRP replies are hop-by-hop.  So if there is a failure in a relay a
		   REPORT needs to be generated and sent back (if the SEND indicates
		   that one is needed).  [17:39]
<miconda> for report, might be easier not to do it by storing required info
		  inside the cmap
<pdunkley> MSRP relays can receive infinitely large messages over TCP.  A
		   relay needs to be able to handle this.  On Kamailio we wait until
		   we've received the entire message.  [17:40]
<miconda> is the report hop by hop as well?
<pdunkley> REPORT is end-to-end
<miconda> ok, so practically we need to store the headers  [17:41]
<pdunkley> We need to be able to chunk large SENDs into multiple smaller ones.
		   So we receive a huge SEND in and send lots of small ones out (that
		   have the same message ID).
<pdunkley> Yes.
<miconda> until we get the reply
<miconda> if the reply is ok, then drop the headers
<pdunkley> I think the solution to the REPORT and the SEND chunking are
		   related.
<pdunkley> We need some transaction state for SENDs that are bigger than the
		   receive buffer and for ones which may need a report generated.
<miconda> isn't there any option to tell the sender to use smaller chunks?  [17:42]
<miconda> just as quick option ...
<pdunkley> This will allow the REPORT to be generated if there is no reply or
		   if the reply is a failure one.  Also allows us to add the headers
		   to the subsequent chunks.
<pdunkley> miconda: unfortunately not.  MSRP relies entirely on TCP-level flow
		   control.
<miconda> ok
<pdunkley> When the window is full stop sending, but it is still the same
		   message.
<pdunkley> The Crocodile MSRP stack (in Javascript) does chunking itself to
		   work around this for now, but other clients don't (as they don't
		   have to).
<pdunkley> I've started (and stopped) work on this a few times.  But never got
		   as far as having anything remotely working (or code worth keeping).
																	    [17:44]
<pdunkley> There is some very broken stuff in a branch where I was trying to
		   add a transaction map to the msrp module.
<miconda> so chunking, it has to change byte-range  [17:45]
<pdunkley> I'll get back to it at some point - just wanted to see if anyone
		   could help.
<miconda> anything else? or the other headers stay the same?
<pdunkley> The rest of the headers are the same.
<oej> Which memory is used for caching here?
<miconda> my problem with this extension is lack of clients
<oej> If I send a 4mb video - where does it end up before you start chunking
	  away?
<oej> Like a tour of the new Abba museum in Stockholm  [17:46]
<pdunkley> If any chunk gets an error response from the next-hop then you send
		   a failure REPORT back (if they are turned on for the SEND) and an
		   immediate failure reply too.  That should stop the sender
		   continuing.
<miconda> ok ? we can discuss on sr-dev when someone has time to start, just
		  shout there  [17:47]
<pdunkley> oej: Kamailio shouldn't receive the whole 4Mb file before it starts
		   chunking.  If the Kamailio TCP buffer is 16 kb then you should
		   chunk and send out the chunks at less than 16 kb intervals so as
		   not to overflow the buffer.
<miconda> i may look at report generation  [17:48]
<pdunkley> Basically, you relay stuff as quickly as you can.
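
For context, a rough sketch of the relay-as-fast-as-possible flow in the msrp module; the function names (msrp_is_reply(), msrp_cmap_lookup(), msrp_relay(), msrp_reply()) are recalled from the module and should be checked against its README.

    event_route[msrp:frame-in] {
        if (msrp_is_reply()) {
            # replies are hop-by-hop: just pass them along
            msrp_relay();
            exit;
        }
        # requests: find the peer connection in the cmap and forward
        if (msrp_cmap_lookup()) {
            msrp_relay();
        } else {
            msrp_reply("481", "Session-does-not-exist");
        }
    }
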
<pdunkley> Next?
<miconda> yes


******************** RTCWEB_BREAKER *******************************************

<pdunkley> http://www.kamailio.org/wiki/devel/rtcweb_breaker
<pdunkley> This is something else I hope to start soon.  But any help would be
		   appreciated.  [17:49]
<pdunkley> I think we need a simple way to connect WebRTC and non-WebRTC
		   sessions without a B2BUA or media server.
<oej> What about ICE and DTLS?
<torrey> RTCWebBreaker is an rtp-relay that can DE-Ice the stream if needed?
																	    [17:50]
<osas> it's a breaker
<pdunkley> That's the point.  It's a bit like an RTP proxy that decrypts DTLS
		   and does ice-lite on one side and has ordinary RTP on the other.
<osas> it must do transcoding
<osas> rtp/savpf to whatever else  [17:51]
<pdunkley> It doesn't need to change the actual content.
<osas> not media transcoding
<pdunkley> Just the container for the content.
<oej> It's a huge effort to get it right.
<pdunkley> Yes, but I think it is needed.  At the moment you have to use media
		   gateways that B2BUA the signalling.
<osas> I would expect that in the future more and more clients will support
	   RTP/SAVPF
<oej> But you would have to do huge SDP rewrites too
<pdunkley> oej: yes  [17:52]
<oej> And somehow fake ICE.
<oej> The side without ICE will be confused about re-invites coming
<pdunkley> You don't have to add STUN or TURN candidates yourself though.
<pdunkley> Local candidate lines and STUN pinging  (send to candidates you've
		   received and accept for local candidates) is needed.  [17:53]
<oej> I think it's crazy, but crazy stuff is cool. Go for it.
<pdunkley> Alternative is to keep using media servers and lose a lot of
		   signalling.
<pdunkley> Or wait until all the clients and servers (and interconnects)
		   already out there support RTP/SAVPF.  That should happen later this
		   century.  [17:54]
<oej> We don't really know how the DTLS certs will be handled
<oej> But yes, this seems like an easy upgrade path. I'm just worried that you
	  will have to build a b2bua in the end
<pdunkley> No-one does yet.
<osas> how much traffic do you expect to have using RTP/SAVPF?  [17:55]
<pdunkley> On the cloud system I am building that uses WebRTC... All of it.
<osas> the question here (I think) is the return investment
<osas> well, in that case, is it worth implementing something like this
<pdunkley> Asterisk, Freeswitch, and Doubango, all have media stacks that do
		   the ICE-lite and RTP/SAVPF (as far as it is specified now).
																	    [17:56]
<osas> it all depends how slow/fast the world wide adoption of RTP/SAVPF will
	   go
<pdunkley> Probably slower than IPv6 adoption.
<miconda> iirc, I heard sipwise wanted to extend their media proxy to do some
		  ice and more on rtp
<osas> if more and more clients will support it, then it's not worth
	   implementing it  [17:57]
<miconda> but not sure if this is in their plans, Andreas was here earlier,
		  but no longer now
<oej> Who said IPv6?
<pdunkley> How many ITSPs are there?  How long will it take even a small
		   percentage of them to upgrade?  Plus people running old or bespoke
		   client versions.
<oej> I think you should discuss with Andreas if this can be part of his
	  project too. Why write more RTP proxies than we already have?  [17:58]
<jh__> yes, it will take forever before pstn gw vendors support rtp/savpf
<pdunkley> Yup.  And how transparent is the signalling through Asterisk (oej
		   and I talked about this earlier).
<oej> Well, after ten years most SIP vendors fail to understand TLS
	  correctly. It will take some time to get DTLS and SRTP
<oej> Oh, as long as Asterisk has ISDN in the core, it's gonna be messy.
<miconda> pdunkley: any lib that can be used for de-dtls-izing the rtp?
<pdunkley> OK.  I'll talk to sipwise and update sr-dev if/when this work gets
		   underway.
<oej> we do need a more SIP-centric b2bua pbx
<pdunkley> miconda: I know that Asterisk uses an SRTP library.
<miconda> it might be easy to link rtpproxy for it
<jh__> srtp is good for nothing in terms of security.  zrtp would be needed,
	   but for some reason that is not part of webrtc.
<oej> DTLS is supported by OpenSSL and is just the key exchange  [18:00]
<pdunkley> http://srtp.sourceforge.net
<oej> then the key is used with plain SRTP
<oej> People love and hate that library.
<miconda> it was a long discussion on ml about sipwise media proxy, where I
		  think I read about their plans with ice
<miconda> because they pass the entire sdp back and forth
<pdunkley> OK.  I'll get in touch with Andreas about this.
<miconda> or at kamailio world discussion ? not sure, but somehow was in my
		  mind
<miconda> we can discuss on mailing list, indeed  [18:01]
<miconda> everyone is there
<miconda> ok


******************* PRESENCE **************************************************

<pdunkley> Next presence.
<pdunkley> Nothing's changed since last time, but just wanted to see if there
		   is anything on the list classed as urgent?
<miconda> I may look at xcap-diff  [18:02]
<jh__> external references to external servers would be nice to have
	   supported
<pdunkley> http://www.kamailio.org/wiki/devel/completing_presence
<miconda> at some point :-), hopefully soon
<hughw> [ the rtpproxy-ng dev page on the wiki has the previous enhancements
		listed ]
<hughw> http://www.kamailio.org/wiki/devel/rtpproxy-ng
<pdunkley> jh__: think that becomes easier when you have a proper XCAP client
		   and server working.
<pdunkley> Again, not something I can spend a lot of time on now, but I'd love
		   to get Kamailio presence and XCAP working with Blink.  [18:03]
<jh__> how about notifiers.  is postgres still needed or can mysql support it
	   too?  [18:04]
<pdunkley> It seems to be the most complete presence client now.
<pdunkley> There was some MySQL work done recently.  Don't know if it is complete
		   though.  oej?
<miconda> pdunkley: not a trusted source, though
<jh__> i have tested blink and found several bugs in its presence
	   implementation  [18:05]
<pdunkley> miconda: but there isn't a huge choice out there.  Unless someone
		   here is writing a client.
<jh__> i'm still waiting for comments on my latest show stopper bug when testing
	   with k
<pdunkley> Kamailio bug or Blink bug?
<oej> We did add transactions to Mysql. I need to check the second part, but I
	  remember seeing code.  [18:06]
<jh__> unfortunately i have not found any working presence/xcap client yet
<miconda> jh__: any bug in presence modules of k, or just missing features?
<oej> All presence implementations have bugs - and they don't work together.
<jh__> blink bugs or actually bugs in python-sipsimple
<miconda> xquery is used somewhere?  [18:07]
<jh__> i found the escape bugs and fixed them.  after that i found blink bugs
	   (two of them) and waiting for them to be fixed before i'm able to
	   continue
<miconda> or required by clients to work with the server?
<pdunkley> It's used to find contacts to add to your contact-list.
<pdunkley> It's an OMA extension.  [18:08]
<pdunkley> It gives you a Skype-like search feature.
<miconda> OMA has a lot of extensions  [18:09]
<miconda> but no operators ever tried to deploy :-)
<oej> If we continue down the SIMPLE path we can discuss forever...
<pdunkley> I know of at least one deployment with XQuery...
<miconda> ok  [18:10]
<pdunkley> But it isn't an implementation that I can currently share.
<miconda> so, let's see what can be done on this devel cycle
<miconda> as i said, i'll look at xcap diff
<miconda> on xcap_server
<jh__> good to look at xcap-diff.  [18:11]
<pdunkley> Great.
<pdunkley> Second last of mine.
<miconda> and the rest as I find resources ...
<miconda> anything else or move to next topic?
<pdunkley> Doing a lot of work on Amazon right now.  Planning to add some
		   stuff to support various AWS DB services and things.  Again, just
		   wondered if anyone else is interested in this stuff?  [18:12]
<miconda> I never interacted with
<oej> I think it's a good story.
<osas> if you have them and you want to publish them, why not
<pdunkley> Not done them yet.
<osas> you will get more feedback and more testers :)  [18:13]
<pdunkley> Again, just asking as if anyone else wants it and can help it'd be
		   appreciated and probably done faster.
<osas> ask on ml (more audience)


**************** EVENT ROUTE FIRED WHEN A TCP CONNECTION CLOSES ***************

<pdunkley> Last item of mine.  [18:14]
<pdunkley> Added it because oej is here.
<oej> he he
<pdunkley> An event_route that fires when a TCP connection closes.
<miconda> for such things I am more customer driven ... it might get to the top
		  unexpectedly quickly, or never :-)
<jh__> sounds like a good thing to have in order to be able to unregister
	   contacts
<miconda> this is something I keep remembering I have to look at  [18:15]
<oej> :-)
<oej> Good thing I keep all of you on your toes
<pdunkley> I tried adding it.  The event_route code is trivial.
<pdunkley> Just couldn't work out where to trigger it from.
<oej> The question is how to present any data?
<oej> Which connection got closed?
<pdunkley> I was hoping there'd be a nice, single place in TCP to do it.
<miconda> probably connection id and local/remote sockets
<pdunkley> Same as I did with the WebSocket close event.  [18:16]
<oej> Is it related to something - an outbound flow, another SIP proxy, a TLS
	  connection?
<pdunkley> Fill in the receive_info details and stuff like $si and $sp work.
<oej> If it's connected to an outbound flow, we need the RUID
<pdunkley> And the new $conid PV I just added does too.
<oej> can that lead to ruid?
<pdunkley> If you want the RUID store it in an htable that is mapped on
		   $si:$sp.  [18:17]
<miconda> oej: perhaps the safest for usrloc record removing is to do it in
		  the module
<pdunkley> It's what I do.
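
A small sketch of the htable mapping pdunkley describes, plus what the event route under discussion could look like; the event name tcp:closed is hypothetical here, it did not exist at the time, and $ruid is assumed to be populated by save().

    modparam("htable", "htable", "flows=>size=8;autoexpire=3600;")

    # at registration time, remember which usrloc record came over this flow
    if (is_method("REGISTER")) {
        save("location");
        $sht(flows=>$si:$sp) = $ruid;
    }

    # hypothetical event route, name assumed
    event_route[tcp:closed] {
        if ($sht(flows=>$si:$sp) != $null) {
            xlog("flow from $si:$sp closed, ruid $sht(flows=>$si:$sp)\n");
            $sht(flows=>$si:$sp) = $null;
        }
    }
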
<oej> If an edge proxy drops a connection to a UA is one case
<miconda> nathelper can do that now for udp contacts that are behind nat
<oej> if a core proxy drops a connection to an edge proxy - multiplexed and
	  used for many flows - what do we do then?
<miconda> if they don't respond on options keepalive
<miconda> for tcp it will take a timer-based check to see if the connection is
		  still open  [18:18]
<miconda> looks easier to get from usrloc to connection id, than from
		  connection id to usrloc record
<oej> I think we will have to play around with this idea on the mailing list
	  to get the requirements hashed out
<jh__> tcp level keepalives could tell when connection is broken
<miconda> perhaps we can use a flag to mark what connections have to be
		  checked (like we do for udp to send options keepalives)  [18:19]
<miconda> jh__: maybe we can hook some callback to that event  [18:20]
<miconda> let's see ?
<miconda> ok, so we will discuss more on mailing list if needed
<miconda> pdunkley: do you have the code in some branch?
<pdunkley> Probably not.  I'll have a look later.
<miconda> ok
<miconda> next topic then ...
<miconda> osas: I guess you are ...  [18:22]


*************************** XHTTP *********************************************

<osas> sure
<osas> xhttp:
<osas> improve the API by providing a response buffer to build in http replies
<miconda> isn't there a buffer inside the module for that?  [18:23]
<osas> is it?
<osas> I don't remember seeing one
<miconda> or you want to stack stuff in it ... and then send?
<miconda> I thought it is, but I might be wrong ...
<miconda> should be no problem to add one
<osas> I would like to add parts of the reply to it and do a flush
<osas> and when the whole reply is built, do a send  [18:24]
<osas> this will solve large rpc replies
<osas> like dumping the registration table
<miconda> ok ? so this needs vim+c code
<miconda> but is nice to have, indeed  [18:25]
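
For reference, the current one-shot xhttp API that the proposed buffer/flush extension would augment; a minimal sketch, with a placeholder body.

    event_route[xhttp:request] {
        # today the whole reply body has to be assembled before a single send
        xhttp_reply("200", "OK", "text/html",
            "<html><body>placeholder for a large RPC dump</body></html>");
    }
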
<osas> also, it would be nice to integrate xmlrpc with xhttp
<miconda> there is some duplicated code there, right
<osas> so we have a common base for all http related operations
<miconda> we should clean that up
<oej> agree  [18:26]
<osas> and I haven't checked, but it would be great to load the xhttp module
	   only for workers that are tcp workers
<osas> and subsequentiqally, all the modules that are on top of xhttp
<miconda> modules are loaded before forking
<osas> I see ...
<miconda> but maybe you need child init only for tcp workers
<osas> well, we can live with it :)
<miconda> ok
<miconda> anything else?  [18:28]
<osas> yes, at least not initialize (and consume pkg mem) on UDP workers
<osas> I think that would be it for now
<miconda> ok, next topic
<miconda> hughw: your turn


********************* DEBUG LEVEL PER MODULE **********************************

<miconda> - debug level per module  [18:29]
<miconda> it is a good feature, it's missing a developer :-)
<oej> I think that's a great idea
<osas> it would be nice to have the ability to control the debug level per
	   module via an rpc command
<oej> RPC command
<oej> Exactly!  [18:30]
<osas> also, a module parameter would be good
<hughw> I had a look at the dprint code last night. I can see there is already
		a 'local_debug' level which I didn't know about before
<osas> on server restart, to keep the desired debug level
<oej> It could very well be config variables.
<oej> Then we have both config script support as well as RPC automatically.
<hughw> But this requires a shared map of module => debug level  [18:31]
<oej> Config variables.
<miconda> hughw: local_debug is used by some config functions to set/reset
		  debug level in routing blocks
<miconda> you can have something like:
<miconda> setdebug(x);
<miconda> lookup(...);
<miconda> resetdebug();  [18:32]
<miconda> so lookup is executed with debug=x
<miconda> the problem I see with per module debug is when executing functions
		  that are using code from other modules via API  [18:33]
<miconda> like auth_db executes db_mysql functions
<oej> I need to leave. Thanks for a good meeting. Keep calm and carry on,
	  Kamailians!
<miconda> auth_check() from auth_db module executes code from db_mysql
<pdunkley> and when running presence with notifier you get so much debug level
		   stuff out the DB and presence modules that you can't see anything
		   else at all.  [18:34]
<admorten> Yep.
<hughw> The current debug uses the MODULE_NAME at compile time to create the
		debug strings through macros. Is there a way to use something similar
		to lookup a shared memory map, so that debug from a module knows the
		current level  [18:35]
<hughw> for that modu;e
<miconda> i'll give some thoughts on it ...  [18:37]
<miconda> at this moment I have some ideas, but might be wrong
<miconda> will continue on dev ml if nobody has more to add  [18:38]
<hughw> OK.
<miconda> next topic then ...


*********************** NCURSES-BASED TOOL ************************************

<miconda> eZz: you wanted to ask about module dependencies  [18:39]
<miconda> this should be the one before the last
<eZz> yes
<eZz> I'm thinking about to write some ncurses-based tool  [18:40]
<eZz> like asterisk's make menuselect
<miconda> ok, it would be nice to have
<miconda> there is no spec file to define dependencies
<miconda> but should be not hard to add
<miconda> we need to define a format
<eZz> but found a problem, no easy way to track dependencies other than
	  parsing makefiles and xxx.lst  [18:41]
<miconda> since it is not something that dynamic
<miconda> we can build it manually
<miconda> we add new modules once in a while
<miconda> probably takes longer to write a tool to automatically generate it
<miconda> than writing it and maintained by hand
<eZz> ok, maybe just define a spec like rh-based spec files ?
<eZz> and if that tool will find this file in modules dir - show it in menu
																	    [18:43]
<miconda> maybe we can make it a bit more flexible
<miconda> or what you mean by rh spec file?
<miconda> I was thinking to put a bit more than just dependencies there
																	    [18:44]
<miconda> e.g., a short description
<eZz> yes  [18:45]
<eZz> from my view it will have 1-module options, 2-depends, 3-description
<miconda> if you have something in mind already, send a proposal on devel list
		  with an example ...
<miconda> I mean -- about the format of the spec file  [18:46]
<eZz> alright, I will send this on ml when I have something reasonable
<miconda> maybe we can list there also external libs
<eZz> maybe also option to auto-install ;)  [18:47]
<miconda> at least just for info
<miconda> :-)
<eZz> why not, like cpan
<miconda> ok ? that will be ng version
<eZz> yes
<miconda> you can start discussion on mailing list before you have something
																	    [18:48]
<miconda> other people might be interested in helping out
<eZz> ok, I will
<miconda> at least with ideas (you see, we have plenty of them)
<miconda> ok :-)
<eZz> yes
<eZz> can I ask one more question ?  [18:49]


******************* NEXT MAJOR RELEASE ****************************************

<miconda> so last topic -- next major release
<eZz> oops
<miconda> I guess it is going to be sometime in autumn
<miconda> at previous meeting we thought of doing a short devel cycle for 4.1
		  to get outbound module
<miconda> but pdunkley did it for 4.0
<miconda> anything that is demanding a 4.1 rather sooner?!?  [18:50]
<miconda> ok, if not, we should make a more accurate release date proposal
		  after the summer  [18:51]
<jh__> i don't remember now but there is some new ob/gruu related stuff in
	   master that is missing from 4.0
<jh__> yes, it is branch failure route  [18:52]
<miconda> that still needs the work for the tm module and connections  [18:53]
<jh__> once those are in, we could issue 4.1 in early autumn
<miconda> and I won't rush into a major release quickly after a change there,
		  needs some testing
<miconda> early autumn is bad, because people are in vacation during testing
		  period :-)  [18:54]
<miconda> but we can adjust
<eZz> according to releases, can I merge my local branches with uac/tm/sdpops
	  into master till autumn?
<jh__> let's say end of sept then
<miconda> eZz: if you have changes in your branches, you can merge before
																	    [18:55]
<miconda> just ask for review if it is not on your modules
<miconda> and it is recommended to do it before
<miconda> to allow testing
<eZz> it's not just mine, it's global
<miconda> pushing before freeze means less testing time ...
<eZz> ok, I understand, thank  [18:56]
<miconda> yes -- anyone can merge before we announce code freeze
<miconda> code in personal branches should be kept until it is something desired
		  for master branch  [18:57]
<miconda> personal branches should be used only for short period of time,
		  while prototyping/playing with new stuff
<miconda> merging late may create lots of conflicts
<miconda> ok --- looks like we are done
<miconda> only 3 hours :-)
<miconda> any ad hoc topics?!?  [18:58]
<admorten> openrcs.com purpose/direction?
<miconda> if not, looks like we have a lot of ideas to code ...
<miconda> admorten: we can discuss after anything related to kamailio devel
																	    [18:59]
<miconda> which might not be anymore ...
<admorten> That's fine.
<miconda> so last call, anything else to discuss about kamailio
		  development?!?!
<miconda> ok, nothing, good!  [19:00]
<miconda> thanks everyone! good input, now let's see the follow up actions
<miconda> I will put a summary of this conversation (read as: copy and paste
		  the log, removing irrelevant parts) on the wiki during next days
																	    [19:01]
<miconda> that's all for the development meeting ... free conversations now
																	    [19:02]

****************** FREE CONVERSATIONS *****************************************

<eZz> ok, what about a bug I described yesterday?  [19:03]
<eZz> about uac which is adding '<' char
<miconda> admorten: the goal with that service is to have a server where
		  people can play with sip and last features of kamailio
<miconda> at FOSDEM we had a panel discussion about federating  [19:04]
<miconda> there is iptel.org service, but not related to kamailio that much
<admorten> Right.
<miconda> voipuser.org still striving to get the website back (the sip server
		  is still up and running)
<miconda> eZz: you have to put a SIP URI there  [19:05]
<miconda> not a header
<miconda> < > were added to fix other bug
<miconda> it will need an extension to be able to specify display name
<miconda> you should add a feature request on tracker ...  [19:06]
<admorten> miconda: I've attempted to sign up for an openrcs.com account, but
		   haven't got a confirmation e-mail yet.
<miconda> admorten: check your spam folder?!?
<admorten> Several times. :)
<admorten> Wasn't able to give my full attention to this meeting earlier. I
		   did want to mention earlier that we've rolled out the sca module,
		   and now have 6000+ active SCA subscribers, with a likely final
		   total ~10000.  [19:09]
<miconda> all working fine?
<Marquis42> admorten: That's excellent to hear.  [19:10]
<admorten> Yes, working well.
<miconda> it is quite some subscriber base for shared lines ...
<admorten> Yes, turned out to be a very important feature from the legacy
		   system.  [19:11]
<miconda> :-)
<admorten> We've successfully tested group sizes up to 20 so far.
<miconda> admorten: sca is all in one, no need of dialog or other presence
		  modules, right?
<admorten> Correct.
<admorten> That could change with some concerted effort, of course.
<miconda> sometimes it is good to have it like this, afaik, the specs were
		  floating on sla/bla
<miconda> with broadsoft bringing on its own flavour  [19:13]
<admorten> Yes, SCA's tied a bit more to the INVITE dialog than the bla
		   sipping draft.
<Marquis42> admorten: That's with mostly/all Polycom handsets I suppose?  Or
			do others support it?
<admorten> Our deployment is all Polycoms, but I've been testing with a Cisco
		   SPA 303, Yealink T22P, Aastra 6757i and Aastra 6731i.  [19:14]
<admorten> Works across all of them.
<admorten> I've been meaning to add a wiki page on the subject, but the
		   rollout was taking most of my attention. :)  [19:15]
<Marquis42> Nice, glad to hear it's more than just Polycoms (although they're
			nice).  Are you aware of any softphones that support it?
<admorten> (Expecting to get a Snom for evaluation, too.)  [19:16]
<admorten> I'm not aware of any softphones doing SCA, but that's because I
		   haven't really looked.  [19:17]
<miconda> I have to go for a while, i'll be back later
<Marquis42> Gotcha.  Either way, glad to hear that's working really well.
			I've been keeping an eye on that module.
<admorten> Great, thanks. Let me know if you've got questions.
<Marquis42> Will do, thanks  [19:19]

