Hi Klaus,
Indeed, this is a long email ;).
Please see my inline comments.
Regards,
Bogdan
Klaus Darilion wrote:
Hi all!
There are several scenarios where TLS will be used to interconnect SIP
proxies. (open)ser's TLS implementation should be generic enough to
handle all the useful scenarios. Thus, to better understand the
requirements, first I present some examples where (open)ser+TLS will
be useful. (I do not propose which of the following interconnect
models are good or bad. However, openser should be capable of handling
all of them, ideally even in a mixed mode.)
Enterprise scenario:
A company uses TLS to interconnect its SIP proxies via the public
Internet. The proxies import the company's self-signed CA cert as a
trusted CA. The proxies trust other proxies as soon as their cert is
validated using the root CA.
This is already possible using openser 1.0.0 (or ser + experimental TLS).
Federation scenario:
Some ITSPs form a federation. The federation CA signs the certs of the
ITSPs. Here, the validation works like in the enterprise scenario:
(open)ser validates against the federation's CA cert. This works with
openser 1.0.0 as long as the ITSP is only in one federation, or uses
different egress/ingress points for each federation. If the ITSP is a
member of two federations and uses a single egress/ingress proxy, it has
to decide which certificate it should present to the peer. The
originating proxy could choose the proper client certificate, for
example, by using a table like the following (or by having the
certificate as a blob directly in the DB):
dst_domain        certificate
sip.atlanta.com   /etc/openser/federationAcert.pem
sip.biloxy.com    /etc/openser/federationBcert.pem
sip.chicago.com   /etc/openser/federationAcert.pem
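To make this concrete, here is a rough sketch (not a definitive
implementation) of how the openssl API could be used to load the
certificate picked from such a table. lookup_cert_for_domain() is a
hypothetical helper backed by the table above, and the PEM file is
assumed to contain both the certificate and its private key:

    #include <openssl/ssl.h>

    /* hypothetical helper backed by the dst_domain/certificate table above */
    extern const char *lookup_cert_for_domain(const char *dst_domain);

    static int set_client_cert(SSL_CTX *ctx, const char *dst_domain)
    {
        const char *cert = lookup_cert_for_domain(dst_domain);
        if (cert == NULL)
            return -1;
        /* load the per-federation certificate (chain) and its private key */
        if (SSL_CTX_use_certificate_chain_file(ctx, cert) != 1)
            return -1;
        if (SSL_CTX_use_PrivateKey_file(ctx, cert, SSL_FILETYPE_PEM) != 1)
            return -1;
        /* make sure key and certificate actually belong together */
        return (SSL_CTX_check_private_key(ctx) == 1) ? 0 : -1;
    }

In practice it is probably cleaner to keep one pre-built SSL_CTX per
federation and just pick the right one per outgoing connection, but the
idea is the same.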
Presenting the proper server certificate is more difficult. The
server does not know whether the incoming TLS request belongs to a member
of fedA, fedB or someone else. Thus, presenting the wrong certificate
will lead to the client rejecting the certificate due to failed
validation. One solution would be sending the "trusted_ca_keys" TLS
extension in the Client Hello. Unfortunately this is not supported in
openssl (and gnutls). Any workaround for this?
As I understood from Cesc, gnutls already supports this extension, but
migrating to gnutls and restarting all the testing may not be worth the
effort, as it is only a matter of time until the extension becomes
available in openssl as well.
As a temporary solution I would suggest going without the extension
patch by default, but providing the patch in the TLS directory; people
interested in these multi-domain scenarios would then have to apply it
and recompile the openssl lib. And maybe we should do some lobbying
(read: pressure) on the openssl mailing list in order to push this
extension into the official tree.
Just an idea.
Anyway, in this scenario it is important to have the certificate
parameters (Subject, Issuer) available in the routing logic, to make
routing decisions based on the TLS authentication and to add them to
the CDRs (e.g. via AVPs and extra accounting).
Interesting, but there might be some problems - the information you want
to log comes from the transport layer and you try to log it using a
mechanism from the SIP level. It will work, but the info will actually be
available only for the requests that initiated the TLS connection
(sent or received) and not for the requests that reuse the connection.
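A minimal sketch of how the fields could be pulled out of the peer
certificate right after the handshake - storing them with the connection
(rather than with the transaction) would also make them visible to
requests that merely reuse the connection:

    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    static void save_peer_cert_info(SSL *ssl)
    {
        char subj[256], issuer[256];
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert == NULL)
            return;                 /* peer presented no certificate */
        X509_NAME_oneline(X509_get_subject_name(cert), subj, sizeof(subj));
        X509_NAME_oneline(X509_get_issuer_name(cert), issuer, sizeof(issuer));
        /* ... attach subj/issuer to the tcp/tls connection structure here,
         * so the script can later export them as AVPs / extra accounting */
        X509_free(cert);
    }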
Bilateral scenario:
An ITSP has bilateral trust relationships. Each ITSP has its own CA
which signs the certs of this ITSP. If another ITSP wants to trust
this ITSP, it only has to import the other's CA cert. This already works
with openser 1.0.0, but exporting the cert parameters for extra
accounting would be useful.
Not sure what you mean by cert parameters...
Hosted SIP scenario:
An ITSP hosts multiple SIP domains for its customers. If the server
has to offer a certificate which includes the proper SIP domain, the
server_name extension is needed to indicate the requested domain in
the Client Hello. The server can then present the proper certificate,
and domain validation (Subject domain == SIP domain) on the client
will succeed.
The solution here will also be the mighty extension, indeed...
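For illustration, a sketch of how a server_name (SNI) callback could
look, assuming an openssl build that already supports the TLS
extensions; lookup_ctx_for_domain() is a hypothetical per-domain
SSL_CTX lookup:

    #include <openssl/ssl.h>
    #include <openssl/tls1.h>

    /* hypothetical: returns the SSL_CTX holding the cert of the hosted domain */
    extern SSL_CTX *lookup_ctx_for_domain(const char *domain);

    static int servername_cb(SSL *ssl, int *ad, void *arg)
    {
        SSL_CTX *domain_ctx;
        const char *name = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);

        if (name == NULL)
            return SSL_TLSEXT_ERR_NOACK;      /* no SNI sent, keep default cert */
        domain_ctx = lookup_ctx_for_domain(name);
        if (domain_ctx != NULL)
            SSL_set_SSL_CTX(ssl, domain_ctx); /* present that domain's cert */
        return SSL_TLSEXT_ERR_OK;
    }

    /* registered once on the default context:
     *   SSL_CTX_set_tlsext_servername_callback(default_ctx, servername_cb);
     */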
This will work fine with initial (out-of-dialog) requests as they will
usually include the SIP domain in the request URI. There will be
problems for responses and in-dialog requests, as the Record-Route and
Via headers usually only include IP addresses. Thus, the SIP proxy
either has to insert the SIP domain into Via and Record-Route, or
domain validation has to be skipped for in-dialog requests.
I don't think we should worry about replies - they will return via the
same connection - the expiration time of a TCP connection must be higher
than the expiration time of a transaction.
But about the in-dialog requests - you have a strong case here!!
It is actually more complex: you need to know both the inbound and
outbound domains - if you received the request from another peer via TLS
and fed it also via TLS to another peer (relaying), you will need to
remember both domains, since in-dialog requests may flow in both
directions ;).
Maybe storing the domain names as an RR param is the simplest and
ugliest solution (see the illustration below)... in the meantime I think
it is the only one that does not involve any dialog persistence.
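For illustration only (the param names are made up), such a
Record-Route could look something like:

    Record-Route: <sip:1.2.3.4;lr;in-dom=sip.atlanta.com;out-dom=sip.biloxy.com>

so that when an in-dialog request comes back, the proxy can recover
which domain has to be validated (or presented) on each side, without
keeping any dialog state.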
This leads to the problem of domain validation. The TLS connection
will be set up after all the routing logic, somewhere inside t_relay.
Thus, if we want domain validation, it will have to happen inside
t_relay. Maybe we can use a certain flag to indicate whether domain
validation should be done (on a per-transaction basis). This might cause
problems if there is already a TLS connection to the requested
destination, but one established without domain validation or validated
against a different domain (virtual domain hosting). How to solve this?
One premise we should build on is the fact that there cannot exist (in
my opinion) connections that require domain validation in one case and
not in another. Argumentation: AFAIK there can be only two types of
connections: user oriented and peering oriented; the first type will not
require validation at all and the second one may or may not, based on
local policy. So, I think, we cannot have a case where a connection to X
will require validation at one point and not later.
To control the validation (and maybe other parameters of the connection),
setting them beforehand from the script may be the solution - I was
investigating with Cesc the idea of building a TLS module which will be
used for provisioning the certs and for controlling the connection
params. The TLS engine itself will stay in the core as it is now.
So, I would say we never reach the case where we want to reuse an
existing connection but with different settings.
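If validation really is a fixed per-connection property set from the
script, the actual check done when the connection is established could
be as simple as this sketch (CN comparison only; wildcard and
subjectAltName handling left out):

    #include <strings.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>
    #include <openssl/objects.h>

    static int peer_cert_matches_domain(SSL *ssl, const char *sip_domain)
    {
        char cn[256];
        int ok = 0;
        X509 *cert = SSL_get_peer_certificate(ssl);

        if (cert == NULL)
            return 0;
        /* compare the certificate's CommonName with the expected SIP domain */
        if (X509_NAME_get_text_by_NID(X509_get_subject_name(cert),
                                      NID_commonName, cn, sizeof(cn)) > 0)
            ok = (strcasecmp(cn, sip_domain) == 0);
        X509_free(cert);
        return ok;
    }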
I can't propose a solution for all scenarios. But I think I showed that
the certificate selection and validation should be very flexible, e.g.
by choosing the proper client certificate for each transaction and by
routing differently in the server depending on the presented client
certificate and the certificate signer (e.g. based on a whitelist).
Further, we have to be able to add certificates and CA certs at
runtime, e.g. using a FIFO command "tls_reload". This should also drop
all existing TLS connections. Having a maximum connection lifetime after
which we force re-validation would also be useful.
Also, (open)ser should allow importing CRLs (certificate revocation
lists) (shouldn't be a problem with openssl) or using OCSP (Online
Certificate Status Protocol).
Some utilities like this will definitely become needed soon...
Maybe all this will find its way into the TLS module - that will
actually be its purpose - pure management and provisioning.
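CRL import itself should indeed be straightforward with openssl -
something along these lines (sketch only), applied to the verification
store of the listening context:

    #include <openssl/ssl.h>
    #include <openssl/x509_vfy.h>

    static int load_crl(SSL_CTX *ctx, const char *crl_file)
    {
        X509_STORE  *store  = SSL_CTX_get_cert_store(ctx);
        X509_LOOKUP *lookup = X509_STORE_add_lookup(store, X509_LOOKUP_file());

        if (lookup == NULL)
            return -1;
        if (X509_load_crl_file(lookup, crl_file, X509_FILETYPE_PEM) <= 0)
            return -1;
        /* check CRLs for the whole chain, not only the leaf certificate */
        X509_STORE_set_flags(store,
                             X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
        return 0;
    }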
Now I'm ready for some discussions :-)
regards
klaus