Hi Daniel,
this is already compatible with Kamailio master (tests are running OK).
We will start updating the docs when we commit to master and will have them ready before 4.2.
> For better clarification, do the json payloads flowing via AMQP have a
> structure specific to kazoo? I mean the object structure: how are the
> field names set, from the names of the columns? Is there a wrapper to
> specify the command (e.g., insert, delete, select)? Or maybe you can
> provide an example of such an object...
No, the payloads can be whatever you want. Here's a sample of how we use it to handle REGISTER messages:
if ($sht(auth_cache=>$Au) != $null
    && pv_auth_check("$fd", "$sht(auth_cache=>$Au)", "0", "0")) {
        xlog("L_INFO", "$ci|log|authenticated $Au via cached SIP creds");
} else {
        ## RABBITMQ - Credentials fetch
        ## build the payload
        $var(amqp_payload_request) = "{'Event-Category' : 'directory' , 'Event-Name' : 'authn_req', 'Method' : 'REGISTER', 'Auth-Realm' : '" + $fd + "', 'Auth-User' : '" + $fU + "', 'From' : '" + $fu + "', 'To' : '" + $tu + "' }";

        ## kz.encode is a helper to encode the value properly for amqp
        $var(amqp_routing_key) = "authn.req." + $(fd{kz.encode});

        ## kazoo_query publishes and waits for the correlated answer
        if (kazoo_query("callmgr", $var(amqp_routing_key), $var(amqp_payload_request))) {
                ## $kzR is the result payload
                xlog("L_DBG", "$ci|log|amqp_result = $kzR");

                ## kz.json is a helper to extract values from the payload; nested
                ## values are permitted, e.g. $(kzR{kz.json,Channels[0].switch_url})
                $var(password) = $(kzR{kz.json,Auth-Password});

                if ($var(password) != $null) {
                        if (!pv_auth_check("$fd", "$var(password)", "0", "0")) {
                                ## ...
                        }
                }
        }
}
We build the payload and publish it to rabbitmq; in this case we wait for a correlated message to get the result.
kazoo_publish -> sends only
kazoo_query -> sends and waits for the correlated answer
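For the fire-and-forget case, a rough sketch (the exchange name, routing key and payload fields below are only illustrative, and kazoo_publish is used here with the same exchange / routing key / payload arguments as kazoo_query above):

        ## publish a notification and continue immediately, without waiting for a reply
        ## (event name and routing key are just examples)
        $var(amqp_payload) = "{'Event-Category' : 'directory' , 'Event-Name' : 'reg_success', 'Username' : '" + $fU + "', 'Realm' : '" + $fd + "'}";
        kazoo_publish("callmgr", "registration.success." + $(fd{kz.encode}), $var(amqp_payload));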
Here's an example of how we subscribe to an exchange:
event_route[kazoo:mod-init]
{
#!ifdef PRESENCE-ROLE
        $var(payload) = "{ 'exchange' : 'dialoginfo' , 'type' : 'direct', 'queue' : 'BLF-QUEUE-MY_HOSTNAME', 'routing' : 'BLF-MY_HOSTNAME', 'auto_delete' : 0, 'durable' : 1, 'no_ack' : 0, 'wait_for_consumer_ack' : 1 }";
        kazoo_subscribe("$var(payload)");
#!endif
}
and how we react when we get a message:
event_route[kazoo:consumer-event-presence-update]
{
        xlog("L_INFO", "received $(kzE{kz.json,Event-Package}) update for $(kzE{kz.json,From})");
        kazoo_pua_publish($kzE);
        pres_refresh_watchers("$(kzE{kz.json,From})", "$(kzE{kz.json,Event-Package})", 1);
}
The event_route name is built from the payload we receive on the subscription:
kazoo:consumer-event - fixed prefix
-presence - taken from the field named by the amqp_consumer_event_key parameter
-update - taken from the field named by the amqp_consumer_event_subkey parameter
amqp_consumer_event_key and amqp_consumer_event_subkey are module parameters for now, but we want to make them configurable per subscription.
From the above example, we will change it to:
        $var(payload) = "{ 'key' : 'fieldname', 'subkey' : 'fieldname', 'exchange' : 'dialoginfo' , 'type' : 'direct', 'queue' : 'BLF-QUEUE-MY_HOSTNAME', 'routing' : 'BLF-MY_HOSTNAME', 'auto_delete' : 0, 'durable' : 1, 'no_ack' : 0, 'wait_for_consumer_ack' : 1 }";
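For example (the field names below are only an illustration of the naming scheme, not taken from the module docs): with 'key' pointing at Event-Category and 'subkey' at Event-Name, a consumed message carrying 'Event-Category' : 'presence' and 'Event-Name' : 'update' would end up in event_route[kazoo:consumer-event-presence-update], as in the example above.

        ## illustrative per-subscription key/subkey
        $var(payload) = "{ 'key' : 'Event-Category', 'subkey' : 'Event-Name', 'exchange' : 'dialoginfo' , 'type' : 'direct', 'queue' : 'BLF-QUEUE-MY_HOSTNAME', 'routing' : 'BLF-MY_HOSTNAME', 'auto_delete' : 0, 'durable' : 1, 'no_ack' : 0, 'wait_for_consumer_ack' : 1 }";
        kazoo_subscribe("$var(payload)");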
Right now, all interaction is done from the configuration script.
The long-term plan is to create a bind structure so that other modules can interact with it directly.
Let me know if that helps.
Best
Hello,
thanks for this contribution!
Do you have a link to the module source code or can you
send it attached? We need to review a bit and check if
everything is in place (e.g., license, documentation).
For better clarification, do the json payloads flowing via AMQP have a structure specific to kazoo? I mean the object structure: how are the field names set, from the names of the columns? Is there a wrapper to specify the command (e.g., insert, delete, select)? Or maybe you can provide an example of such an object...
Kazoo is an open source application, therefore there is no barrier to the new module getting into the Kamailio repository. My questions were more about whether it is worth considering a new name.
To get it in our git repository, we expect that you or
someone else from 2600hz is willing to maintain it for at
least one year. You will get write access over ssh to the git repository to be able to push the new module and commit to
it in the future. I will write a separate email directly
to you with the required details for access.
Cheers,
Daniel
On 09/09/14 12:22, Luis Azedo wrote:
Hello Awesome Kamailio Community,
We are writing on behalf of 2600hz, where we have
been using Kamailio for some time now and are very
pleased with our results! So, seems time to commit
something back, dontchya think?
We would like to present to you a new module, to
hopefully be included in master. We call it
db_kazoo (although a new name is fine too).
db_kazoo is a general purpose AMQP connector
(connects to our rabbitmq-server). It exposes
publish/consume capabilities into Kamailio. Why is
this amazing, you ask? Well even if you didn’t ask,
we will explain…
From a high level, the purpose of the module might be for things like:
- Integrate with an AMQP application to make
real-time routing decisions (instead of using, say,
a SQL database)
- Provide a real-time integration into your
program, instead of your database, so you can
overlay additional logic in your preferred language
while also utilizing a message bus
- Utilize messaging to have a distributed
messaging layer, such that machines processing
requests/responses/events can go up/down or share
the workload and your Kamailio node will still be
happy
With this module, someone can (a short sketch follows the list):
1 - publish json payloads to rabbitmq
2 - publish json payloads to rabbitmq and wait for a correlated response message
3 - subscribe to an exchange with a routing key
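A compact sketch of those three operations as config functions (exchange names, routing keys and payloads below are illustrative only):

        ## 1 - publish a json payload and continue (fire-and-forget)
        kazoo_publish("callmgr", "notifications.example", "{'Event-Name' : 'example'}");

        ## 2 - publish and wait for the correlated reply, made available in $kzR
        if (kazoo_query("callmgr", "authn.req.example", "{'Event-Name' : 'authn_req'}")) {
                xlog("L_DBG", "reply = $kzR");
        }

        ## 3 - subscribe to an exchange with a routing key (typically from event_route[kazoo:mod-init])
        kazoo_subscribe("{ 'exchange' : 'dialoginfo', 'type' : 'direct', 'queue' : 'my-queue', 'routing' : 'my-key' }");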
The module works with a main forked process that handles the communication with rabbitmq: issuing publishes, waiting for replies and consuming messages. When it consumes a message, it defers processing to a worker process so that the main process doesn't block.
The worker process executes an event-route where we can act on the received payload. The name of the event-route is composed of values extracted from the payload.
Consumed messages have the option of being acknowledged in two ways (see the sketch after this list):
1 - immediately when received
2 - after processing by the worker
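Roughly, the two modes map to the 'no_ack' and 'wait_for_consumer_ack' flags in the subscribe payload; the exact flag values below are an assumption for illustration, not documentation:

        ## mode 1 - acknowledge as soon as the message is received (assumed flag value)
        kazoo_subscribe("{ 'exchange' : 'dialoginfo', 'type' : 'direct', 'queue' : 'BLF-QUEUE-MY_HOSTNAME', 'routing' : 'BLF-MY_HOSTNAME', 'no_ack' : 0, 'wait_for_consumer_ack' : 0 }");

        ## mode 2 - acknowledge only after the worker / event_route has processed the message
        kazoo_subscribe("{ 'exchange' : 'dialoginfo', 'type' : 'direct', 'queue' : 'BLF-QUEUE-MY_HOSTNAME', 'routing' : 'BLF-MY_HOSTNAME', 'no_ack' : 0, 'wait_for_consumer_ack' : 1 }");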
One unique feature of our implementation revolves around failover of the message bus itself. In our design, the module supports multiple RabbitMQ servers and will fall back from one to the next in the list if the connection to the currently connected server fails. When using acknowledgements in db_kazoo with clustering in RabbitMQ, we have simulated and experienced full reconnects while in the middle of processing pending messages. In this way, we proved that when we get disconnected from one server, even at high speeds, we are able to connect to the next in the list and continue interacting with our application.
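As a sketch of what such a server list could look like in the config (the 'amqp_url' parameter name and the URL format here are placeholders for illustration, not taken from the module's documentation):

        loadmodule "db_kazoo.so"

        ## hypothetical parameter: a list of brokers the module falls back
        ## through when the current connection drops
        modparam("db_kazoo", "amqp_url", "amqp://guest:guest@rabbit-1.example.com:5672")
        modparam("db_kazoo", "amqp_url", "amqp://guest:guest@rabbit-2.example.com:5672")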
We have run a variety of sipp load tests on this
module and believe it is ready for prime time. We
monitored memory, response accuracy and overall
stability and it seemed OK. But we would, of course,
love for others to help us find what we have missed,
or contribute more features, or overall just use the
work we slaved over for so many hours. Or just make
comments and suggestions!
If nothing else, we hope we’ve provided something
useful to the Kamailio community, as you all have
provided useful items to us.
Thanks to everyone for all the work on Kamailio
and related products.
Look forward to hearing from you all.
This is our first formal / large contribution to Kamailio, so if we've done something wrong in process, code or otherwise, please let us know!
_______________________________________________
sr-dev mailing list
sr-dev@lists.sip-router.org
http://lists.sip-router.org/cgi-bin/mailman/listinfo/sr-dev
--
Daniel-Constantin Mierla
http://twitter.com/#!/miconda - http://www.linkedin.com/in/miconda
Next Kamailio Advanced Trainings 2014 - http://www.asipto.com
Sep 22-25, Berlin, Germany