[SR-Users] 500 ERROR Reloading data when reloading hash table using FIFO

kawarod kawarod at laposte.net
Thu Mar 28 06:31:18 CET 2013


Dear list users,

I'm running into an issue when reloading a hash table via the FIFO
interface. The hash table is populated from SQL with "Username /
Serial Number" pairs, one for each phone connecting to Kamailio. I
then use this hash to authenticate each phone against the
X-SerialNumber attribute it sends.
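
In the routing script the check is along these lines (a simplified
sketch; the use of $au as the key and the reply text are illustrative,
not my exact config):

# Reject the request if the serial number reported by the phone does
# not match the one loaded from SQL for the authenticated username.
if ($sht(hash_cpe_serial=>$au) != $hdr(X-SerialNumber)) {
    sl_send_reply("403", "Serial number mismatch");
    exit;
}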

This hash has to be reloaded once or twice a day, to accommodate new
users or phone replacements.

I'm using the following command for reloading:
/opt/kamailio/sbin/kamctl fifo sht_reload hash_cpe_serial
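
For context, a minimal wrapper that could drive this from cron (a
sketch; assuming the MI reply line starts with "200" on success, by
analogy with the "500 ERROR" line quoted below):

#!/bin/sh
# Reload the serial-number hash table over the FIFO and report any
# reply that does not look like a success ("200 ...") line.
# ASSUMPTION: kamctl prints the MI reply, starting with the code.
OUT=$(/opt/kamailio/sbin/kamctl fifo sht_reload hash_cpe_serial 2>&1)
case "$OUT" in
    200*) : ;;                               # reload reported OK
    *) echo "sht_reload failed: $OUT" >&2 ;; # e.g. "500 ERROR Reloading data"
esac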

Everything works fine, but after 25-26 reloads I get this error:
500 ERROR Reloading data

The only way to recover is to restart with ./kamctl restart.

The hash table holds approximately 22,000 users.

The hash has been defined like this in the configuration file
(size=14 gives 2^14 = 16384 slots, which matches the size reported by
kamcmd below):
modparam("htable", "htable", "hash_cpe_serial=>size=14;dbtable=subscriber;")

When using kamcmd, I see this:
kamcmd> htable.listTables
{
         name: ipban
         dbtable:
         dbmode: 0
         expire: 300
         updateexpire: 1
         size: 256
}
{
         name: hash_cpe_serial
         dbtable: subscriber
         dbmode: 0
         expire: 0
         updateexpire: 1
         size: 16384
}

When trying to dump the table with kamcmd, I get an error:
kamcmd> htable.dump hash_cpe_serial
error: 500 - Internal server error processing '{': buffer too small (overflow) (-2)

But it works fine with:
/opt/kamailio/sbin/kamctl fifo sht_dump hash_cpe_serial

(So the kamcmd failure looks like the binrpc reply buffer overflowing
on ~22,000 entries, and may be unrelated to the reload error itself.)

I have upgraded from 3.2.2 to 4.0.0 to make sure the problem was not
specific to the old version, but without success.

Any idea on how to correct this?

Kind regards,
rod