Module: sip-router
Branch: janakj/bdb
Commit: e4a39affff2140f41c031c94610236718efb7e03
URL: http://git.sip-router.org/cgi-bin/gitweb.cgi/sip-router/?a=commit;h=e4a39af…
Author: Henning Westerholt <henning.westerholt(a)1und1.de>
Committer: Henning Westerholt <henning.westerholt(a)1und1.de>
Date: Wed Sep 24 11:02:42 2008 +0000
- disable big integer (DB_BIGINT) support for non-SQL DB modules for now
- if such a value is used, an error will be returned and also logged
git-svn-id: https://openser.svn.sourceforge.net/svnroot/openser/trunk@4986 689a6050-402a-0410-94f2-e92a70836424
---
modules/db_berkeley/bdb_res.c | 6 ++++++
modules/db_berkeley/bdb_val.c | 4 ++++
2 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/modules/db_berkeley/bdb_res.c b/modules/db_berkeley/bdb_res.c
index c165122..93a408b 100644
--- a/modules/db_berkeley/bdb_res.c
+++ b/modules/db_berkeley/bdb_res.c
@@ -424,6 +424,9 @@ int bdb_is_neq_type(db_type_t _t0, db_type_t _t1)
case DB_INT:
if(_t0==DB_DATETIME || _t0==DB_BITMAP)
return 0;
+ case DB_BIGINT:
+ LM_ERR("BIGINT not supported");
+ return 0;
case DB_DATETIME:
if(_t0==DB_INT)
return 0;
@@ -514,6 +517,9 @@ int bdb_cmp_val(db_val_t* _vp, db_val_t* _v)
case DB_INT:
return (_vp->val.int_val<_v->val.int_val)?-1:
(_vp->val.int_val>_v->val.int_val)?1:0;
+ case DB_BIGINT:
+ LM_ERR("BIGINT not supported");
+ return -1;
case DB_DOUBLE:
return (_vp->val.double_val<_v->val.double_val)?-1:
(_vp->val.double_val>_v->val.double_val)?1:0;
diff --git a/modules/db_berkeley/bdb_val.c b/modules/db_berkeley/bdb_val.c
index 1e239ed..1f20c9c 100644
--- a/modules/db_berkeley/bdb_val.c
+++ b/modules/db_berkeley/bdb_val.c
@@ -112,6 +112,10 @@ int bdb_str2val(db_type_t _t, db_val_t* _v, char* _s, int _l)
}
break;
+ case DB_BIGINT:
+ LM_ERR("BIGINT not supported");
+ return -1;
+
case DB_BITMAP:
if (db_str2int(_s, &VAL_INT(_v)) < 0) {
LM_ERR("Error while converting BITMAP value from string\n");
Module: sip-router
Branch: janakj/bdb
Commit: 3dfbaaf8343291bec70a2ad40ce0a312dff8c344
URL: http://git.sip-router.org/cgi-bin/gitweb.cgi/sip-router/?a=commit;h=3dfbaaf…
Author: Klaus Darilion <klaus.darilion(a)pernau.at>
Committer: Klaus Darilion <klaus.darilion(a)pernau.at>
Date: Wed Aug 6 11:44:43 2008 +0000
- renamed: bdb_recover -> kambdb_recover
git-svn-id: https://openser.svn.sourceforge.net/svnroot/openser/trunk@4602 689a6050-402a-0410-94f2-e92a70836424
---
modules/db_berkeley/README | 26 ++++++++++++------------
modules/db_berkeley/doc/db_berkeley_admin.xml | 22 ++++++++++----------
2 files changed, 24 insertions(+), 24 deletions(-)
diff --git a/modules/db_berkeley/README b/modules/db_berkeley/README
index 694489e..f2dd471 100644
--- a/modules/db_berkeley/README
+++ b/modules/db_berkeley/README
@@ -43,7 +43,7 @@ Will Quan
1.10. METADATA_READONLY (optional)
1.11. METADATA_LOGFLAGS (optional)
1.12. DB Maintaince Script : kamdbctl
- 1.13. DB Recovery : bdb_recover
+ 1.13. DB Recovery : kambdb_recover
1.14. Known Limitations
List of Examples
@@ -57,7 +57,7 @@ Will Quan
1.7. METADATA_KEYS
1.8. METADATA_LOGFLAGS
1.9. kamdbctl
- 1.10. bdb_recover usage
+ 1.10. kambdb_recover usage
Chapter 1. Admin Guide
@@ -101,7 +101,7 @@ modparam("db_berkeley", "auto_reload", 1)
The following operations can be journaled: INSERT, UPDATE,
DELETE. Other operations such as SELECT, do not. This
journaling are required if you need to recover from a corrupt
- DB file. That is, bdb_recover requires these to rebuild the db
+ DB file. That is, kambdb_recover requires these to rebuild the db
file. If you find this log feature useful, you may also be
interested in the METADATA_LOGFLAGS bitfield that each table
has. It will allow you to control which operations to journal,
@@ -203,7 +203,7 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
'/usr/local/share/kamailio/db_berkeley/openser' By default
these tables are created Read/Write and without any
journalling as shown. These settings can be modified on a per
- table basis. Note: If you plan to use bdb_recover, you must
+ table basis. Note: If you plan to use kambdb_recover, you must
change the LOGFLAGS.
METADATA_READONLY
0
@@ -423,13 +423,13 @@ ce of db; output DB_PATH/db.new)
kamdbctl bdb newappend db datafile (appends data to a new instan
ce of db; output DB_PATH/db.new)
-1.13. DB Recovery : bdb_recover
+1.13. DB Recovery : kambdb_recover
The db_berkeley module uses the Concurrent Data Store (CDS)
architecture. As such, no transaction or journaling is
- provided by the DB natively. The application bdb_recover is
+ provided by the DB natively. The application kambdb_recover is
specifically written to recover data from journal files that
- Kamailio creates. The bdb_recover application requires an
+ Kamailio creates. The kambdb_recover application requires an
additional text file that contains the table schema.
The schema is loaded with the '-s' option and is required for
@@ -447,20 +447,20 @@ ce of db; output DB_PATH/db.new)
The following illustrates the four operations available to the
administrator.
- Example 1.10. bdb_recover usage
-usage: ./bdb_recover -s schemadir [-h home] [-c tablename]
+ Example 1.10. kambdb_recover usage
+usage: ./kambdb_recover -s schemadir [-h home] [-c tablename]
This will create a brand new DB file with metadata.
-usage: ./bdb_recover -s schemadir [-h home] [-C all]
+usage: ./kambdb_recover -s schemadir [-h home] [-C all]
This will create all the core tables, each with metadata.
-usage: ./bdb_recover -s schemadir [-h home] [-r journal-file]
+usage: ./kambdb_recover -s schemadir [-h home] [-r journal-file]
This will rebuild a DB and populate it with operation from jour
nal-file.
The table name is embedded in the journal-file name by conventi
on.
-usage: ./bdb_recover -s schemadir [-h home] [-R lastN]
+usage: ./kambdb_recover -s schemadir [-h home] [-R lastN]
This will iterate over all core tables enumerated. If journal f
iles exist in 'home',
a new DB file will be created and populated with the data found
@@ -474,7 +474,7 @@ n
the last hours data in table location.
Important note- A corrupted DB file must be moved out of the
- way before bdb_recover is executed.
+ way before kambdb_recover is executed.
1.14. Known Limitations
diff --git a/modules/db_berkeley/doc/db_berkeley_admin.xml b/modules/db_berkeley/doc/db_berkeley_admin.xml
index dd5acc3..410bfe8 100644
--- a/modules/db_berkeley/doc/db_berkeley_admin.xml
+++ b/modules/db_berkeley/doc/db_berkeley_admin.xml
@@ -74,7 +74,7 @@ modparam("db_berkeley", "auto_reload", 1)
The following operations can be journaled:
INSERT, UPDATE, DELETE. Other operations such as SELECT, do not.
This journaling are required if you need to recover from a corrupt
- DB file. That is, bdb_recover requires these to rebuild
+ DB file. That is, kambdb_recover requires these to rebuild
the db file. If you find this log feature useful, you may
also be interested in the METADATA_LOGFLAGS bitfield that each
table has. It will allow you to control which operations to
@@ -223,7 +223,7 @@ modparam("db_berkeley", "journal_roll_interval", 3600)
By default, the files are installed in '/usr/local/share/kamailio/db_berkeley/openser'
By default these tables are created Read/Write and without any journalling as
shown. These settings can be modified on a per table basis.
- Note: If you plan to use bdb_recover, you must change the LOGFLAGS.
+ Note: If you plan to use kambdb_recover, you must change the LOGFLAGS.
</para>
<programlisting format="linespecific">
METADATA_READONLY
@@ -513,13 +513,13 @@ usage: kamdbctl create
</section>
<section>
- <title>DB Recovery : bdb_recover</title>
+ <title>DB Recovery : kambdb_recover</title>
<para>
The db_berkeley module uses the Concurrent Data Store (CDS) architecture.
As such, no transaction or journaling is provided by the DB natively.
- The application bdb_recover is specifically written to recover data from
+ The application kambdb_recover is specifically written to recover data from
journal files that Kamailio creates.
- The bdb_recover application requires an additional text file that contains
+ The kambdb_recover application requires an additional text file that contains
the table schema.
</para>
@@ -540,19 +540,19 @@ usage: kamdbctl create
<para>
The following illustrates the four operations available to the administrator.
<example>
- <title>bdb_recover usage</title>
+ <title>kambdb_recover usage</title>
<programlisting>
-usage: ./bdb_recover -s schemadir [-h home] [-c tablename]
+usage: ./kambdb_recover -s schemadir [-h home] [-c tablename]
This will create a brand new DB file with metadata.
-usage: ./bdb_recover -s schemadir [-h home] [-C all]
+usage: ./kambdb_recover -s schemadir [-h home] [-C all]
This will create all the core tables, each with metadata.
-usage: ./bdb_recover -s schemadir [-h home] [-r journal-file]
+usage: ./kambdb_recover -s schemadir [-h home] [-r journal-file]
This will rebuild a DB and populate it with operation from journal-file.
The table name is embedded in the journal-file name by convention.
-usage: ./bdb_recover -s schemadir [-h home] [-R lastN]
+usage: ./kambdb_recover -s schemadir [-h home] [-R lastN]
This will iterate over all core tables enumerated. If journal files exist in 'home',
a new DB file will be created and populated with the data found in the last N files.
The files are 'replayed' in chronological order (oldest to newest). This
@@ -564,7 +564,7 @@ usage: ./bdb_recover -s schemadir [-h home] [-R lastN]
</para>
<para>
- Important note- A corrupted DB file must be moved out of the way before bdb_recover is executed.
+ Important note- A corrupted DB file must be moved out of the way before kambdb_recover is executed.
</para>
</section>