- Add missing pkg_free for the dynamic PV name created during PV parsing
- Add missing pkg_free for any GPARAM_TYPE_PVS created during PV parsing
#### Pre-Submission Checklist
- [x] Commit message has the format required by CONTRIBUTING guide
- [x] Commits are split per component (core, individual modules, libs, utils, ...)
- [x] Each component has a single commit (if not, squash them into one commit)
- [x] No commits to README files for modules (changes must be done to docbook files in `doc/` subfolder, the README file is autogenerated)
#### Type Of Change
- [ ] Small bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds new functionality)
- [ ] Breaking change (fix or feature that would change existing functionality)
#### Checklist:
- [ ] PR should be backported to stable branches
- [ ] Tested changes locally
- [ ] Related to issue #XXXX (replace XXXX with an open issue number)
#### Description
We are using NDB_REDIS in anger, fetching data from changing hash structures. NDB_REDIS creates dynamic PVs, and in order to do so it has a custom parser that allocates a dynamic PV structure as well as potentially multiple nested parameter structures. We very quickly found that we were running out of package memory, and enabling Kamailio's own memory debug logging revealed that every `$redis(...)` query was leaking.
It appears that the allocated structures are only used within the subsequent handler to fetch the data. Therefore, to resolve the leak, the allocated structures are freed after the data for the PV in question has been fetched.

You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/2536
-- Commit Summary --
* ndb_redis: Fix leaks from PV parsing
-- File Changes --
M src/modules/ndb_redis/ndb_redis_mod.c (24)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/2536.patch
https://github.com/kamailio/kamailio/pull/2536.diff
It seems like the same sort of problematic pattern may exist in other modules, such as NDB_MONGO. I have not looked into those in detail, but this form of fix may be applicable there also.
Should I create an issue for this leak bug as well?
There still seems to be a leak somewhere. Will re-open once found.
Closed #2536.
Are you using it in native kamailio.cfg script or in KEMI scripting (Lua, Python, ...)?
> Are you using it in native kamailio.cfg script or in KEMI scripting (Lua, Python, ...)?
KEMI Python3. The code in question doesn't seem to relate to KEMI at all, however.
Then just reuse the redis result (reply) ids (e.g., static strings); you likely do not need to work with more than 2-3 results at the same time.
The design of PVs, which was done for the native scripting language, has an impact here: in the native scripting the PV names are resolved at startup, but with KEMI, PVs can be created dynamically and can end up filling the memory. It is not a leak per se, because they are still referenced.
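For example, a minimal KEMI Python3 sketch of the reuse pattern could look like the following. This is only an illustration: the `KSR.ndb_redis.redis_cmd()`/`KSR.ndb_redis.redis_free()` export names and signatures, the server name `srvN`, and the `fetch_field()` helper are assumptions, not something taken from this PR.

```python
import KSR

def fetch_field(hkey, field):
    # Issue the command into the same fixed reply id "r" every time,
    # instead of building a new result name per query.
    if KSR.ndb_redis.redis_cmd("srvN", "HGET " + hkey + " " + field, "r") < 0:
        return None
    # Read the result back through the same $redis(r=>...) PV.
    val = KSR.pv.get("$redis(r=>value)")
    # Optionally release the reply right away; otherwise it is simply
    # replaced the next time a command reuses the id "r".
    KSR.ndb_redis.redis_free("r")
    return val
```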
A similar case was also reported for the sqlops module in #2032, in case you want to read about it.
> Then just reuse the redis result (reply) ids (e.g., static strings); you likely do not need to work with more than 2-3 results at the same time.
I was actually finding an issue with array results. But I can see how your reasoning still works: as long as arrays are bounded in length, there should be a limit at some point.
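So for an array result, something along these lines should keep memory bounded (again just a rough sketch: the `$redis(r=>size)` and `$redis(r=>value[n])` PV names follow the module docs, while the KEMI export names, the server name `srvN`, and the `fetch_all()` helper are assumptions):

```python
import KSR

def fetch_all(hkey):
    if KSR.ndb_redis.redis_cmd("srvN", "HGETALL " + hkey, "r") < 0:
        return []
    items = []
    # The array reply is bounded by the hash size, and reusing id "r"
    # means the next command replaces this reply rather than adding to it.
    n = KSR.pv.get("$redis(r=>size)") or 0
    for i in range(int(n)):
        items.append(KSR.pv.get("$redis(r=>value[" + str(i) + "])"))
    KSR.ndb_redis.redis_free("r")
    return items
```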
One final question then: is the ID space separate between workers? I.e., can worker 1 use ID 1 and worker 2 use ID 1 without worrying about colliding with each other?
The results are kept in private memory; there is no collision between workers. If you want to discuss more, use the sr-users@lists.kamailio.org mailing list.