Hi,
On 07/20/2010 11:27 AM, Henning Westerholt wrote:
On Tuesday 20 July 2010, JR Richardson wrote:
[..] So now I'm running out of shmem as well. After I loaded more than 320K records into the pdt database table, I started getting these errors:

  0(17599) ERROR: pdt [pdtree.c:283]: bad parameters
  0(17599) INFO: pdt [pdt.c:490]: no prefix found in [7000011234]
shmem:real_used_size never gets above 1247512 before the problem starts, even though I have plenty of shmem available:

  shmem:total_size = 33554432
  shmem:used_size = 1229472
  shmem:real_used_size = 1247512
  shmem:max_used_size = 1247512
  shmem:free_size = 32306920
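For reference, statistics in this format can be fetched at runtime through the FIFO MI interface, assuming the mi_fifo module is loaded; the control tool name (kamctl here) and its location depend on your installation:

  # dump the shared memory statistics group at runtime
  kamctl fifo get_statistics shmem: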
I increased these parameters in config.h but that did not help:
  /* used only if PKG_MALLOC is defined */
  #define PKG_MEM_POOL_SIZE 64*1024*1024

  /* used if SH_MEM is defined */
  #define SHM_MEM_SIZE 64
Hi JR,
it's not necessary to re-compile the server just to increase the SHM memory pool. You can give this as a config parameter or as a server daemon parameter during startup. Maybe there is also a bug present in the PDT module which somehow causes these problems when you load that many records. I took a short look into the code; there is a variable
#define PDT_MAX_DEPTH 32
This defines the maximum length of the prefixes (DIDs) that can be stored in the internal trees.
which somehow seems to restrict the maximum size of the internal (tree) data structure used for queries. But I have not used this module so far - all I can say is that during my tests with cr we loaded more records without any problems coming from the internal SHM memory manager, so I guess this is a limitation of the module. Maybe you should also try to increase the pool to 512 MB (the memory is still available to the system if it is not actually used).
If there are many records, then the shared memory size must be increased (command line parameter -m, e.g., -m 512). I have used the module with over 1 million records without problems.
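For example, assuming the daemon is started directly (the binary name and paths below are just typical defaults and depend on your installation), the pool can be set at startup like this:

  # start with a 512 MB shared memory pool instead of the compiled-in default
  /usr/local/sbin/kamailio -m 512 -f /usr/local/etc/kamailio/kamailio.cfg

If you start the server through an init script, the same -m 512 can usually be added to the options the script passes to the daemon, so no recompile is needed.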
Regards, Ramona