[SR-Users] htable size
Henning Westerholt
henning.westerholt at 1und1.de
Mon Apr 26 13:48:37 CEST 2010
On Monday 26 April 2010, Alex Balashov wrote:
> > I think this is a static setting, as the memory is allocated on startup
> > in shared memory. Have you run into performance problems or memory size
> > constraints with the htable module?
>
> None whatsoever, I was just curious from a theoretical perspective.
>
> The answer lets me know, a priori, whether the htable is optimal for
> applications involving data sets that rapidly expand or contract by
> several factors.
There is probably a point beyond which further increasing the htable size
does not make much sense anymore. It seems the module only supports between
256 and 16384 buckets in the htable. So depending on how well the hash
function distributes the keys, the htable will start to degenerate, up to a
worst-case lookup complexity of O(n). The hash function used is
core_case_hash from hash_func.c.
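
To make that concrete, here is a small standalone C sketch. It is not the
module code and core_case_hash works differently; the bucket and key counts
and both hash functions are made up for illustration. It fills a chained
table with a fixed number of buckets and prints the longest chain: an evenly
distributing hash keeps chains near n/buckets, while a hash that sends every
key to the same bucket turns each lookup into an O(n) list walk.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUCKETS 256   /* fixed up front, like the htable size at startup */
#define KEYS    65536

struct entry {
    char key[16];
    struct entry *next;
};

static struct entry *table[BUCKETS];

/* deliberately simplistic string hash (illustrative only) */
static unsigned good_hash(const char *s)
{
    unsigned h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

/* pathological "hash": every key lands in bucket 0 */
static unsigned bad_hash(const char *s)
{
    (void)s;
    return 0;
}

/* insert KEYS entries, return the longest chain, then clean up */
static int max_chain(unsigned (*hash)(const char *))
{
    int i, len, max = 0;
    char key[16];
    struct entry *e, *next;

    for (i = 0; i < KEYS; i++) {
        snprintf(key, sizeof(key), "key-%d", i);
        e = malloc(sizeof(*e));
        strcpy(e->key, key);
        e->next = table[hash(key)];
        table[hash(key)] = e;
    }
    for (i = 0; i < BUCKETS; i++) {
        len = 0;
        for (e = table[i]; e; e = e->next)
            len++;
        if (len > max)
            max = len;
        /* free the chain so the table can be reused */
        for (e = table[i]; e; e = next) {
            next = e->next;
            free(e);
        }
        table[i] = NULL;
    }
    return max;
}

int main(void)
{
    printf("even distribution: longest chain = %d (~%d on average)\n",
           max_chain(good_hash), KEYS / BUCKETS);
    printf("degenerate hash:   longest chain = %d (O(n) lookups)\n",
           max_chain(bad_hash));
    return 0;
}

The same reasoning explains why more buckets stop helping at some point:
once chains are already short, extra buckets mostly cost shared memory
without noticeably reducing lookup work.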
Regards,
Henning