On Tuesday 11 May 2010, Iñaki Baz Castillo wrote:
Now with
"fetch" support this problem is gone as only the currently
fetched rows have to fit into the memory.
Ok, I understand. Then I should take care in my design. I'm using the
'address' table to identify each client based on its source IP(s),
using the 'grp_id' column, which points to each client's id.
The number of clients is expected to grow considerably, perhaps to
1000-2000, so I could have the same number (or more) of entries in
the 'address' table. Currently I use 12 or 16 MB of PKG_MEM;
could I run into problems?
Hi Iñaki,
you could easily test by experiment whether the number of records is too much
for your current PKG_MEM pool setting. You should be able to observe memory
allocation errors in the logs from the module or the database API. If you
choose something like 1/2 or 2/3 of the maximum setting, you should be safe.
There is also another reason to prefer a partitioned loading mechanism: it
generates less fragmentation in the PKG_MEM pool, so it's easier for the
memory manager to cope with this workload.
Cheers,
Henning