[fdo-internals] SDF 3.0 FDO Provider Serious Memory Leak

Carl Jokl carl.jokl at keynetix.com
Tue Jul 15 04:45:59 EDT 2008

For clarification, there are two applications that were written using the SDF
FDO provider. I wrote a benchmarking application — a simple one designed just
to give a rough impression of the comparative speed of the different
providers. The aim was to help us decide which providers / storage methods we
wanted to use in our application, as we deal with large data sets and fairly
heavy load where performance matters.

The memory leak was not noticed in this application initially because during
testing it never threw an out-of-memory exception (and still hasn't so far).
The problem occurred in an application written by a colleague, which is used
to migrate our large legacy SDF 2.0 data to SDF 3.0. This application had
problems with out-of-memory exceptions, which were cured by putting in
explicit calls to the garbage collector. Those explicit garbage collection
calls, however, result in a severe performance penalty. For this reason, work
is going on right now to make these calls only after some optimum number of
reads, as part of a general trial of breaking the migration down into smaller
batches of reads and writes to see if that will help.
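The "call the collector only every N reads" idea above can be sketched as
follows. This is purely illustrative — the real applications are .NET, so
this Java sketch only mirrors the shape of the approach; the names
(migrate, readAndWriteFeature, GC_INTERVAL) and the interval value are
assumptions, not taken from the actual migration code.

```java
// Hypothetical sketch: instead of forcing a collection after every read,
// force one only every GC_INTERVAL reads to reduce the performance penalty.
public class BatchedGcMigration {
    // Assumed "optimum number of reads" between forced collections.
    static final int GC_INTERVAL = 10_000;

    static void readAndWriteFeature(int i) {
        // Placeholder for reading one SDF 2.0 record and writing it to SDF 3.0.
    }

    // Returns how many forced collections were triggered, for illustration.
    public static int migrate(int totalRecords) {
        int gcCalls = 0;
        for (int i = 1; i <= totalRecords; i++) {
            readAndWriteFeature(i);
            if (i % GC_INTERVAL == 0) {
                System.gc(); // hint to the JVM collector; analogous to GC.Collect() in .NET
                gcCalls++;
            }
        }
        return gcCalls;
    }
}
```

With 25,000 records and an interval of 10,000, only two forced collections
occur instead of one per record, which is the trade-off the migration work
described above is exploring.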

I went back to my benchmark application and tested it to see if it exhibited
the same memory problems. As I said before, it did not run out of memory even
on a larger data set, but it still used lots of memory, and what is most
worrying in my opinion is that it still held on to a lot even after all the
readers and connections had been closed. I tried running explicit garbage
collection after the benchmark, and it made no discernible difference to the
amount of memory being held after the benchmark completed.

There is a Windows Forms component to the benchmark application, but this
only displays the time taken to complete the test, how many records were
processed, etc., and not any of the actual data from the file. The data in
the file is stored in a list up to a limited batch size, after which point
the data is discarded and a new batch starts (as this is just a benchmark
and nothing useful is being done with the data).
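The batch-and-discard behaviour described above looks roughly like this.
Again a hedged sketch, not the real benchmark code (which is .NET): the
batch size, row type, and method names here are all made up for
illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: rows are accumulated into a list until a batch-size
// cap is reached, then the list is discarded and a new batch begins, so no
// data is retained beyond one batch.
public class BatchDiscardBenchmark {
    static final int BATCH_SIZE = 1_000; // assumed cap, not from the real benchmark

    // Processes totalRows rows and returns how many batches were started.
    public static int countBatches(int totalRows) {
        List<String> batch = new ArrayList<>();
        int batchesStarted = totalRows > 0 ? 1 : 0;
        for (int i = 0; i < totalRows; i++) {
            batch.add("row-" + i); // stand-in for a feature read from the SDF file
            if (batch.size() >= BATCH_SIZE) {
                batch = new ArrayList<>(); // discard the full batch; nothing is done with the data
                if (i < totalRows - 1) {
                    batchesStarted++; // a new batch begins for the remaining rows
                }
            }
        }
        return batchesStarted;
    }
}
```

The point of the pattern is that at most one batch of rows should ever be
reachable at a time — which is why the memory retained after the readers
and connections are closed is surprising.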

The actual migration application, to my knowledge, doesn't use a GUI at all.
