[fdo-internals] SDF 3.0 FDO Provider Serious Memory Leak
max at geoinova.com
Tue Jul 15 05:05:03 EDT 2008
I'm doing this too, but there are situations where you simply cannot trust
the FDO managed wrappers, nor can you rely on objects being generally reusable.
See trac #356: .NET: IFeatureReader throws errors when passed by value
BTW, the minimize-restore main window trick described works if the FDO connection
runs on the same thread as the main application. I haven't tried running FDO on a
separate dedicated thread, since no one can guarantee its stability, but I
might experiment with this a bit if I find some time.
Kenneth, what FDO classes did you find suitable for reuse?
From: fdo-internals-bounces at lists.osgeo.org
[mailto:fdo-internals-bounces at lists.osgeo.org] On Behalf Of Kenneth
Skovhede, GEOGRAF A/S
Sent: Tuesday, July 15, 2008 10:52
To: FDO Internals Mail List
Subject: Re: [fdo-internals] SDF 3.0 FDO Provider Serious Memory Leak
I can confirm that I have seen the memory problem.
I was able to minimize it by re-using objects rather than creating new ones.
It took a lot of experimentation to figure out which classes were capable of re-use.
Regards, Kenneth Skovhede, GEOGRAF A/S
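For readers of the archive, the hoist-and-reuse pattern Kenneth describes looks roughly like the sketch below. This is an illustration only: the thread does not say which classes proved reusable, and the class, method, and helper names (`ISelect`, `ProcessFeature`, etc.) are assumptions about the OSGeo.FDO managed API, not something confirmed here.

```csharp
// Hypothetical sketch -- FDO class/method names are assumptions.
using OSGeo.FDO.Commands;
using OSGeo.FDO.Commands.Feature;
using OSGeo.FDO.Connections;

void ReadAll(IConnection conn, string className)
{
    // Create the command and supporting objects ONCE, outside the loop...
    ISelect select = (ISelect)conn.CreateCommand(CommandType.CommandType_Select);
    select.SetFeatureClassName(className);

    using (IFeatureReader reader = select.Execute())
    {
        while (reader.ReadNext())
        {
            // ...and reuse them for every row, rather than allocating fresh
            // managed wrappers (each pinning unmanaged memory until it is
            // finalized) on every iteration.
            ProcessFeature(reader); // hypothetical per-feature work
        }
        reader.Close();
    }
}
```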
Carl Jokl skrev:
> For clarification I must explain that there are two applications which
> were written using the SDF FDO provider. I wrote a benchmarking
> application, a simple one designed to give a rough
> impression of the comparative speed differences of the different
> providers. The aim of this was to help us to decide which providers /
> storage methods we wanted to use in our application as we deal with
> large data sets and fairly heavy load where performance matters.
> The memory leak was not noticed in this application initially because
> during the testing it never threw an out of memory exception (and
> still hasn't so far). The problem occurred when an application was
> written by a colleague which is used to migrate our large legacy SDF
> 2.0 data to SDF 3.0. This application had problems with out of memory
> exceptions, which were cured by putting in explicit calls to the
> garbage collector. The explicit calls to garbage collection, though,
> result in a severe performance penalty. For this reason work is going
> on right now to only make these calls after some optimum number
> of reads, as part of a general trial of breaking the migration into
> smaller batches of reads and writes to see if that will help.
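The batching idea above can be sketched as follows. `BatchSize` and `MigrateFeature` are placeholders; the thread does not give the actual "optimum number" or the migration code.

```csharp
const int BatchSize = 1000; // tuning knob -- the optimum must be found by trial
int processed = 0;

while (reader.ReadNext())
{
    MigrateFeature(reader); // hypothetical per-feature migration step

    if (++processed % BatchSize == 0)
    {
        // Collect only at batch boundaries to amortize the GC cost.
        GC.Collect();
        GC.WaitForPendingFinalizers(); // let finalizers release unmanaged FDO memory
        GC.Collect();                  // reclaim objects the finalizers just freed
    }
}
```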
> I went back to my benchmark application and tested it to see if it
> exhibited the same memory problems. As I said before, it did not run
> out of memory even on a larger data set, but it still used lots of memory,
> and what is most worrying in my opinion is that it still held on to a
> lot even after all the readers and connections had been closed. I
> tried running explicit garbage collection after the benchmark,
> and it made no discernible difference to the amount of memory still held
> on to after the benchmark completed.
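One possible explanation for memory surviving explicit collection (a hedged aside, not something established in this thread): the FDO managed classes are thin wrappers over unmanaged C++ objects, so most of the memory is invisible to the .NET collector and is only released deterministically through Dispose(), e.g.:

```csharp
// Sketch: deterministic cleanup with a using-block. Assumes the FDO
// wrappers implement IDisposable, which the thread does not state.
using (IFeatureReader reader = select.Execute())
{
    while (reader.ReadNext())
    {
        /* read properties */
    }
    reader.Close();
} // Dispose() runs here, releasing the unmanaged reader immediately
```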
> There is a windows forms component to the benchmark application but
> this only displays the time taken to complete the test and how many
> records were processed, etc., but not any of the actual data from the
> file. The data in the file is stored in a list up to a limited batch
> size after which point the data is just discarded and a new batch
> starts (as this is just a benchmark and nothing useful is being done with
> the data).
> The actual migration application to my knowledge doesn't use a GUI at all.
fdo-internals mailing list
fdo-internals at lists.osgeo.org