[fdo-internals] SDF 3.0 Memory Leak
traian.stanev at autodesk.com
Fri Jul 18 11:52:10 EDT 2008
It is very unlikely that there is such a large leak in the provider itself. The problem you are seeing is most likely some interplay between the managed wrappers to FDO and your test code. In either case, you would be seeing problems with all providers, not just SDF. Have you tried your test code with the SHP provider, for example? Does it exhibit the same behavior?
> -----Original Message-----
> From: fdo-internals-bounces at lists.osgeo.org [mailto:fdo-internals-
> bounces at lists.osgeo.org] On Behalf Of Carl Jokl
> Sent: Friday, July 18, 2008 11:47 AM
> To: fdo-internals at lists.osgeo.org
> Subject: RE: [fdo-internals] SDF 3.0 Memory Leak
> As requested, I am posting my response email on the thread for continued
> discussion. I have edited the response slightly for company
> confidentiality reasons:
> With all due respect, I am not sure that this will make a difference.
> You talk about memory usage falling to an acceptable level. The problem
> is that the memory usage spirals upwards. It peaks higher than the
> ~340mb value; the ~340mb is the memory usage after the benchmark
> completes and the connection and everything else has been closed. ~340mb
> is still being held on to. It is not a matter of this being acceptable
> for the benchmark; that has run and served its purpose now. The larger
> question is
> the implication for a live deployed system if we use the provider. I
> found no way of getting the 340mb of memory to free up again without
> terminating the process. This might be fine for this little benchmark,
> or even for a one-off migration. The problem comes when using FDO
> with MapGuide Enterprise. If the provider does have a memory leak, then
> the consequence is that memory will be leaked whenever any data is read
> through FDO from SDF. On a busy web server, the effect of this is that
> eventually the web server will run out of memory and crash. The memory
> is only freed up by terminating the process, which in this case would
> mean taking down the web server and starting it up again.
> The MapGuide work being carried out is for a high-profile organisation
> and a very important client of ours.
> It is part of a mission-critical application which is being migrated
> from legacy MapGuide 6 to MapGuide Enterprise. You can imagine then that
> the prospect of a potential memory leak which could theoretically crash
> a server is a bit worrying, to say the least.
> It is not the case that I would be unwilling to admit that there were a
> problem with my code if I really believed that were the case. Right now
> the most important thing is just establishing the cause of the problem.
> This benchmark in itself is just a very small part of a much bigger and
> more important project. If this problem is exhibited with the benchmark,
> it is worrying not in how it affects this individual program but in the
> implications of it for the whole project.
> For that reason it has been suggested that my colleague, when he gets
> back on Tuesday and circumstances permitting, is going to look into the
> source code of the C++ implementation of the SDF provider. He is our
> most experienced and capable developer. I for my part have a lot of work
> to complete relating to the migration and will only be able to look into
> this memory leak problem as circumstances permit.
> I will have to leave this issue for now to work on other things but
> will get
> back to you when I have something new to report.