[Qgis-developer] memory layer performance while adding many features?

Matthias Kuhn matthias at opengis.ch
Wed Aug 5 14:33:39 PDT 2015


Hi all,

in general I think we should advise using the layer.addFeature() method
with a layer.commitChanges() at the end. It has the advantage that it
does batch updates on the provider automatically, and proper signals are
sent so the UI (canvas, attribute table...) can reflect the changes immediately.
The layer.dataProvider().addFeatures() method leaves all of this to the
developer.
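
To make this concrete, here is a minimal sketch of the layer-level pattern combined with Denis's "commit 1000 by 1000" suggestion. The QGIS-specific part is shown only as comments (names like `layer`, `parse_gml` and `path` are placeholders, not from the thread); the generic batching helper is plain Python:

```python
def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly shorter, batch
        yield batch

# In a plugin it might be used roughly like this (untested sketch,
# assuming the QgsVectorLayer editing API described above):
#
#   layer.startEditing()
#   for batch in chunked(parse_gml(path), 1000):
#       for feat in batch:
#           layer.addFeature(feat)
#       layer.commitChanges()   # flush 1000 features to the provider at once
#       layer.startEditing()    # reopen the edit buffer for the next batch
```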

Cheers,
Matthias

On 08/05/2015 08:28 PM, Raymond Nijssen wrote:
> Hi Denis,
>
> Adding all features at once, or 1000 by 1000, speeds things up
> enormously. It now takes only a few minutes, at a steady speed of 480
> fpm, whereas adding the same dataset 1 by 1 took about 5 hours.
>
> Thanks for pointing this out!
>
> Raymond
>
> On 30-07-15 09:34, Denis Rouzaud wrote:
>> From your code [0], I would see 2 options:
>>
>> * add features all at once on the provider using
>> QgsVectorDataProvider::addFeatures (the method you currently use)
>>
>> * work on the layer level rather than the provider, which I believe is
>> the recommended way for plugins. Make the layer editable, add features
>> to the edit buffer, and commit them all at once, or maybe 1000 by
>> 1000.
>>
>> I hope this helps.
>>
>>
>>
>>
>> [0]
>> https://github.com/opengeogroep/AERIUS-QGIS-plugins/blob/master/ImaerReader/imaer_reader.py#L228
>>
>>
>>
>> On 07/30/2015 09:20 AM, Raymond Nijssen wrote:
>>> Hi Denis,
>>>
>>> The code is in the ImaerReader plugin in the qgis repo and in github:
>>>
>>> https://github.com/opengeogroep/AERIUS-QGIS-plugins
>>>
>>> I'm adding the features one by one.
>>>
>>> Regards,
>>> Raymond
>>>
>>> On 30-07-15 08:46, Denis Rouzaud wrote:
>>>> Hi Raymond,
>>>>
>>>> Can you show us the code doing this?
>>>> Do you commit features one by one or all at once?
>>>>
>>>> Best wishes,
>>>> Denis
>>>>
>>>> On 07/29/2015 09:04 AM, Raymond Nijssen wrote:
>>>>> Dear developers, ;)
>>>>>
>>>>> A plugin of mine imports data from a GML file into a memory layer.
>>>>> It works fine for tiny GML files, but takes forever on huge ones.
>>>>> That may sound plausible, but the relation is not linear.
>>>>>
>>>>> So I did some tests and found out that reading and parsing the GML
>>>>> and creating the features always goes at the same speed of about 400
>>>>> features per second (fps). But when adding these to my memory layer,
>>>>> the process slows down tremendously.
>>>>>
>>>>> I output the fps for every 1000 features and it gave me this graph:
>>>>>
>>>>> http://terglobo.nl/downloads/memory-layer-performance.png
>>>>>
>>>>> At the end of the importing process I'm processing just 3 features
>>>>> per second.
>>>>>
>>>>> Is this expected behaviour? Is a memory layer not meant for something
>>>>> like this? Or is this a bug?
>>>>>
>>>>> In the source code I found that the index is updated for every
>>>>> feature added to a memory layer. Could that be the inefficiency?
>>>>>
>>>>> Hoping anyone can explain!
>>>>>
>>>>> Regards,
>>>>> Raymond
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>



