[OpenDroneMap-users] question about resolution and camera orientation

Stephen Mather stephen at smathermather.com
Wed Feb 25 12:17:06 PST 2015


Ah yes, that Vagrantfile would explain why you aren't seeing threading! It
is very conservative. Try something more like this (you'll need to adjust
the RAM and total number of processors to fit your machine):

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're
# doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Every Vagrant virtual environment requires a box to build from.
  config.vm.box = "ubuntu/trusty64"

  config.vm.synced_folder "../vodm_data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", "20480"]
    vb.cpus = 6
    vb.customize ["modifyvm", :id, "--usb", "on"]
  end
end
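
If the VM is already built, running "vagrant reload" should restart it and
apply the new memory and CPU settings:

  vagrant reload

(The numbers above are just a starting point -- leave some RAM and a core or
two free for the host.)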


On Wed, Feb 25, 2015 at 11:33 AM, Anna Petrášová <kratochanna at gmail.com>
wrote:

>
>
> On Wed, Feb 25, 2015 at 11:02 AM, Stephen Mather <
> stephen at smathermather.com> wrote:
>
>> Hi Anna,
>>
>> Matching should be parallelized. What are you running it on
>> (hardware/hosting, OS)? Only Bundler should be single-threaded, if memory
>> serves (not that memory always serves). My general recommendation is to
>> throw as many resources at it as possible. All the test datasets will
>> process in a couple of hours or less on a 32GB RAM DigitalOcean machine,
>> which costs about $2 to run. I also have a testing workstation with 12
>> processors, 32GB of RAM, and an SSD RAID; that is a pleasant machine to
>> run datasets on. I know many have had success with larger Amazon instances
>> as well -- perhaps someone on the list can speak to those experiences.
>>
>> As a point of contrast, running on a VM on my ultra-book is an exercise
>> in frustration. :)
>>
>
> I am currently running it on my Ubuntu laptop (8 cores, 4GB memory) using
> the Vagrantfile provided, and it's really frustrating. I plan to move it to
> a more powerful machine, but it was more convenient for me to test it on my
> own machine first.
>
>>
>> If it is a dataset you can share, I can run some tests and see if I am
>> seeing similar issues.
>>
>
> I am not sure if the data can be shared. I will first try to run it on a
> better machine and see if that helps.
>
> Thanks for your help!
>
> Anna
>
>
>> Cheers,
>> Best,
>> Steve
>>
>>
>>
>> On Tue, Feb 24, 2015 at 10:58 PM, Anna Petrášová <kratochanna at gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, Feb 24, 2015 at 10:23 PM, Stephen Mather <
>>> stephen at smathermather.com> wrote:
>>>
>>>> Hi Anna,
>>>>
>>>> Yes "--resize-to" will typically help get you a denser point cloud. I
>>>> have found --resize-to 3000 to be a useful default, though the project
>>>> currently defaults to 2400 to avoid swamping people's machines.
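>>>>
>>>> For example, something like this from inside the image directory
>>>> (assuming you are launching with the run.pl script; adjust to however you
>>>> actually invoke it):
>>>>
>>>>   ./run.pl --resize-to 3000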
>>>>
>>>
>>> I am currently experimenting with larger images, but it takes too long;
>>> just the matching takes forever. I wonder why the matching is not running
>>> in parallel: is it not implemented yet, is it hard to implement, or does
>>> it get confused because I am running it in VirtualBox?
>>>
>>>>
>>>> There's a lot to be done on the documentation side. Also, while the
>>>> meshing and texturing portions of the codebase are novel, some other
>>>> portions (Bundler, CMVS, PMVS) are projects unto themselves, so it will
>>>> take a bit of exploration to figure out optimal parameters. As we (as a
>>>> community) discover those optimal parameters, we can improve the sane
>>>> defaults and document good alternatives.
>>>>
>>>
>>> I was looking for better documentation for these projects, but I
>>> didn't find anything helpful. I wish I knew more about the actual
>>> algorithms, but that's not my background.
>>>
>>>>
>>>> (That said, there is plenty of documentation still to be written for
>>>> things I or others already know, so that is a priority.)
>>>>
>>>> As to XYZ, yaw, pitch, and roll, OpenDroneMap does not yet take those
>>>> into account. I would be excited to see this, perhaps even using SFCGAL
>>>> or another 3D library for doing proper image footprints a la:
>>>>
>>>>
>>>> https://smathermather.wordpress.com/2013/12/15/uas-drone-footprint-geometries-calculated-in-postgis-with-sfcgal-for-real-this-time/
>>>>
>>>
>>> Great blog!
>>>
>>>>
>>>>
>>>> This would substantially reduce the processing time on the "match"
>>>> step, which is a decent proportion of the current processing time.
>>>>
>>>> Also, I haven't written any documentation on it yet, but the
>>>> bundle.out file in reconstruction-with-image-size-2400 can be loaded
>>>> in Meshlab to check camera positions and determine whether they are
>>>> sane. There's a YouTube video on that somewhere that I'll dig up.
>>>>
>>>
>>> Right, I noticed that.
>>>
>>> Best,
>>>
>>> Anna
>>>
>>>>
>>>> Cheers!
>>>> Best,
>>>> Steve
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Feb 24, 2015 at 1:52 PM, Anna Petrášová <kratochanna at gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Feb 24, 2015 at 1:32 PM, Andy Wilde <awilde76 at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Anna,
>>>>>>
>>>>>> For the resolution issues, please read the wiki, which covers some of
>>>>>> the parameters. That documentation is still under development and will
>>>>>> improve.
>>>>>>
>>>>>
>>>>> Yes, I looked at the wiki, but I couldn't find what I was looking for.
>>>>> I am still experimenting with different parameters so I will see. I will
>>>>> definitely keep checking the wiki.
>>>>>
>>>>>>
>>>>>> For the X, Y, Z values, the system will interpret GPS information if
>>>>>> it is in the image data, and I would expect it to interpret X, Y, Z
>>>>>> values in much the same way. For the roll and yaw, I do not think it
>>>>>> recognises these as yet.
>>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Anna
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Feb 23, 2015 at 9:28 PM, Anna Petrášová <
>>>>>> kratochanna at gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am trying to run OpenDroneMap on my 200 images, and better
>>>>>>> documentation of the different options would help me a lot. (I know,
>>>>>>> writing documentation is always painful.) So I have a couple of questions.
>>>>>>>
>>>>>>> 1. How do I increase the resolution? I need a point cloud from
>>>>>>> which I can construct a high-resolution raster DSM. Would a higher
>>>>>>> --resize-to do the job, or a different parameter?
>>>>>>>
>>>>>>> 2. I have the external orientation of the camera (x, y, z, yaw,
>>>>>>> pitch, roll); is there any way the algorithms could take advantage of
>>>>>>> this information? I can imagine using these as initial values for the
>>>>>>> algorithm.
>>>>>>>
>>>>>>> Thank you for this great software!
>>>>>>>
>>>>>>> Anna
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> OpenDroneMap-users mailing list
>>>>> OpenDroneMap-users at lists.osgeo.org
>>>>> http://lists.osgeo.org/cgi-bin/mailman/listinfo/opendronemap-users
>>>>>
>>>>>
>>>>
>>>
>>
>