[GRASS-dev] [GRASS GIS] #2033: Moving g.pnmcomp to lib/display to improve render performance of wxGUI
GRASS GIS
trac at osgeo.org
Wed Jul 17 16:23:26 PDT 2013
#2033: Moving g.pnmcomp to lib/display to improve render performance of wxGUI
----------------------------------------------+-----------------------------
  Reporter:  huhabla                          |       Owner:  grass-dev@…
      Type:  enhancement                      |      Status:  new
  Priority:  major                            |   Milestone:  7.0.0
 Component:  wxGUI                            |     Version:  svn-trunk
  Keywords:  display, Python, multiprocessing |    Platform:  All
       Cpu:  All                              |
----------------------------------------------+-----------------------------
Comment(by glynn):
Replying to [comment:2 huhabla]:
> I hope to speed up the composition by avoiding disc I/O.
If one process writes a file and another immediately reads it, it doesn't
necessarily involve "disc" I/O.
The OS caches disc blocks in RAM. write() completes as soon as the data
has been copied to the cache (the kernel will copy it to disc on its own
schedule), and read() reads the data from the cache (disc access is only
needed for data which isn't already cached).
The kernel will use all "free" memory for the disc cache. So unless memory
pressure is high, the files written by the display driver will remain in
the cache for quite a while.
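To make that concrete, here is a minimal, untested sketch of one process
writing a PPM-style file and another reading it straight back; the path and
dimensions are arbitrary, and with low memory pressure the read is served
entirely from the cache:

    # Untested sketch: a worker process writes an image file and the parent
    # reads it back immediately. The read normally copies the data out of
    # the kernel's page cache; no physical disc access happens unless
    # memory pressure forces the pages out. Path and size are examples only.
    import os
    import tempfile
    from multiprocessing import Process

    def render_worker(path, width=640, height=480):
        # Stand-in for a d.* command: write a binary PPM (P6) header
        # followed by width*height black RGB pixels.
        with open(path, "wb") as f:
            f.write(("P6\n%d %d\n255\n" % (width, height)).encode("ascii"))
            f.write(b"\x00" * (width * height * 3))

    if __name__ == "__main__":
        path = os.path.join(tempfile.gettempdir(), "layer1.ppm")
        p = Process(target=render_worker, args=(path,))
        p.start()
        p.join()
        with open(path, "rb") as f:        # normally served from the cache
            data = f.read()
        print(len(data), "bytes read")
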
> As far as I understand mmap(), it is file-backed and reads/writes the
> data from the file on demand into the shared memory? An exception is
> anonymous mapping, but is this also supported on Windows? How can we
> access the anonymous mmap() from wxPython?
Anonymous mmap() isn't relevant here. mmap() is file-backed, but this
doesn't affect the time required to read and write the file unless memory
pressure is so high that the size of the file exceeds the amount of free
memory. As long as sufficient free memory is available, neither writing
nor reading will block waiting for disc I/O.
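For illustration, a mapped file can also be handed to numpy without any
extra copy; a rough, untested sketch (the PPM header handling is
deliberately simplified, and the path is just an example):

    # Untested sketch: map a rendered PPM file and view its pixels from
    # numpy without copying. Pages are faulted in from the cache (or disc)
    # on demand. Assumes a P6 file with a three-line header.
    import mmap
    import numpy as np

    with open("/tmp/layer1.ppm", "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        offset = 0
        for _ in range(3):                  # skip the three P6 header lines
            offset = mm.find(b"\n", offset) + 1
        pixels = np.frombuffer(mm, dtype=np.uint8, offset=offset)
        print(pixels.size, "bytes mapped")
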
> > Once you have d.* commands generating BMP files, it shouldn't be
> > necessary to add any binary blobs to wxGUI. Compositing should be
> > perfectly viable within Python using either numpy, PIL or wxPython
> > (having wxPython perform the compositing during rendering may be able
> > to take advantage of video hardware).
>
> What do you mean by binary blobs? Binary large objects?
Machine code.
IOW, it shouldn't be necessary to move g.pnmcomp into a library (DLL/DSO)
which is accessed from the wxGUI process. The replacement can just be
written in Python, using existing Python modules (numpy, PIL or wxPython)
to get reasonable performance.
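As a rough, untested sketch of what such a pure-Python replacement might
look like with numpy (the plain "over" operator, array names and shapes
are only illustrative):

    # Untested sketch: straightforward "over" compositing of RGBA numpy
    # arrays; the arrays would come from the rendered layer files.
    import numpy as np

    def composite_over(dst, src):
        """Composite src over dst; both uint8 arrays of shape (h, w, 4)."""
        sa = src[..., 3:4].astype(np.float32) / 255.0
        da = dst[..., 3:4].astype(np.float32) / 255.0
        out_a = sa + da * (1.0 - sa)
        # Avoid division by zero where the result is fully transparent.
        safe_a = np.where(out_a == 0.0, 1.0, out_a)
        out_rgb = (src[..., :3] * sa + dst[..., :3] * da * (1.0 - sa)) / safe_a
        out = np.empty_like(dst)
        out[..., :3] = out_rgb.astype(np.uint8)
        out[..., 3] = (out_a[..., 0] * 255.0).astype(np.uint8)
        return out

    # Example: composite one layer over a background, bottom-up.
    h, w = 480, 640
    background = np.zeros((h, w, 4), dtype=np.uint8)
    layer = np.zeros((h, w, 4), dtype=np.uint8)
    result = composite_over(background, layer)
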
> Does wxPython take advantage of the video hardware?
wxWidgets is a cross-platform wrapper around existing toolkits: Windows
GDI, GTK/GDK, etc. The underlying toolkit will use the video hardware, but
wxWidgets may insist upon inserting itself between the data and the
hardware.
> IMHO we can also implement an OpenCL version of the PNM image
> composition.
This won't help much unless you can persuade wxWidgets/wxPython to use the
composed image directly. If it insists upon pulling the data from video
memory so that it can pass it to a function which just pushes it straight
back again, it would probably be quicker to perform the compositing on the
CPU.
> It still puzzles me how to create a shared memory buffer using
> multiprocessing.sharedctypes.Array and use it in the C function calls.
I'm not sufficiently familiar with the multiprocessing module to answer
this question. However, if it turns out to be desirable (and I don't
actually think it will), it wouldn't be that hard to modify the PNG/cairo
drivers to write into a SysV-IPC shared memory segment (shmat() etc).
But I don't think that will offer any advantages over mmap()d files, and
it's certainly at a disadvantage compared to GPU rendering into shared
video memory.
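For what it's worth, the idiom usually shown for sharedctypes is to
allocate a RawArray, wrap it with numpy in each process, and, if a C call
were really needed, cast it to a ctypes pointer; an untested sketch with
purely illustrative names:

    # Untested sketch: one shared image buffer, viewed from numpy in both
    # the worker and the parent, with no serialisation of pixel data.
    import ctypes
    import numpy as np
    from multiprocessing import Process
    from multiprocessing.sharedctypes import RawArray

    WIDTH, HEIGHT = 640, 480

    def worker(shared, width, height):
        # Re-wrap the shared buffer as a numpy array; no copy is made.
        img = np.frombuffer(shared, dtype=np.uint8).reshape(height, width, 4)
        img[...] = 128              # "render" into the shared buffer

    if __name__ == "__main__":
        shared = RawArray(ctypes.c_ubyte, WIDTH * HEIGHT * 4)
        p = Process(target=worker, args=(shared, WIDTH, HEIGHT))
        p.start()
        p.join()
        # The parent sees the worker's pixels directly.
        img = np.frombuffer(shared, dtype=np.uint8).reshape(HEIGHT, WIDTH, 4)
        print(img[0, 0])
        # A raw pointer for a hypothetical C compositing function could be
        # obtained with ctypes.cast(shared, ctypes.POINTER(ctypes.c_ubyte)).
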
> Should we wait for hardware that has no distinction between video and
> main memory?
X11 will always make a distinction between server memory and client
memory, as those may be on different physical systems.
> Using pycairo.BitmapFromImageSurface() seems to be a good approach?
It may be the best that you're going to get. GDK can create a
GdkPixmap from an XID (gdk_pixmap_foreign_new), and this functionality is
exposed by PyGTK. But the higher-level libraries all seem to insist upon
creating the pixmap themselves from data which is in client memory. Or at
least, if the functionality is available, it doesn't seem to be
documented.
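A rough, untested sketch of that route, assuming the function meant above
is wx.lib.wxcairo.BitmapFromImageSurface rather than anything in pycairo
itself:

    # Untested sketch: draw with cairo and hand the surface to wxPython.
    import cairo
    import wx
    import wx.lib.wxcairo as wxcairo

    app = wx.App(False)                 # a wx.App must exist first

    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 640, 480)
    ctx = cairo.Context(surface)
    ctx.set_source_rgba(0.2, 0.4, 0.8, 1.0)
    ctx.paint()

    # Converts the surface to a wx.Bitmap; this still copies the pixel
    # data from client memory, which is the limitation discussed above.
    bitmap = wxcairo.BitmapFromImageSurface(surface)
    print(bitmap.GetWidth(), bitmap.GetHeight())
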
> I don't think that calling the d.vect and d.rast functionality as
> library functions is insane. :)
Eliminating the process boundary for no reason other than to avoid having
to figure out inter-process communication is not sane.
> Using library functions will allow using the same image buffers across
> rendering and composition, which can be passed to the wxGUI parent
> process using the multiprocessing queue.
Using files will allow using the same "image buffers" (i.e. the kernel's
disc cache).
> Well, the massive number of d.vect and d.rast options will make it
> difficult to design a convenient C-function interface ... but this can
> be solved.
Solving inter-process communication is likely to be a lot simpler, and the
end result will be nicer.
--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2033#comment:3>
GRASS GIS <http://grass.osgeo.org>