[GRASS-dev] [GRASS GIS] #2033: Moving g.pnmcomp to lib/display to improve render performance of wxGUI
GRASS GIS
trac at osgeo.org
Wed Jul 17 01:59:25 PDT 2013
#2033: Moving g.pnmcomp to lib/display to improve render performance of wxGUI
----------------------------------------------+-----------------------------
Reporter: huhabla | Owner: grass-dev@…
Type: enhancement | Status: new
Priority: major | Milestone: 7.0.0
Component: wxGUI | Version: svn-trunk
Keywords: display, Python, multiprocessing | Platform: All
Cpu: All |
----------------------------------------------+-----------------------------
Comment(by huhabla):
Replying to [comment:1 glynn]:
> Replying to [ticket:2033 huhabla]:
>
> > My question would be if this is also possible with d.rast, d.vect and
other display modules? Hence, moving the code from these modules into the
display library and calling these functions from dedicated wxGUI sub-
processes to speed up the rendering?
>
> Possible? Probably. Sane? No.
>
> Moving the guts of d.rast/d.vect/etc around won't make it run any
faster. If the issue is with the communication of the raster data, there
are faster methods than reading and writing PNM files.
My hope is to speed up the composition by avoiding disk I/O.
> Both the PNG and cairo drivers support reading and writing 32-bpp BMP
files where the raster data is correctly aligned for memory mapping.
Setting GRASS_PNGFILE to a filename with a .bmp suffix selects this
format, and setting GRASS_PNG_MAPPED=TRUE causes the drivers to mmap() the
file rather than using read() and write().
As far as I understand mmap(), it is file-backed and reads/writes the data
from the file on demand into shared memory? An exception is anonymous
mapping, but is that also supported on Windows? And how would we access an
anonymous mmap() from wxPython?
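To make the file-backed case concrete, here is a minimal, self-contained sketch of the mmap() mechanism glynn describes: pages are paged in from disk on demand and writes go through the mapping instead of read()/write(). The BMP file here is a stand-in created by the script itself; in a real run the PNG/cairo driver would produce it via GRASS_PNGFILE=map.bmp and GRASS_PNG_MAPPED=TRUE.

```python
import mmap
import os
import tempfile

# Stand-in for a 32-bpp BMP written by the PNG/cairo driver
# (a real driver run would produce this with
#  GRASS_PNGFILE=map.bmp GRASS_PNG_MAPPED=TRUE).
path = os.path.join(tempfile.mkdtemp(), "map.bmp")
width, height = 4, 2
pixels = bytes(range(width * height * 4))  # 4 bytes per pixel (BGRA)
with open(path, "wb") as f:
    f.write(pixels)

# File-backed mmap: the OS loads pages from disk on demand, and writes
# through the mapping go back to the file -- no explicit read()/write().
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)       # map the whole file
    first_pixel = mm[0:4]               # bytes of pixel (0, 0)
    mm[0:4] = b"\xff\xff\xff\xff"       # write through the mapping
    mm.flush()
    mm.close()

with open(path, "rb") as f:
    assert f.read(4) == b"\xff\xff\xff\xff"
```

Anonymous mappings (mmap with no backing file) are a separate mechanism; Python's mmap module exposes them via `mmap.mmap(-1, length)` on both POSIX and Windows, but sharing such a mapping between unrelated processes is platform-specific, which is exactly the portability question raised above.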
> Once you have d.* commands generating BMP files, it shouldn't be
necessary to add any binary blobs to wxGUI. Compositing should be
perfectly viable within Python using either numpy, PIL or wxPython (having
wxPython perform the compositing during rendering may be able to take
advantage of video hardware).
What do you mean by binary blobs? Binary large objects? As far as I can
see from the wx documentation, there is no way around blobs, since even
numpy arrays must be converted into a bytearray or similar to create a wx
image. Does wxPython actually take advantage of the video hardware? IMHO
we could also implement an OpenCL version of the PNM image composition. In
that case it would be a great advantage to have the images created by
d.rast and d.vect in a shared memory area as well, to avoid disk I/O.
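For reference, the compositing step itself is straightforward in numpy, as glynn suggests. The sketch below is a minimal "over" operator on RGBA arrays, i.e. the kind of work g.pnmcomp does on PNM files; the array shapes and layer names are illustrative, not any GRASS API.

```python
import numpy as np

def composite_over(top, bottom):
    """Alpha-composite `top` over `bottom`; both HxWx4 float arrays in [0, 1]."""
    a_top = top[..., 3:4]
    a_bot = bottom[..., 3:4]
    a_out = a_top + a_bot * (1.0 - a_top)
    # Guard against division by zero where both layers are fully transparent.
    safe = np.where(a_out == 0.0, 1.0, a_out)
    rgb = (top[..., :3] * a_top + bottom[..., :3] * a_bot * (1.0 - a_top)) / safe
    return np.concatenate([rgb, a_out], axis=-1)

# Opaque red "raster" under a half-transparent blue "vector" overlay:
bottom = np.zeros((2, 2, 4)); bottom[..., 0] = 1.0; bottom[..., 3] = 1.0
top = np.zeros((2, 2, 4)); top[..., 2] = 1.0; top[..., 3] = 0.5
out = composite_over(top, bottom)   # each pixel: [0.5, 0.0, 0.5, 1.0]
```

An OpenCL version would be the same per-pixel arithmetic; the real cost to eliminate, as argued above, is getting the input layers into (shared) memory without a disk round trip.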
> Additionally, on X11 (and provided that the cairo library supports it),
the cairo driver supports rendering directly into an X pixmap which is
retained in the server (typically in video memory) after the d.* program
terminates. This has the added advantage that rendering will be performed
using the video hardware.
>
> Setting GRASS_PNGFILE to a filename ending in ".xid" selects this
option; the XID of the pixmap will be written to that file as a
hexadecimal value. The g.cairocomp module can composite these pixmaps
without the image data ever leaving video memory ("g.cairocomp -d ..." can
be used to delete the pixmaps from the server).
>
> The only missing piece of the puzzle is a way to get wxPython to use an
existing pixmap (ideally without pulling it into client memory then
pushing it back out to the server). The cleanest approach would be via
pycairo and wx.lib.wxcairo, which would also allow g.cairocomp to be
eliminated, but that's yet another dependency.
It still puzzles me how to create a shared memory buffer using
multiprocessing.sharedctypes.Array and use it in the C-function calls. In
the current approach I have to use a queue object to transfer the image
data from the child process to its parent, which requires transforming the
image buffer into a Python bytearray. How to access video memory is
another question. Are pipes or similar techniques available for this kind
of operation? Should we wait for hardware that makes no distinction
between video and main memory? Using wx.lib.wxcairo.BitmapFromImageSurface()
seems like a good approach?
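A rough sketch of the sharedctypes idea, under stated assumptions: the child "renders" into a RawArray that both processes map, so the parent reads the pixels directly instead of receiving a bytearray copy through a Queue. The render function here is a stand-in, not a display-library call; a real C function could be handed the same buffer through ctypes.

```python
import ctypes
import numpy as np
from multiprocessing import Process
from multiprocessing.sharedctypes import RawArray

WIDTH, HEIGHT = 4, 3

def render(buf, width, height):
    # Stand-in for a display-library rendering call. A real C function
    # could receive the very same buffer, e.g. (hypothetical name):
    #   libdisplay.d_rast_render(buf, width, height)
    img = np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 4)
    img[...] = 255  # "render": fill the shared RGBA image with white

def demo():
    # One shared, unsynchronized buffer: WIDTH * HEIGHT RGBA pixels.
    shared = RawArray(ctypes.c_ubyte, WIDTH * HEIGHT * 4)
    p = Process(target=render, args=(shared, WIDTH, HEIGHT))
    p.start()
    p.join()
    # The parent sees the child's writes directly -- no Queue, no copy.
    return np.frombuffer(shared, dtype=np.uint8).reshape(HEIGHT, WIDTH, 4)

if __name__ == "__main__":
    image = demo()
```

The resulting numpy view could then be handed to wx (e.g. wx.Bitmap.FromBufferRGBA) without an intermediate bytearray. Video memory is indeed a different matter; a shared buffer like this only removes the process-to-process copy, not the upload to the GPU.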
However, we should focus on approaches that work on Linux/Unix/Mac and
Windows. Using X11-specific features is not meaningful in my humble
opinion. Besides that, the cairo driver does not work with the Windows
GRASS installer yet (missing dependencies, with the exception of Markus
Metz's local installation).
I don't think that calling the d.vect and d.rast functionality as library
functions is insane. :)
Using library functions would allow the same image buffers to be shared
across rendering and composition and passed to the wxGUI parent process
using the multiprocessing queue. This will not increase the actual
rendering speed, but it avoids several I/O operations and allows
I/O-independent parallel rendering in the case of multi-map visualization.
The mmap() approach would not be needed in this case either.
Admittedly, the massive number of d.vect and d.rast options will make it
difficult to design a convenient C-function interface ... but that can be
solved.
In the long term, the current command interface for accessing the wx
monitors is a bit ... let's say ... error-prone. It would be an advantage
to have the d.* modules as Python modules that talk to the monitors using
socket connections or other cross-platform IPC methods, sending serialized
objects that describe the call to the new (d.vect) vector rendering or
(d.rast) raster rendering functions in the display library. In addition,
these modules could call the display library functions themselves for
image rendering without monitors.
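To illustrate what such a serialized render request could look like, here is a minimal sketch using a length-prefixed JSON message over a socket pair. The message schema ("module", "options") and the function names are purely illustrative, not an existing GRASS protocol.

```python
import json
import socket

def send_render_request(sock, module, **options):
    # Serialize a d.*-style call and length-prefix it for framing.
    payload = json.dumps({"module": module, "options": options}).encode()
    sock.sendall(len(payload).to_bytes(4, "big") + payload)

def recv_render_request(sock):
    # Read the 4-byte length header, then the JSON body.
    length = int.from_bytes(sock.recv(4), "big")
    return json.loads(sock.recv(length).decode())

# A socketpair stands in for the wx monitor's listening socket:
client, server = socket.socketpair()
send_render_request(client, "d.vect", map="roads", color="red")
request = recv_render_request(server)
# The monitor would now dispatch request["module"] with request["options"]
# to the corresponding display-library rendering function.
```

The same framing works over TCP sockets on all platforms, which is what makes this more portable than the X11-pixmap route discussed above.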
--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2033#comment:2>
GRASS GIS <http://grass.osgeo.org>