[GeoNode-users] Hint for Hardware-Specifications / Performance
Stefan Steiniger
sstein at geo.uzh.ch
Tue Dec 8 07:19:06 PST 2015
Hi Florian,
this answer probably won't help much in finding an optimal solution, but...
So I have Postgres and GeoNode (2.0) on separate servers. GeoNode runs
on a Dell R420 with 6 physical / 12 hyperthreaded cores, 32 GB RAM, and
2 TB in RAID 1. However, this server also runs other web services, so
GeoNode is installed in a VirtualBox VM with 6 cores, 12 GB RAM, and a
600 GB disk assigned. Postgres runs on its own server, not in a VM: a
Dell R520, also 6/12 cores and 32 GB RAM, with 4 TB of storage (but it
shares load with some VMs). GeoServer is in the same VM as GeoNode.
Now, here are my experiences:
- I once held a seminar with about 10 parallel requests to the general
GeoNode layer list overview. For some participants it took a minute to
load (though the issue may also have been the classroom network, which
is really slow on some days).
- I have one image mosaic (20+ files) at 16 cm resolution covering
40x40 km; rendering takes some minutes when I want to view the layer
and/or zoom in and out.
- I have a few big shapefiles, around 400 MB each, for which rendering
also takes quite some time (15-25 secs).
- So, based on that, I guess that GeoServer (rendering) is my
bottleneck. However, I have not done any optimisation of GeoServer yet
(I think working on the tile caching would help greatly).
- The image mosaic needs to be on the machine where GeoServer runs, so
having it in a VM has some drawbacks if I think about adding plenty
more high-res imagery, as space is limited (meaning: for some reason
ECW did not work, and I had to convert it to TIFF images, with one
mosaic ending up at 220 GB, so it is clear why rendering takes that much
time).
- That rendering of shapefiles takes time can also be explained by the
time needed to transfer the data from the Postgres server to the
GeoServer machine over the network.
- In general, my Postgres server runs idle and does not have much to do.
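On the 220 GB TIFF mosaic: this is a case where GDAL's standard raster optimizations usually pay off. As a sketch (the `mosaic/` directory and `_opt` suffix are placeholders, not my actual setup), rewriting each granule as an internally tiled, compressed GeoTIFF and adding overviews lets GeoServer read only the blocks and zoom level it needs:

```shell
# Sketch with placeholder paths: rewrite each striped TIFF as an
# internally tiled, DEFLATE-compressed GeoTIFF, then add overviews.
for f in mosaic/*.tif; do
  out="${f%.tif}_opt.tif"
  # Internal tiling + compression so partial reads are cheap
  gdal_translate -of GTiff -co TILED=YES -co COMPRESS=DEFLATE "$f" "$out"
  # Downsampled overview levels for fast zoomed-out rendering
  gdaladdo -r average "$out" 2 4 8 16 32
done
```

Whether DEFLATE or another compression is the right trade-off between disk space and decode time depends on the imagery; the tiling and overviews are the part that usually matters for rendering speed.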
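On the tile-caching point: GeoServer ships with an embedded GeoWebCache, and its direct WMS integration can be exercised by adding tiled=true to a gridset-aligned GetMap request. A hypothetical request (hostname, layer name, and bbox below are placeholders, not my instance):

```shell
# Hypothetical host, layer, and bbox; adjust to your own GeoServer.
base="http://geoserver.example.org/geoserver/wms"
params="service=WMS&version=1.1.1&request=GetMap&layers=geonode:some_layer"
params="$params&styles=&bbox=-71.0,-33.5,-70.5,-33.0&width=256&height=256"
params="$params&srs=EPSG:4326&format=image/png&tiled=true"
# tiled=true lets the embedded GeoWebCache answer from its tile cache
curl -fsS -o tile.png "$base?$params" || echo "request failed (placeholder host)"
```

Once tiles are cached, repeated pans and zooms over the same area should stop hitting the renderer at all.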
So based on that, I would probably suggest having everything on one
server (32 GB and 6/12 cores seems enough) with some speedy disks, to
avoid the network traffic, and with GeoServer optimized. However, if
you want to upgrade GeoNode later, you cannot simply take a backup copy
of the VM and play around with it, as I can do at the moment. If you
want to have it on separate servers, then: (a) check what your network
limits are, and (b) consider VMs (it's still faster internally ;)
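For (a), a quick way to check the link between the GeoServer and Postgres machines is a raw throughput measurement, e.g. with iperf3 (the hostname below is a placeholder; run `iperf3 -s` on the Postgres box first). The back-of-the-envelope math for the 400 MB shapefiles above:

```shell
# Placeholder hostname; start "iperf3 -s" on the Postgres machine first.
iperf3 -c postgres-host -t 10
# Lower bound on transfer time for a 400 MB layer over a 1 Gbit/s link:
# 400 MB * 8 bits/byte / 1000 Mbit/s = 3.2 s of pure wire time.
awk 'BEGIN { printf "%.1f s\n", 400 * 8 / 1000 }'
```

If iperf3 reports much less than the nominal link speed, the network, not GeoServer, may be eating part of those 15-25 second render times.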
As I do not have many visitors a day yet (i.e. mostly robots checking
for what's new), it's OK for me.
cheers,
stefan
PS: my instance would be http://cedeusdata.geosteiniger.cl/
On 08.12.15 at 05:58, Florian Hoedt wrote:
> Hello List,
> We plan to use GeoNode as our enterprise (university) SDI. I have to evaluate which hardware specs we need to get enough performance for our use case. Here are some facts about our current data / user / usage scenario:
>
> Our peak scenarios have ~60-80 concurrent users, which:
> use quite large rasters (aerials at 20 cm resolution, sat images) as basemaps;
> use WFS / WCS as input for geoprocessing services (the geoprocessing runs locally, not on the server).
>
> What experience is there regarding RAM and CPU requirements? Which combination of those would work nicely for our peak scenario?
>
> My idea is to split each piece of software onto a different machine, like:
> 1. Geonode
> 2. GeoServer (the most taxing, afaik)
> 3. Postgres
>
> Are there known issues / limitations doing so?
>
> Thank you
> best regards
> Florian Hoedt
>
> B.Sc. Florian Hoedt
> Hochschule Ostwestfalen-Lippe / Campus Höxter
> FB 9 Landschaftsarchitektur und Umweltplanung
> An der Wilhelmshöhe 44
> 37671 Höxter
> Tel.: 05271-687-7478
> E-Mail: florian.hoedt at hs-owl.de
> www.hs-owl.de/fb9
> _______________________________________________
> geonode-users mailing list
> geonode-users at lists.osgeo.org
> http://lists.osgeo.org/mailman/listinfo/geonode-users
>