[SAC] System Roles

John Graham johng at telascience.org
Mon Jun 26 04:51:22 EDT 2006


Frank,

Sounds like we should get 3 blades running to start...
"very secure (LDAP), fairly secure and well backed up (SVN, Plone), and
stuff that can be a bit more loosey-goosey (demo stack, buildbot)."
When we figure out what's wrong with my DHCP and kickstart server, I can
get 3 more blades going for OSGeo.

198.202.74.211 has a few related projects on it:

osgeo.telascience.org      OSGeo Plone instance; we can move the DNS for
                           osgeo.telascience.org to one of the FC4 blades
                           and use this one in the buildbot
ossim.telascience.org      OSSIM sample data
onearth.telascience.org    1.3 TB Landsat7 global dataset from JPL
                           (onearth.jpl.nasa.gov)

and some other fun!
http://198.202.74.211/WX/  APRS Weather station posting data from SDSU
podcast.planetwalk.org      I am playing around with ATMediaFile

John

Frank Warmerdam wrote:

> Folks,
>
> I have been trying to keep track of what systems are what at:
>
>   http://wiki.osgeo.org/index.php/SAC_Service_Status
>
> HJG, is 198.202.74.211 a Solaris system?  Is it also used for a bunch
> of non-OSGeo telascience work?  There was some uncertainty between
> Howard and Norman about how it is set up.  If you could update the wiki
> page I would appreciate it.
>
> I discussed what should go on what system a bit with nhv and hobu on IRC,
> and there seemed to be general agreement that:
>
>  o We shouldn't be putting much else on the LDAP master.
>  o We likely don't want to set up too much complicated stuff on Solaris.
>  o We need to have svn on the same system as an Apache server, since
>    svn is normally (and best) run under Apache.
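
For the svn-under-apache piece, I'd expect the httpd.conf bits to look
roughly like this (untested, and the hostname/DN/paths are just
placeholders for whatever we actually set up):

  LoadModule dav_svn_module modules/mod_dav_svn.so

  <Location /svn>
    DAV svn
    # one directory per repository, so we can host several svn instances
    SVNParentPath /var/www/svn
    # authenticate against the LDAP master (URL and DN are guesses)
    AuthType Basic
    AuthName "OSGeo Subversion"
    AuthLDAPURL "ldap://ldap.osgeo.org/ou=people,dc=osgeo,dc=org?uid"
    require valid-user
  </Location>

mod_dav_svn serves the repositories straight over HTTP/WebDAV, which is
why svn really wants to live next to apache.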
>
> So, I am thinking that we would use the Fedora Core 4 general purpose
> system at 198.202.74.218 for svn/apache.  One downside is that there is
> already quite a bit of OSGeo "stack" software installed on this system,
> mostly by John (HJG) I think, presumably with the intent that it would
> be our "demo stack" system.
>
> Well, the "demo stack" system really needs to provide shell access to a
> fairly broad number of developers (i.e. those in the shell group), while
> I would be *inclined* to think that our SVN server ought to be more
> secure and only provide shell access to those in the administrator
> group.  Thus a bunch of setup work would be wasted.
>
> I would note that HJG is working on setting up an FC4 buildbot master
> system, though I believe he has run into some problems with DHCP.
>
> In the quite near future, I can see us wanting to deploy:
>
>  o An LDAP server (likely on its own system, for security reasons).
>  o A "demo stack" system with lots of our packages built, a postgis
>    database, etc. that we could use for stuff like the user spatial
>    database and wms/wcs access to some geodata committee datasets.
>  o An SVN server (several svn instances, actually).
>  o A plone system (though I think we need to have some serious
>    discussions within and outside of SAC about whether we are going to
>    try and use plone as the CMS for OSGeo.org).
>  o A buildbot master system.
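
On the buildbot master: its master.cfg is just a Python file, so once the
FC4 box is up, something like this is roughly all it takes to get one
builder going (a sketch from memory against the 0.7-era API; the bot
name, password and svn URL are made up):

  # master.cfg -- executed by the buildbot master at startup
  from buildbot.process import factory, step

  c = BuildmasterConfig = {}
  c['bots'] = [('fc4-demo', 'changeme')]   # (slave name, password)
  c['slavePortnum'] = 9989                 # port the build slaves connect to
  c['sources'] = []                        # no change sources hooked up yet

  # check out from svn and run make
  f = factory.BuildFactory([
      factory.s(step.SVN, svnurl='https://svn.osgeo.org/gdal/trunk'),
      factory.s(step.Compile, command=['make']),
  ])
  c['builders'] = [{'name': 'gdal-fc4', 'slavename': 'fc4-demo',
                    'builddir': 'gdal-fc4', 'factory': f}]

Each project we want built just becomes another factory/builder pair in
that file.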
>
> Some of these things can be put on one system, but I think we need to
> distinguish between services that need to be very secure (LDAP),
> fairly secure and well backed up (SVN, Plone), and stuff that can be a
> bit more loosey-goosey (demo stack, buildbot).
>
> One thing I am wondering is if there is any sort of virtualization
> technology we ought to be considering, to keep distinct system
> configurations for different services without needing a lot of
> physical systems.
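
On the virtualization question: Xen is probably the first thing to try,
since FC4 already ships xen kernels and tools.  Each service gets its own
guest ("domU") described by a small config file, which Xen actually
parses as Python.  Something like this (all values invented):

  # /etc/xen/svn -- one Xen guest per service
  kernel = "/boot/vmlinuz-2.6-xenU"
  name   = "svn"
  memory = 512                           # MB of RAM for this guest
  disk   = ['file:/srv/xen/svn.img,sda1,w']
  vif    = ['bridge=xenbr0']             # bridged onto the blade's NIC
  root   = "/dev/sda1 ro"

Then "xm create svn" boots it, and ldap, svn and the demo stack could
each be a separate "system" on one physical blade.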
>
> Best regards,




