[SAC] OSGeo7 Server Config Quote
Chris Giorgi
chrisgiorgi at gmail.com
Fri Feb 16 13:59:31 PST 2018
I concur with Martin: having more layers of complexity than necessary
makes things both fragile and difficult to administer.
I would like to propose the following stack as an alternative to the
proposed md + LVM2 + filesystem + KVM + libvirt stack (while still
allowing KVM/libvirt where desired):
Operating system - Ubuntu?:
(https://www.ubuntu.com/)
- It seems most SAC members are comfortable with the Debian-based tools.
- ZFS and LXD are actively supported by Canonical, with commercial
support available for LXD (a rough install sketch follows this list).
- Base Debian does not include ZFS; it has to be built from source via
DKMS from the contrib repository.
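To give an idea of how little setup this needs on Ubuntu, here is a
minimal install sketch; the package names are the stock 16.04/18.04
ones (zfsutils-linux and lxd), and on newer releases LXD ships as a
snap instead:

    # ZFS userland plus the kernel module shipped by Canonical
    # (no DKMS build needed, unlike plain Debian).
    sudo apt install zfsutils-linux

    # LXD container manager and client.
    sudo apt install lxd
    sudo lxd init   # interactive first-time setup; can use a ZFS pool
                    # as the container backing store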
Use ZFS instead of md software RAID + LVM2 volume management + filesystem:
(http://zfsonlinux.org/)
(https://wiki.ubuntu.com/ZFS)
- ZFS is a logged CoW (Copy on Write) filesystem, which reduces
random I/O and eliminates most data-loss risks from unexpected
power loss.
- ZFS handles all levels of the storage stack, so it can ensure data
integrity remains intact.
- There are only a couple of commands to learn to handle almost all
storage-related tasks (sketched after this list):
- `zpool` handles everything from the raw disks up to the 'pool' level; and
- `zfs` handles everything related to 'datasets', which encompass
both logical volume and filesystem semantics.
- Mount points for the datasets are managed by ZFS, no fstab editing
is required.
- Each dataset can have its options tuned individually, inheriting
its parent's options by default.
- Redundancy is available both at the device and dataset levels.
- Snapshots are easy, instantaneous, and only consume the space
needed to store changes.
- Backup and restore functions are built in and very easy to use;
`zfs send` and `zfs receive` work locally, to/from a file, or over the
network (see the send/receive sketch after this list).
- Datasets can be cloned instantly, with each clone growing in size
only as it changes from the source; this is ideal for containers.
- Caching is handled at the block level, allowing all unchanged
portions of clones to be cached only once.
- Caching can be distributed across additional fast devices for both
read (L2ARC) and write (ZIL SLOG) caches.
- Unallocated system memory is used as the primary cache (ARC),
making hot data access nearly instant.
- Utilizing a fast SLOG allows write latency to pools on spinning
HDDs to be reduced to the write latency of the SLOG device.
- Read latency from a pool of spinning HDDs is only appreciable the
first time cold data is accessed; once cached, it is read from the
ARC (RAM) or L2ARC (fast storage).
- Virtual raw block devices called zvols can be created to act as
backing stores for swap, VMs, or other low-level block-device needs,
and can be cloned and snapshotted just like datasets (see the sketch
after this list).
- Quotas and ACLs can be set at the dataset level.
- Several container/virtualization management tools support using
ZFS's clones and snapshots to quickly create, copy, provision, and
backup containers/VMs.
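To make the `zpool`/`zfs` split above concrete, here is a minimal
sketch. The pool name, disk paths, and dataset names are placeholders,
and the layout is illustrative only, not a concrete proposal for
osgeo7:

    # Create a mirrored pool from two disks, then add SSD partitions
    # as a write log (SLOG) and a read cache (L2ARC).
    zpool create tank mirror /dev/sda /dev/sdb
    zpool add tank log /dev/nvme0n1p1
    zpool add tank cache /dev/nvme0n1p2

    # Datasets live inside the pool; ZFS mounts them itself, no fstab.
    zfs create tank/containers
    zfs create -o compression=lz4 -o quota=50G tank/containers/web
    zfs set mountpoint=/srv/web tank/containers/web

    # Health and usage, per pool and per dataset.
    zpool status tank
    zfs list -r tank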
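Snapshots, clones, and zvols use those same two commands; again, the
names below are just examples:

    # Instant snapshot that only consumes space as the dataset changes.
    zfs snapshot tank/containers/web@before-upgrade

    # Writable clone of that snapshot; grows only as it diverges.
    zfs clone tank/containers/web@before-upgrade tank/containers/web-test

    # Roll the dataset back if an upgrade goes wrong.
    zfs rollback tank/containers/web@before-upgrade

    # A 20G zvol usable as a raw block device (swap, a KVM disk, etc.).
    zfs create tank/vms
    zfs create -V 20G tank/vms/guest1-disk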
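And a rough sketch of backups with `zfs send`/`zfs receive`; the
'backup' pool and the 'backuphost' machine are hypothetical:

    # Full copy of a snapshot to another pool, locally or over ssh.
    zfs send tank/containers/web@before-upgrade | zfs receive backup/web

    # Incremental send of only the blocks changed between two snapshots.
    zfs snapshot tank/containers/web@daily-2018-02-16
    zfs send -i @before-upgrade tank/containers/web@daily-2018-02-16 \
        | ssh backuphost zfs receive backup/web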
Use LXC/LXD containers in place of most VMs:
(https://linuxcontainers.org/lxd/)
(https://help.ubuntu.com/lts/serverguide/lxd.html)
(https://www.ubuntu.com/containers/lxd)
- Administration is simpler.
- Containers don't require their own kernel or filesystem.
- Resource utilization is much lower.
- Because resources are not fragmented across separate guests, they
can be allocated more efficiently, and a single copy of duplicate data
can be shared across multiple containers.
- Networking can be passed through without adding another layer of
device drivers.
- Containers can nest with other containers or VMs; you can run
Docker inside an LXC container inside a VM if you really want to, as
well as running a couple of VMs inside a container if needed.
- Containers can be set up as privileged, with host resources exposed
where needed.
- Containers can be stopped, started, and migrated in much the same
way as VMs (a short workflow sketch follows this list).
- For scaling up, OpenStack works with containers as well as VMs.
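As a rough illustration of the day-to-day LXD workflow (assuming LXD
is pointed at a ZFS pool like the one sketched above; the container
names, image alias, and 'otherhost' remote are placeholders):

    # Use an existing ZFS dataset as the container storage backend.
    lxc storage create default zfs source=tank/containers

    # Launch, snapshot, and copy containers much like VMs.
    lxc launch ubuntu:16.04 web
    lxc snapshot web before-upgrade
    lxc copy web web-test

    # Stop and migrate to another LXD host (configured as a remote).
    lxc stop web
    lxc move web otherhost:web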
Please look this over and provide comments and concerns.
Take care,
~~~Chris~~~
On Fri, Feb 16, 2018 at 6:33 AM, Martin Spott <Martin.Spott at mgras.net> wrote:
> Alex M wrote:
>
>> The plan based on discussions is to manage KVM virtual machines, lvm
>> drives, with libvirt.
>
> Of course I'm not in a position to veto against this conclusion, anyhow I'd
> like to emphasize that I'd rather not support another virtualization setup
> having a boot loader inside the respective VM's. From my perspective this
> adds maintenance overhead without benefit.
>
> The last infrastructure transition was driven by people who've been
> abdicating from their responsibilities shortly after making far-reaching
> technical decisions, letting others suffer from the trouble they caused.
> Thus, whoever is supporting the above conclusion should be prepared to stand
> by their responsibilities for a couple of years fixing stuff *themselves*
> whenever anything goes wrong !!
>
> Cheers,
> Martin.
> --
> Unix _IS_ user friendly - it's just selective about who its friends are !
> --------------------------------------------------------------------------
> _______________________________________________
> Sac mailing list
> Sac at lists.osgeo.org
> https://lists.osgeo.org/mailman/listinfo/sac