The Departmental Private Cloud Infrastructure

Overview

  • We maintain an on-premises private cloud, based on the open-source OpenStack software.

  • Two basic usage models:

    • OpenStack Ready-to-Run Instances: a single instance, prepared and maintained by the departmental system administrators, running your chosen applications (possibly installed by you with a non-root local account) with a public IP address.

    • OpenStack Do-it-Yourself Instances: an OpenStack project with a private network and compute resources (RAM, CPU, disk) with which you can create your own instances, for which you are responsible.

    • more details below

  • Additional remarks:

    • Public (internet) access to applications on both kinds of instances goes via a proxy maintained by the systems group (more details below).
    • Do-it-Yourself instances are completely your responsibility: you must maintain and keep up to date the entire instance: the OS, all applications and all configuration.
    • On Ready-to-Run instances the OS and the system-wide installed software are configured and maintained by the departmental system administrators - you are only responsible for the configuration and installation of applications that you install yourself (using an unprivileged account).

Cloud Managers

  • each research group/section that wants to use the departmental private cloud infrastructure must assign at least one cloud manager

  • the cloud managers decide which resources are assigned to the projects and which network configuration is used

    • resources are limited by the physical machines that are used as compute nodes for the research group/section
  • because of current OpenStack limitations, the cloud managers cannot administer their projects directly themselves; instead, they consult with the helpdesk to do so

Typical Use

We have currently implemented two typical scenarios for using the departmental cloud.

In both scenarios, instances can have access to data volumes: either NetApp volumes (preferred) or OpenStack volumes (only if NetApp volumes are too difficult to use or overkill).
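
NetApp volumes are accessed using the SVM name and junction path that you receive (see below), typically over NFS. A minimal sketch of an /etc/fstab entry, assuming a hypothetical SVM name (svm-Z) and junction path (/vol_X) and assuming NFS access - use the actual values and options you receive from the systems group:

```
# Hypothetical example: mount a NetApp volume over NFS.
# svm-Z.cs.kuleuven.be and /vol_X are placeholders.
svm-Z.cs.kuleuven.be:/vol_X  /mnt/data  nfs  rw,hard  0  0
```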

Also in both scenarios, public access to (web) applications hosted on / provided by your virtual machine(s) typically goes via a departmental proxy (unless the protocol cannot be proxied).

  • DNS name of public proxy end point can be any of X.be, X.org, X.Y.eu, X.cs.kuleuven.be, X-Z.cs.kuleuven.be, ...
  • proxy configuration: https://X.be/ → your application on your VM
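
Conceptually, the proxy mapping works like the following reverse-proxy fragment. This is only a hypothetical nginx-style sketch to illustrate the idea - the actual departmental proxy configuration is maintained by the systems group and may use different software, names and port numbers:

```nginx
# Hypothetical sketch: requests for https://X.be/ are forwarded
# to the application port (here: 8080) on your VM.
server {
    server_name X.be;
    location / {
        proxy_pass http://X-Z-hera.cs.kuleuven.be:8080;
    }
}
```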

OpenStack Ready-to-Run/Managed Instances

  • a single virtual machine, set up and managed by the systems group, with a public IP address
    • the departmental systems group takes care of management of OS, networks and basic software
    • you install and/or configure your application(s) as needed (either directly on the VM itself or in/via containers that run on the VM)
    • More details:
      • 1 (or more) local accounts are created with which you can install and set up your application(s) - use SSH keys to login
      • DNS name of the virtual machine: X-Z-hera.cs.kuleuven.be with:
        • X = name of machine/project
        • Z = name of research group/section (dnet, dtai, ...)
  • what is needed to get started:
    • the names of the virtual machine and the local account(s) - this is the X in the details above - usually the accounts have the same (or at least similar) names
    • at least 1 public SSH key to login on the local account(s)
    • if needed: the name of the public proxy end point and the port(s) on which your application(s) is/are accessible for the proxy service
    • if needed: the name and (initial) size of the NetApp volume(s)
    • a simple mail to helpdesk with the above information is enough to get started - details will be worked out as needed
  • what do you get:
    • a virtual machine where you can login with the public SSH key that you provided
    • a working proxy end point with the requested URLs that refers to (the specified port of) your application
    • SVM name(s) and junction path(s) to access your NetApp volume(s)
  • what do you need to do yourself:
    • add additional SSH keys if and how needed
    • ask helpdesk to install software that can be installed system-wide
    • install software yourself that cannot be installed system-wide
    • see our software catalogue for more details
  • ssh access to the virtual machine using its public DNS name and IP address:
    • (usually) only from within departmental networks or via departmental OpenVPN or using departmental ssh server as jump host:
    • ssh -p 2222 account@X-Z-hera.cs.kuleuven.be
    • ssh -p 2222 -J login@ssh.cs.kuleuven.be account@X-Z-hera.cs.kuleuven.be
    Most of these virtual machines are in a firewall zone that needs the special 2222 ssh port - just leave out the -p option if yours is not.
    Access to the virtual machine and to the jump host preferably using SSH Certs
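
The jump-host setup above can be made persistent in your SSH client configuration, so you do not have to retype the -p and -J options. A minimal sketch, assuming placeholder names (vm-X, account, login, X-Z-hera) that you replace with your own:

```shell
# Append a jump-host entry to ~/.ssh/config (all names are placeholders).
mkdir -p "$HOME/.ssh"
cat >> "$HOME/.ssh/config" <<'EOF'
Host vm-X
    HostName X-Z-hera.cs.kuleuven.be
    Port 2222
    User account
    ProxyJump login@ssh.cs.kuleuven.be
EOF
```

With this in place, `ssh vm-X` is equivalent to `ssh -p 2222 -J login@ssh.cs.kuleuven.be account@X-Z-hera.cs.kuleuven.be`.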

OpenStack Do-it-Yourself Instances/Projects

  • a private network and resources with which you can create your own virtual machines / instances
    • management of instances, networks and resources is completely your responsibility
    • More details:
      • login and create instances, volumes, ...
        • create and use application credentials for command line use
          • all student computer lab machines and selected departmental machines have the OpenStack CLI commands installed - you can use the CLI commands from elsewhere by logging in on those machines via SSH
          • you can install the OpenStack CLI commands on your own machine and use them from there, but only when your machine has access to the Horizon-URL, either directly (from within departmental and/or KU Leuven networks) or via (departmental and/or KU Leuven) VPN
        • each project has:
          • a name: prj.Z.Y with:
            • Y = (real-world) name of project
            • Z = name of research group/section (dnet, dtai, ...)
          • limited CPU and RAM resources
          • a private network
          • access to and from the outside world to (the virtual machines in) your project is to be decided:
            • several external networks are provided in the cloud, largely corresponding to the departmental firewall zones
            • several network configurations are possible: private (virtual) router, floating ip, proxy, port redirection, ...
        • pre-defined (OS-)images and virtual machine configurations are available - new ones can be added on request
          • we strive as much as possible for uninterrupted operation and therefore might limit the virtual machine configurations to maximise flexibility (e.g. to allow live migration)
      • if needed: (static) DNS name of the instances: Y.Z.hera.cs.kuleuven.be or X.Y.Z.hera.cs.kuleuven.be with:
        • X = name of instance
        • Y = name of (OpenStack) project
        • Z = name of research group/section (dnet, dtai, ...)
        access still only from within departmental networks or via departmental OpenVPN or using departmental ssh server as jump host - see below in the 'ssh access to instances' item but replace the 172.2... IP address with the name of the instance.
  • what is needed to get started:
    • the name of the project - this is the Y in the details above
    • the resource-quotas for the project (vCPUs, vRAM, vDisk, ...)
    • the OpenStack router of the project - see below
    • the members of the project
    • if needed: the name of the public proxy end point and the port(s) on which your application is accessible for the proxy service
    • if needed: the name and (initial) size of the NetApp volume(s)
    • DistriNet has its own procedure and documentation - for others a simple mail to helpdesk with the above information is enough to get started - details will subsequently be worked out as needed
  • what do you get:
    • OpenStack account(s) for the OpenStack Horizon web interface (using your KU Leuven credentials)
    • resources to create OpenStack instances and volumes
    • OpenStack network with private IP address range
    • SVM name(s) and junction path(s) to access your NetApp volume(s)
  • what do you need to do yourself:
    • create OpenStack instances and volumes and install software in them
    • administer those instances and volumes to keep them up to date and safe
  • ssh access to instances using their private IP address:
    • no public IP address - therefore direct access only from within departmental networks or via departmental OpenVPN:
      • ssh ubuntu@172.2...
    • access from elsewhere using departmental ssh server as jump host:
      • members of department: ssh -J login@ssh.cs.kuleuven.be ubuntu@172.2...
      • students of department: ssh -J r0123456@st.cs.kuleuven.be ubuntu@172.2...
      • access to jump host preferably using SSH Certs
  • access to the Horizon web interface using your KU Leuven credentials
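
For command-line use, the application credentials mentioned above are typically stored in a clouds.yaml file. The following is only a sketch with placeholder values - create a real application credential via Horizon (under Identity → Application Credentials) and use the auth URL of the departmental cloud:

```yaml
# Hypothetical ~/.config/openstack/clouds.yaml - replace <horizon-host>,
# <id> and <secret> with the values of your own application credential.
clouds:
  mycloud:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://<horizon-host>:5000/v3
      application_credential_id: "<id>"
      application_credential_secret: "<secret>"
```

With this in place, `openstack --os-cloud mycloud server list` lists the instances in your project.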

OpenStack Routers

OpenStack projects use a private network, so no instances of other projects will be on the same network/subnet as yours. Moreover, by default, access to your network from outside (be it other OpenStack networks, the departmental networks or outside networks) is blocked: you have to open such access by defining and using security groups.
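
Opening such access can be done via Horizon or via the OpenStack CLI. A sketch with hypothetical names (the group name, instance name and remote IP range are examples - choose your own):

```shell
# Create a security group, allow inbound SSH (TCP port 22) from a
# chosen range, and attach the group to an instance.
openstack security group create my-ssh
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-ip 10.0.0.0/8 my-ssh
openstack server add security group my-instance my-ssh
```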

OpenStack routers route network traffic in several directions:

  • OpenStack routers are used to connect OpenStack instances to the outside world/networks. They do so using NAT such that the internal private OpenStack IP addresses are not visible to the outside world.
  • But routers also route network traffic between the private networks that are attached to them. That traffic might be blocked or allowed by the security groups of the OpenStack projects/instances.
  • But routers do not route traffic between themselves. It is not possible to use private OpenStack IP addresses to connect to instances that are behind a different OpenStack router than yours. If you want to connect to such instances behind other routers, you have to use external / floating IP addresses.

Because this provides an extra layer of protection, where configuration mistakes cannot be exploited easily, there are several OpenStack routers. Depending on your needs your project/network will be connected to the right router:

    Name       Router         External NAT IP   Internal IPs
    Student    rtr.stud994    192.31.23.130     172.23.0.0/16
    DistriNet  rtr.dnet994    192.31.23.129     172.22.0.0/16
    Cuckoo     rtr.cuckoo991  192.31.23.5       172.20.51.0/24
    CtF        rtr.ctf1092    193.190.168.170   172.23.4.0/25
    DeptCW     rtr.cssys17    134.58.47.170     172.20.0.0/18
    DeptCW     rtr.cssys994   192.31.23.131     172.20.64.0/18
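
To reach an instance behind a different router than yours, a floating IP from one of the external networks can be attached to it. A sketch with hypothetical names - which external network is available depends on the router/zone of the project:

```shell
# Allocate a floating IP from a (hypothetical) external network and
# attach it to an instance; names depend on your project and zone.
openstack floating ip create <external-network>
openstack server add floating ip my-instance <allocated-ip>
```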

Future Work, not yet (production) ready:

For the future we are thinking of providing separate DNS (sub)domains under (complete) control of the project-admins:

  • DNS subdomain: Y.Z.hera.cs.kuleuven.be
  • you administer your subdomain any which way you need to
  • departmental name servers are configured as slave/secondary name servers for your subdomain

Cleaning up old/legacy/test setups:

  • 172.20.101.0/24 - netapp-kronos - can be removed once no longer in use - has been replaced by cssys.util-994