DoC Computing Support Group


Private Cloud Working Group: 3rd April 2012 meeting

A working group of academics has been set up; it met for the first time on 3rd April 2012. Things discussed:

  • PJM/Susan: background (spend money now, define services later); acknowledged this is an unusual approach. PJM added the idea that a group can have a VM per project per year if needed, so they can build new apps on the latest supported OS while keeping the ability to run old versions on the older OS. This lets people try old code on new OS releases without "big bang" server upgrade problems, and old VMs can eventually wither away. The aim is to save RAs (and CSG?) sysadmin time.
  • PJM: start with the concept that every student gets a VM as they walk in through the door, keeps it while at College, and has root access on it [need to fix/avoid the NFS problem]. Users should also be able to create more VMs programmatically, both short-term and long-term ones (a hedged sketch of programmatic VM creation follows this list).
  • PJM: also, are we all agreed that it's got to be a reliable production system? No one disagreed (but see later discussions).
  • JAMM: use cases of interest to her: projects into cloud technologies; pervasive computing exercises could be made more flexible [not sure how]; some of her research involves streaming data from sensors and needs high-capacity filestores.
  • PRP: EPSRC calls the "every research grant puts in for a small cluster" pattern "vanity clusters". EPSRC is favouring shared resources (Dept, College, federated): it will fund at most the first £10K of equipment, and anything beyond that must have matching funds from the Dept! This favours (for example) shared services, grids, clouds and HPC.
  • PRP added: VMs can really speed up provisioning of research project kit. Instead of purchasing kit, waiting for it to arrive, installing and configuring it, using and maintaining it, then (after the project) deciding what to do with it, a group can create (say) 16 short-term VMs bound to suitable hardware very quickly, run quick experiments and release the VMs' resources. If spare hardware capacity is in hand, of course!
  • PRP agreed with Julie that research into cloud and distributed systems performance could be improved if we had a cloud which we could monitor and tweak.
  • JD: two important aspects of the cloud here: 1. easily provisioned VMs; 2. amortization of all resources over multiple projects. The latter requires that researchers don't need all of their "own" resources all of the time - otherwise there is nothing spare!
  • PJM/Susan: the matching-funds model allows the Dept to demand up to 50% of these shared resources [on average over time, perhaps front-loaded so "owners" get the majority of time up front and release nearly all resources later for general use].
  • CCADAR: researchers will sometimes need exclusive access to all "their" cluster VMs on all their hardware for experiments - repeatability is especially important. => we need the ability to pin VMs onto particular classes of node.
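
No toolkit was chosen at the meeting, but as a purely illustrative sketch of what "create a VM programmatically, pinned to a chosen host or class of node" could look like, here is a minimal example using the libvirt Python bindings. The host URI, VM name, resource sizes and disk path are all invented assumptions, and a real cloud stack would normally do the host selection in its scheduler rather than by hard-coding a URI.

    import libvirt

    # Everything below is invented for illustration: the host URI, the VM
    # name, the resource sizes and the disk path are placeholder assumptions.
    HOST_URI = "qemu+ssh://cloud-node-01.doc.ic.ac.uk/system"

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>project-scratch-vm</name>
      <memory unit='GiB'>4</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/project-scratch-vm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    def create_vm_on_host(uri, domain_xml):
        """Connect to one specific hypervisor and start a transient VM there.

        Choosing which URI to connect to is the crude way to "pin" the VM to
        a particular node (or class of node); a cloud stack would normally
        make this host-selection decision in its scheduler.
        """
        conn = libvirt.open(uri)
        try:
            return conn.createXML(domain_xml, 0)  # transient, short-term VM
        finally:
            conn.close()

    if __name__ == "__main__":
        dom = create_vm_on_host(HOST_URI, DOMAIN_XML)
        print("started VM:", dom.name())
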

  • PRP: yes, and sometimes experiments need to happen directly on the bare metal - but only a small minority of them!
  • JAMM: performance monitoring very important.
  • WJK: yes, including power monitoring of the physical VM hosts, a la picards. Very useful.
  • GCASALE: agreed, and added a subtle point that the achievable monitoring frequency is very different between "cheap" and "expensive" power monitoring. LDK is discussing this with him.
  • SUSAN: Maja had mentioned that she makes very heavy use of Matlab on Windows clusters, buying extra parallel licenses etc. PJM: why not use the standard College license? DCW: we believe the extra modules and parallel licenses are not included in the College Matlab license, which is why the ICT HPC kit doesn't support Matlab either!
  • TORA: the Lab is very interested in more continuous autotesting and needs a better sandbox - something like a short-term VM to run student code in! Also very interested in scalable storage.
  • JD/SUSAN discussed: where are the other Computing Depts with clouds, at any level (Dept, College, federated)? The answer seems to be: none known in production.
  • DWM added that LESC had done lots of "cloud v1" (i.e. grid) related work, and mentioned the similarities between grids, private clouds, batch processing and HPC.
  • PRP said that we should make more use of ICT's HPC, which is a big resource. Susan said some already use ICT extensively (e.g. PHJK). PJM added that PHJK has found ICT HPC support very helpful, has invested money in more HPC kit, and believes we should make more use of College HPC. SUSAN added that she and Khanwal have found the HPC team a bit sniffy about running Java code on HPC kit.
  • DCW said: yes, real programmers in HPC :-), but added that lots of money is still going in, so let's use it. DCW added that HPC doesn't even let you access College home dirs because they're "not fast enough" (source: Simon Burbidge, ICT), and mentioned that ICT is also upgrading to VMware ESX 5, which "supports cloud" (but DCW doesn't know what that means).
  • Regarding this, PJM asked: does everyone want DoC home dirs and research volumes accessible from VMs? Everyone agreed, but several people pointed out that the existing fileservers can already be saturated by Condor, so the fileservers will need to scale further to cope.
  • DCW asked: what about Amazon S3, the simple distributed (key, value) storage system - is it important to DoC? Some people said it "might be useful", but no one had a solid use case (see the hedged sketch after this list).
  • WJK added that he'd love to do storage speed experiments using storage of different speeds, e.g. flash and different RAID levels.
  • TORA added that a large scalable block storage system would be very useful.
  • DWM said there seems to be a need for scalable storage at some level as part of the cloud; there are a variety of technologies - open source and commercial - to look at. Amazingly, he didn't even say "Ceph" :-)
  • PJM said that commercial filers such as NetApp/EMC should be looked into. PRP added that cloud storage is NetApp's bread and butter and that their support and scalability are really good. Susan said DoC should consider these, but has a preference for open source if possible. DCW: CSG need to investigate NetApp with PRP/Cambridge/ICT help.
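
For concreteness on the S3 point above: S3 is essentially a (key, value) object store accessed over HTTP. Below is a minimal, hedged sketch using the boto3 AWS SDK for Python; the bucket and key names are invented, and it assumes AWS credentials are already configured in the environment.

    import boto3

    # Bucket and key names below are invented purely for illustration.
    s3 = boto3.client("s3")

    # PUT: store a value under a key.
    s3.put_object(Bucket="doc-example-bucket",
                  Key="results/run-0001.csv",
                  Body=b"node,runtime_s\n1,42.0\n")

    # GET: fetch the value back by key.
    obj = s3.get_object(Bucket="doc-example-bucket", Key="results/run-0001.csv")
    print(obj["Body"].read().decode())
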

  • SUSAN reported that DR had initially said that CSG do everything his group needs, so why would he need a cloud? However, when she asked him whether his group could use more scalable storage, his eyes lit up!
  • DWM: so do we conclude that scalable storage is very important? General, if vague, agreement.
  • DCW's summary: cloud storage needs to hold VM images; it's not clear whether the same cloud storage subsystem should also support scalable filesystems, or whether the fileservers stay separate (but need to scale further). No estimate of size! S3 is probably not important (an optional extra).
  • GCASALE asked: what type of cloud - private? DCW/PJM: yes. What about cloudbursting, he asked? DWM: what's that? GCASALE: the ability to upload VMs to Amazon after development (or when short-term extra resources are needed), maybe downloading VMs from Amazon too - general interoperability with Amazon. PJM: useful if possible.
  • COSTA: what about network bandwidth - 10Gb links? We may also need bandwidth reservation in the switch fabric. DWM: we are talking with ICT networking about 10Gb; they can also discuss bandwidth reservation.
  • Natasha's PhD student Vuk Janjic: their group are very interested in virtualizing algorithms while still using FPGAs and GPUs, and again more scalable storage is needed here.
  • WL agreed, saying some VM hosts definitely need to have GPUs and FPGAs (he can provide details and costs). DCW added that Amazon EC2 has VMs with access to GPUs, FPGAs etc. in its pricing model.
  • WL added that he'd be very interested in "getting under the hood", tweaking and monitoring how various aspects of the cloud operate. PJM said this may be contrary to the production-cloud goal, but perhaps a "sandpit cloud" could fork off the main cloud on occasion, grab some hardware, etc. WJK agreed.
  • PJM talked about a cost accounting model enforcing 50% maximum usage; it sounded very complicated (DCW: god knows how that would even be implemented! Perhaps by logging use for post-analysis - see the sketch after this list). WJK wondered whether anything that heavy was needed.
  • JD asked: would we give access to people outside of DoC? DCW: no - our resources, our users. JD: the power of clouds (and the interesting research topics) comes when you get to federation.
  • PJM: we might be open to sharing with ICT, maybe for specific research projects later?
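
Nothing concrete was decided about how a usage cap would be enforced. As one very simple reading of DCW's "log use for post-analysis" suggestion, the sketch below totals logged VM-hours per group over an accounting period and flags any group above a configurable fraction of the total. The record format, group names and numbers are all invented for illustration.

    from collections import defaultdict

    # Record format, group names and numbers are invented for illustration.
    # Each record is (group, vm_hours) accumulated over some accounting period.
    usage_log = [
        ("dept-general", 4200.0),
        ("group-A", 3100.0),
        ("group-B", 1700.0),
    ]

    def groups_over_cap(records, cap_fraction=0.5):
        """Return {group: share} for any group that used more than
        cap_fraction of the total logged VM-hours in the period."""
        totals = defaultdict(float)
        for group, vm_hours in records:
            totals[group] += vm_hours
        grand_total = sum(totals.values())
        return {g: h / grand_total for g, h in totals.items()
                if h / grand_total > cap_fraction}

    if __name__ == "__main__":
        print(groups_over_cap(usage_log))  # empty dict: nobody broke the cap
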

Quick round-up of other comments at the end - useful services/technologies to check:

  • CouchDB useful (JD)
  • OpenNebula (GCASALE) - DCW: looks very interesting; open-source data-centre virtualization, very cute, supposed to be able to "integrate your existing storage, hosts etc" (see the hedged sketch after this round-up).

  • MooseFS (DCW added) - DCW: a cluster file system that OpenNebula can use for shared storage. LDK: it uses a FUSE layer.
  • Eucalyptus (JD).
  • OpenStack (PJM)

  • Hadoop/Mapreduce (COSTA)
  • DCW asked about the size of storage needed; the helpful answer was "TBs to PBs".
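
As a taste of what scripting against one of these toolkits (OpenNebula being the one raised above) might look like, here is a hedged sketch that writes an OpenNebula-style VM template to a file and hands it to the `onevm create` command-line tool. The template contents, image and network names are invented, and CLI details can differ between OpenNebula releases, so treat this purely as illustration.

    import subprocess
    import tempfile

    # An OpenNebula-style VM template; the image and network names are
    # invented, and exact attributes can differ between OpenNebula releases.
    VM_TEMPLATE = """
    NAME   = "scratch-vm"
    CPU    = 1
    MEMORY = 2048
    DISK   = [ IMAGE = "base-ubuntu-image" ]
    NIC    = [ NETWORK = "doc-private-net" ]
    """

    def create_scratch_vm(template_text):
        """Write the template to a temp file and hand it to `onevm create`."""
        with tempfile.NamedTemporaryFile("w", suffix=".tmpl", delete=False) as f:
            f.write(template_text)
            path = f.name
        # On success `onevm create` prints the ID of the new VM.
        out = subprocess.check_output(["onevm", "create", path])
        return out.decode().strip()

    if __name__ == "__main__":
        print(create_scratch_vm(VM_TEMPLATE))
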

PJM's summary of meeting

I think that three basic conclusions should be drawn from yesterday's discussion regarding the specification of hardware:

  1. Our concept of buying compute nodes with large numbers of cores and large memory, with disc storage for virtual machine images, and with 10G networking will support the main objective of providing a DoC Cloud that offers virtual computers to DoC students and staff for teaching and research purposes. We need to refine exactly which machines and configurations will be purchased (a back-of-envelope sizing sketch follows this list).
  2. We should gather input from research groups on the cost of GPU, FPGA, and hardware monitoring options, and see whether these can be incorporated at this stage.
  3. We do need to look at fast network storage options.
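
As an aid to refining point 1, here is a back-of-envelope capacity sketch. Every figure in it (node count, cores, RAM, overcommit ratio, "typical" VM size) is a placeholder assumption for illustration, not a proposed purchase.

    # Back-of-envelope capacity estimate.  All figures are placeholder
    # assumptions for illustration, not proposed purchases.
    nodes           = 8      # compute nodes in a first purchase
    cores_per_node  = 32     # physical cores per node
    ram_per_node_gb = 256    # GiB of RAM per node
    cpu_overcommit  = 2.0    # virtual CPUs scheduled per physical core
    vm_vcpus        = 2      # a "typical" student/research VM
    vm_ram_gb       = 4

    vcpu_capacity = nodes * cores_per_node * cpu_overcommit
    ram_capacity  = nodes * ram_per_node_gb

    vms_by_cpu = int(vcpu_capacity // vm_vcpus)
    vms_by_ram = int(ram_capacity // vm_ram_gb)

    print("CPU-limited:", vms_by_cpu, "VMs; RAM-limited:", vms_by_ram, "VMs")
    print("=> roughly", min(vms_by_cpu, vms_by_ram), "concurrent VMs of this size")
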

Next meeting

Next Working Group meeting: 25th April 1pm, level 4 common room

 
 
