Aim:
Scaling the cluster! (New Step 5)
Attending:
CIS(theta) 2010-2011: DavidG, HerbertK, JoshG, RyanH
Reading:
NA
I'm adding a new Step 5: Scaling the Cluster. This step, once we figured it out, was actually very simple but very time consuming! It boils down to replicating Step 4 on every single node. We have 25 nodes with 50 cores, so this step took several meetings to debug everything (dang stupid mistakes)! As far as compilers are concerned, only the Master Node needs to be configured for mpif77, mpif90, mpicc and mpic++ to work properly, so we can compile once and copy many. Step 5, listed below, is mainly concerned with getting mpirun to work correctly on every node.
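For example, the "compile once, copy many" idea can be scripted. We actually pushed files around with our ftp server, but a loop over the node IPs with scp (just a sketch, not what we literally typed; flops.f is the test program from Step 4 below, and the 10.5.129.x IPs and shadowfax account are described further down) would do the same job:
mpif77 -o flops flops.f
for n in $(seq 1 24); do scp flops shadowfax@10.5.129.$n:/home/shadowfax/flops; done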
This was our last meeting of the year, so this is our last update. Step 6 will have to wait for next year's CIS(theta) 2011-2012 team (GeorgeA, GrahamS, KennyK, LukeE) to actually produce something with this cluster. Congrats, CIS(theta) 2010-2011 (DavidG, HerbertK, JoshG, RyanH), you are the first team to actually get openMPI to work natively on our Ubuntu partitions! Thanx to CIS(theta) 2009-2010 (DevinB, JeremyA, SteveB) for getting us a head start on this project last year. We stand upon the shoulders of giants! Good job guys!
InstantCluster Step 1: Infrastructure
Make sure your cores have enough ventilation. The room has to have powerful air conditioning too. These two factors may seem trivial but will become crucial when running the entire cluster for extended periods of time! Also, you need to have enough electrical power, preferably with the cabling out of the way, to run all cores simultaneously. Don't forget to do the same with all your Ethernet cabling. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was taken care of for us already!
InstantCluster Step 2: Hardware
You need up-to-date Ethernet switches, Ethernet cards and cores, as well as plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already set up for us. Also, we have 64bit dual-core AMD Athlons and our HP boxes have 750 MB of RAM. I'd rather have 1 or 2 GB of RAM, but that will have to wait for an upgrade!
InstantCluster Step 3: Firmware
We wasted way too much time last year trying out all kinds of Linux distros looking for a good 64bit base for our cluster. This year we spent way too much time testing different liveCD distros. Recently, we downgraded from 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but it was proving to be a pain to maintain. Just to name one problem, the JRE and Flash plugins were hard to install and update in Firefox. Last year we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. 32bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster! We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu!
Current Ubuntu version:
http://shadowfaxrant.blogspot.com/2011/05/2-so-many-hard-drives-so-little-time.html
InstantCluster Step 4: Software Stack
On top of Ubuntu we need to add openSSH, public-key authentication and openMPI. In Step 6 we can discuss an application to scatter/gather over the cluster, whether it be graphical (fractals, povray, blender, openGL, animations) or number crunching (C++ or python app for Mersenne Primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below. First, we installed openSSH-server from http://packages.ubuntu.com using the proxy server, then:
- If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files.
- From your home directory, make .ssh secure by entering:
chmod 700 .ssh
- Next, make .ssh your working directory by entering:
cd .ssh
- To list/view the contents of the directory, enter:
ls -a [we used ls -l]
- To generate your public and private keys, enter:
ssh-keygen -t rsa
The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the passphrase you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
- To compare with the previous output of ls and see what new files have been created, enter:
ls -a [we used ls -l]
You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
- To make your public key the only thing needed for you to ssh to a different machine, enter:
cat id_rsa.pub >> authorized_keys
[The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files to our ftp server. On the ftp server we just ran cat tempNN >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes, which we could then download to each node's .ssh dir. See the sketch after this list.]
- [optional] To make it so that only you can read or write the file containing your private key, enter:
chmod 600 id_rsa
- [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
chmod 600 authorized_keys
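To make the key swapping above concrete, here's roughly what happens on the collection point (we used our ftp server; this is a sketch, not a transcript). Each node uploads its id_rsa.pub renamed to tempNN, then:
cat temp* > authorized_keys
chmod 600 authorized_keys
and that one master authorized_keys file gets downloaded into every node's .ssh directory.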
We then installed openMPI (see dependencies below) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting! We followed openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html
These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev. However, the way our firewall is set up at school, I can never update my apt-get sources properly. So, I used http://packages.ubuntu.com to look up these packages and got the following dependencies!
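In practice that meant downloading the .deb files for openmpi-bin, openmpi-doc, libopenmpi-dev and every dependency listed below from packages.ubuntu.com on a machine that could get out, then installing them by hand, something like this (a sketch, assuming all the .debs sit in one directory):
sudo dpkg -i *.deb
(run it a second time if dpkg complains about dependency ordering)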
If you think that's bad, look at all the dependencies I ran into to install VLC on Ubuntu Maverick Meerkat 10.10, which was required to view *.mp4 files and to install HandBrake on the Teacher Station!
We finally got all this working (sans vlc which is only on my Teacher station). Then we used the following FORTRAN code to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!
We compiled flops.f on the Master Node (after installing gfortran too):
mpif77 -o flops flops.f
and tested openmpi and got around 900 MFLOPS using 2 cores:
mpirun -np 2 flops
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 nodes with 2 cores each for example):
mpirun -np 4 --hostfile machines flops
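For the record, the "machines" hostfile is nothing fancy: one line per node. Since our boxes are dual-core, you can also tell mpirun how many slots each node has. A two-node sketch using IPs from our range:
10.5.129.1 slots=2
10.5.129.2 slots=2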
InstantCluster Step 5: Scale the Cluster!
The focus here is to get mpirun working on every node. I don't know if you have to set up every node exactly the same way, but everything finally worked when we did! (I.e., I didn't think we needed the machines file on every box.) Here's what we did on each node (see the quick check after the list below):
as root on every node:
install openmpi
install openssh
adduser shadowfax
as shadowfax on Master node:
create machines file and copy /home/shadowfax/machines to ftp site
as shadowfax on every node:
create /home/shadowfax/.ssh
run ssh-keygen -t rsa
cat id_rsa.pub > tempXX (XX = 1-25 = node number)
copy all tempXX files to ftp site and cat tempXX >> authorized_keys
copy from ftp site to /home/shadowfax/flops
chmod 755 flops
copy from ftp site to /home/shadowfax/machines
copy from ftp site to /home/shadowfax/.ssh/authorized_keys
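Once every node is set up identically, the sanity check is simple (a sketch using our paths and core counts):
ssh 10.5.129.2 hostname
(should log in and print the hostname with no password prompt)
mpirun -np 50 --hostfile machines flops
(25 nodes x 2 cores = 50 processes, run from /home/shadowfax on the Master node)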
Note: uniformity is the key! This approach is very convenient since, with just a little bit of tweaking of the "machines" and "authorized_keys" files as well as the command line, any box can be the Master node and you can use as many or as few of the nodes as you want!
Every node has the same "machines" text file in /home/shadowfax listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/shadowfax. Every node has the same "authorized_keys" text file with all 25 keys in /home/shadowfax/.ssh
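For instance, to use just the first two boxes from whichever node you want to treat as the Master (a sketch; "machines4" is a hypothetical trimmed hostfile):
head -n 2 machines > machines4
mpirun -np 4 --hostfile machines4 flops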
InstantCluster Step 6: Running Apps!
This step will have to wait for next year. Thanx to the hard work of the last 2 years' CIS(theta) teams, we finally know how to set up an openMPI cluster on our Linux partitions: no more 64bit, no more liveCDs, no more hunting for a new distro! Next year's team can redo Steps 1-5 to learn how all this works with a new version of Ubuntu (bye bye Lucid Lynx and Maverick Meerkat, hello Natty Narwhal or Oneiric Ocelot). Then we can finally play with fractals, povray, tachyon, blender, animations and number crunching!
Research:
openSSH and openMPI
MPI on Ubuntu
http://ubuntuforums.org/showthread.php?t=1327253
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
Thomas Jefferson High courses
http://academics.tjhsst.edu/compsci/parallel/
Thomas Jefferson High paper
http://www.tjhsst.edu/~rlatimer/techlab07/BWardPaperQ3-07.pdf
Thomas Jefferson High ftp
http://www.tjhsst.edu/~rlatimer/techlab07/
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
MPI4py
http://blog.mikael.johanssons.org/archive/2008/05/parallell-and-cluster-mpi4py/
Parallel Python
IPython
Large Integer number crunching Mersenne Primes
http://www.hoise.com/primeur/03/articles/weekly/AE-PR-01-04-37.html
Large Integer number crunching Beal Conjecture
http://www.bealconjecture.com/
MPIPOV
http://comp.uark.edu/~ewe/misc/povray.html
POVRAY
http://himiko.dnsalias.net/wordpress/2010/03/29/persistence-of-vision-ray-tracer-povray/
MPI and blender
http://www.blender.org/forum/viewtopic.php?t=10244&view=next&sid=3003d90233a27b81c5093a374c2b0e31
More MPI and blender
http://wiki.vislab.usyd.edu.au/moinwiki/BlenderMods/
Well, that's all for now, enjoy!