Aim:
If you can't beat them...
Attending:
CIS(theta) 2010-2011: DavidG, HerbertK, JoshG, RyanH
Reading:
NA
Research:
openSSH and openMPI
http://packages.ubuntu.com
public keys
http://cs.calvin.edu/curriculum/cs/374/MPI/ssh.html
MPI intro, nice!
http://www.ualberta.ca/CNS/RESEARCH/Courses/2001/PPandV/Intro_to_MPI.pdf
MPI on Ubuntu
http://ubuntuforums.org/showthread.php?t=1327253
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
Thomas Jefferson High courses
http://academics.tjhsst.edu/compsci/parallel/
Thomas Jefferson High paper
http://www.tjhsst.edu/~rlatimer/techlab07/BWardPaperQ3-07.pdf
Thomas Jefferson High ftp
http://www.tjhsst.edu/~rlatimer/techlab07/
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
MPI4py
http://blog.mikael.johanssons.org/archive/2008/05/parallell-and-cluster-mpi4py/
Parallel Python
IPython
Large Integer number crunching Mersenne Primes
http://www.hoise.com/primeur/03/articles/weekly/AE-PR-01-04-37.html
Large Integer number crunching Beal Conjecture
http://www.bealconjecture.com/
MPIPOV
http://comp.uark.edu/~ewe/misc/povray.html
POVRAY
http://himiko.dnsalias.net/wordpress/2010/03/29/persistence-of-vision-ray-tracer-povray/
MPI and blender
http://www.blender.org/forum/viewtopic.php?t=10244&view=next&sid=3003d90233a27b81c5093a374c2b0e31
More MPI and blender
http://wiki.vislab.usyd.edu.au/moinwiki/BlenderMods/
If you can't beat them, join them! In other words, the consensus from our last meeting is that we are done with liveCD Linux distro solutions for HPC. What's to stop us from installing openMPI ourselves? So, the plan is to install openSSH and openMPI directly on our Linux partitions and set up a more permanent cluster.
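Here's a rough sketch of what that install might look like on each box. The package names are from memory, so double check them against packages.ubuntu.com before we commit, and note that MPITB is not in the repos at all; it has to be grabbed and built by hand:

sudo apt-get update
sudo apt-get install openssh-server               # sshd, so the other nodes can log in
sudo apt-get install openmpi-bin libopenmpi-dev   # mpirun, mpicc and the openMPI libraries
sudo apt-get install octave3.2                    # or just "octave" depending on the release; MPITB is a separate manual install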
Fret not, all is not lost: we can learn from our experience this year. Cluster By Night should serve as a proof of concept. Namely, we CAN run openMPI over openSSH! pelicanHPC worked fine with a crossover Ethernet cable, but I am done with PXE boot! However, we can emulate all the number-crunching apps that pelicanHPC ships by running Octave with MPITB on top of openMPI. Let's make sure not to repeat last year's mistake of trying out a million distros on our Linux partitions; let's stick with our 32bit Ubuntu 10.10 Desktop. First, let's take stock of where we've been.
1995-2001: Our First Cluster
Ethernet (10Mbps),
Gateway PCs, 128MB RAM,
mixed Intel Pentium II 400MHz and Intel Pentium III 800MHz

2001-2007: Our Second Cluster
Fast Ethernet (100Mbps),
Dell PCs, 1GB RAM,
Intel Pentium IV 2.6GHz

2007-now: Our Current Cluster
gigE (1000Mbps),
HP PCs, 750MB RAM,
64bit dual-core AMD Athlon 2GHz
CIS(theta) (2007-2008) used openMOSIX and C++ to make fractal graphs on the complex plane. We took our cue from these blog posts:
http://nullprogram.com/blog/2007/09/02/
http://nullprogram.com/blog/2007/09/17/
http://nullprogram.com/blog/2007/10/01/
http://nullprogram.com/projects/mandel/
CIS(theta) (2008-2009) used public key authenticated ssh and a pile of bash scripts to scatter/gather povray jobs. In fact, this blog started out as a site to record our results and document how we did it!
http://shadowfaxrant.blogspot.com/2009/05/poor-mans-cluster-step-0.html
http://shadowfaxrant.blogspot.com/2009/06/poor-mans-cluster-step-1.html
http://shadowfaxrant.blogspot.com/2009/07/poor-mans-cluster-step-2.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-3.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-4.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-5.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-6.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-7.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-8.html
http://shadowfaxrant.blogspot.com/2009/08/poor-mans-cluster-step-9.html
CIS(theta) (2009-2010) got bogged down finding a stable 64bit Linux distro for the Linux partitions of our dual-boot PCs. We used 32bit and 64bit Fedora 11 and 12. We tried CentOS, Scientific Linux, OSCAR and Rocks! We got a Torque server working for openMPI and helloMPI.c, but we didn't get much farther than that, I'm afraid.
CIS(theta) (2010-2011) has to switch gears and take the best of all the above. Many in the HPC community talk about a software stack, so let's come up with our own! I think we can run our application (a C++ fractal program of our own design, povray, blender or openGL) on top of openMPI, on top of openSSH (with public key authentication), on top of Ubuntu. What about the hardware stack (64bit dual-core AMD Athlons on top of a gigE switch)? I think we can make a stack like this work; what do you think?
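To make that stack concrete, here's roughly what a first smoke test could look like once openSSH and openMPI are installed. The hostnames are placeholders for our lab boxes, and this assumes the compiled binary sits at the same path on every node (shared or copied over):

# hostfile: one line per box, slots=2 for our dual-core Athlons
cat > hostfile <<EOF
node01 slots=2
node02 slots=2
node03 slots=2
EOF

# helloMPI.c: the usual rank-and-size hello world
cat > helloMPI.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

mpicc helloMPI.c -o helloMPI                    # openMPI's compiler wrapper
mpirun -np 6 --hostfile hostfile ./helloMPI     # launches the ranks over ssh, no PXE boot required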
BTW, openGL sounds interesting. Take a look at the links above from Thomas Jefferson High School. They've been running a clustering course for quite some time. In fact, they got into it in the late 1980s when they won a computing competition and Cray donated a supercomputer to their school! More recently they've been playing with openMosix and openMPI as well as fractals and povray, just like us. They have a lot of notes on openGL too! Also, if you want a good overview of all things MPI, take a look at the ualberta link above; it reads like a slide deck even though it's a PDF!
Then we should update our Poor Man's Cluster notes to Poor Man's Cluster 2.0! Step 1 can be about the physical plant, including electrical power, ventilation and AC. Step 2 can be about the hardware stack, including our gigE LAN and Athlon boxes. Step 3 can be about the firmware stack: Ubuntu, openSSH and setting up public keys. Step 4 can be about the software stack, whether that's openMPI+MPITB+Octave, MPI4py, povray, blender or openGL. Step 5 can be about various applications and results.
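For example, the public key part of Step 3 boils down to something like this from the head node (worker01 is a placeholder hostname):

ssh-keygen -t rsa              # accept the defaults; leave the passphrase empty so mpirun can log in unattended
ssh-copy-id user@worker01      # appends our public key to ~/.ssh/authorized_keys on the worker
ssh user@worker01 hostname     # should run without prompting for a password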
Well, that's all for now, enjoy!