Sunday, August 23, 2009

All the topics or threads I've written on so far focus primarily on last school year, 2008-2009, at Baldwin High School and Nassau Community College. Some of these threads also referenced issues from years prior to that. So, I think I need some new topics:
"General" will remain unchanged.
"LinuxCluster" will become "Clusters 2009-2010," relating to the work of my Computer Science Independent Study (CSI) students.
"Scientific Computing" will become "Learning with Technology 2009-2010," focusing on my new Calculus Research Lab (CRL).
"SmartBoards" will become "Teaching with Technology 2009-2010," discussing technology issues from my end as the teacher delivering content with technology.
Generally Speaking,
calcpage@aol.com
Friday, August 21, 2009
Poor Man's Cluster - Step 9
This is going to be my last entry in this topic; I'll be starting new topics for the 2009-2010 school year. Anyway, I thought you'd be interested in a couple of scripts we used from time to time. They're not strictly necessary, but they proved helpful:
biscuit.sh:
#!/bin/bash
# Render the biscuit sample scene locally: +i input file, -f no file
# output, +d display on, +p pause when done, +v verbose, +w/+h the
# resolution, +a0.1 antialiasing, +L the include library path.
./povray +i./scenes/advanced/biscuit.pov -f +d +p +v +w1024 +h768 +a0.1 +L./include
addkey.sh:
#!/bin/bash
# Generate an RSA keypair (no passphrase) on every node and collect the
# public keys on the master for merging into one authorized_keys file.
# Earlier one-node-at-a-time attempts are left commented out.
#ssh root@10.5.129.$1 "useradd -m cluster && passwd cluster"
#ssh cluster@10.5.129.$1 "mkdir -p /home/cluster/.ssh;echo 1; ssh-keygen -N \"\" -f /home/cluster/.ssh/id_rsa -t rsa;echo 2;"
#scp -v cluster@10.5.129.$1:/home/cluster/.ssh/id_rsa.pub cluster@10.5.129.7:/home/cluster/id_rsa.pub$1;echo 3;
#ssh -x cluster@10.5.129.$1 "mkdir -p /home/cluster/.ssh; ssh-keygen -N \"\" -f /home/cluster/.ssh/id_rsa -t rsa; scp /home/cluster/.ssh/id_rsa.pub cluster@10.5.129.7:/home/cluster/id_rsa.pub$1"
for ((a=1; a<26; a++)); do
    echo trying $a
    ssh -x cluster@10.5.129.$a "ssh-keygen -N \"\" -f /home/cluster/.ssh/id_rsa"
    scp cluster@10.5.129.$a:/home/cluster/.ssh/id_rsa.pub /home/cluster/id_rsa.pub$a
done
populate.sh:
#!/bin/bash
# Push the merged authorized_keys, known_hosts and a .vimrc out to every
# node; hard-link authorized_keys2 for older sshd versions that use it.
for ((i=1; i<26; i++)); do
    scp .ssh/authorized_keys 10.5.129.$i:/home/cluster/.ssh
    scp .ssh/known_hosts 10.5.129.$i:/home/cluster/.ssh
    scp .vimrc 10.5.129.$i:/home/cluster
    ssh 10.5.129.$i ln .ssh/authorized_keys .ssh/authorized_keys2
    echo done $i
done
povkill.sh:
#!/bin/bash
# Kill any povray processes left running by the cluster user on the
# given nodes; accepts bare node numbers, full IPs, self or localhost.
for a in $@; do
    addr="10.5.129."${a#10.5.129.}
    addr=${addr/10.5.129.self/127.0.0.1}
    addr=${addr/10.5.129.localhost/127.0.0.1}
    addr=${addr/10.5.129.127.0.0.1/127.0.0.1}
    ssh cluster@$addr pkill -u cluster povray
    echo $a done
done
shutdown.sh:
#!/bin/bash
# Despite the name, this reboots (rather than halts) each node number
# passed on the command line.
for a in $@; do
    ssh root@10.5.129.$a reboot
done
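For completeness, here's roughly how we'd drive these helpers from the master node. This is a sketch: the cat line that merges the collected keys into authorized_keys is my shorthand for the cut-and-paste step described in Step 8, so treat the exact filenames as illustrative.

# gather a fresh public key from every node
./addkey.sh
# merge the collected keys and push the result back out to all nodes
cat /home/cluster/id_rsa.pub* >> /home/cluster/.ssh/authorized_keys
./populate.sh
# kill stray renders on nodes 3 and 7, then reboot them
./povkill.sh 3 7
./shutdown.sh 3 7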
Enjoy,
A. Jorge Garcia,
calcpage@aol.com,
http://calcpage.tripod.com/,
http://www.zazzle.com/cistheta2008,
http://cistheta2007.deviantart.com/,
http://www.youtube.com/cistheta2007
Thursday, August 20, 2009
A Little History
Topic: LinuxClusters
Just thought I'd provide a little background:
Our First Cluster: 1995-2001, Ethernet (10Mbps), Gateway PCs, 128MB RAM, mixed Intel Pentium II 400MHz and Pentium III 800MHz
Our Second Cluster: 2001-2007, Fast Ethernet (100Mbps), Dell PCs, 1GB RAM, Intel Pentium 4 2.6GHz
Our Current Cluster: 2007-now, Gigabit Ethernet (1000Mbps), HP PCs, 750MB RAM, AMD Athlon 64-bit dual-core 2GHz
That's all folks,
Monday, August 17, 2009
Poor Man's Cluster - Step 8
Topic: LinuxClusters
Now that I think about it, the title of this thread should not have been "Poor Man's Cluster," as the only thing cheap about it is the firmware and software, which are free; the hardware is state-of-the-art. The topic should have been something like "Instant Cluster, Just Add Water!"

Anyway, just to emphasize the public key ssh authentication, here is the thread from www.knoppix.net that I linked below:
Posted: Mon Jun 02, 2008 12:35 pm
My students and I are interested in setting up a Parallel Java (PJ) cluster (http://www.cs.rit.edu/~ark/pj.shtml). These are my advanced independent study students, who just spent a year working with MPI & openMosix using C & C++. MPI is getting a bit dated and the openMosix project just closed, so they want experience with a new kind of cluster/grid paradigm.
Anyway, we've complied with all the installation requirements for PJ (http://www.cs.rit.edu/~ark/pj/doc/edu/rit/pj/cluster/package-summary.html, middle of the page) except for the following:

Furthermore, it must be possible to do an SSH remote login from the frontend processor to each backend processor without the user having to type a password. This is done using public key authentication. SSH must have public key authentication enabled. Each user must set up SSH public and private keys in his or her account. For further information, refer to the SSH documentation.

I have sshd running, but I don't have any experience with "public key authentication." Anyone out there have any suggestions or tips for this?

TIA,
AJG
Posted: Thu Jun 12, 2008 11:26 pm
BUMP!
OK, I figured it out. I simply started sshd and created a login on each PC with the same userid and passwd. I logged into each and issued the following command line:

Code:
ssh-keygen -t rsa

This created a file called id_rsa.pub in the hidden .ssh folder of each user's home dir. Then I scp'ed these files to each machine (with diff filenames) to cut and paste into one large file called authorized_keys2, and that's it! I can now ssh to and from each PC using that userid and no passwd. In fact, as I was logged into each PC with the same userid, I don't think I even need the userid to ssh or fish!

See: http://www.petefreitag.com/item/532.cfm

HTH Some1,
AJG
Posted: Fri Jun 20, 2008 11:52 am
Correct, instead of fish://userid@xxx.xxx.xxx.xxx in Konqueror with a passwd, I can fish without a passwd.

Also, I can simply fish://xxx.xxx.xxx.xxx - no need for a userid or passwd as long as I'm using the same userid on each PC.

You just have to have the one userid in common on all PCs, sshd running and the correct authorized_keys2 file in each home dir. Just log into one with a passwd normally, then fish or ssh away to your heart's content without userids or passwds!

HTH Some1 even though my application may be a bit unusual,
AJG

PS: any1 else out there setting up linux clusters?
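By the way, a quick sanity check for anyone following along that the keys took (the node address is just an example): this should print the remote hostname without ever prompting for a passwd.

ssh cluster@10.5.129.2 hostname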
Hope that helps,
Sunday, August 16, 2009
Poor Man's Cluster - Step 7
Topic: LinuxClusters
OK, that's basically it! So, if you use this "Poor Man's Cluster" how-to to set up your own cluster, I think you'll find it very easy to do. You'll have a blast running your own ray tracings, too!

By using the public key authentication technique I suggested here, you don't need a special Linux kernel the way openMosix does, no middleware like Parallel Java, no special compilers such as MPICH's mpicc, and no special environments like MPI or PVM.

What's more, you could even write your own C or C++ programs that behave like povray, say fractal.cc, and you could reuse the same chop.c program and chopsplit.sh script with only minor modifications!
Happy clustering,
Sunday, August 9, 2009
Poor Man's Cluster - Step 6
Topic: LinuxClusters
That's about it, except we also named our nodes:

10.5.129.1 = alpha, 10.5.129.2 = beta, 10.5.129.3 = gamma, ...

This worked well, as there are 24 letters in the Greek alphabet and we had 24 nodes. We used the teacher's PC as the master node, but that one is 10.5.129.25 = hal9000!
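If you want those names resolvable from every node, one way to do it (a sketch, not a copy of our actual config) is to add entries like these to /etc/hosts on each machine:

# /etc/hosts excerpt: Greek letters for the workers, hal9000 for the master
10.5.129.1    alpha
10.5.129.2    beta
10.5.129.3    gamma
# ... delta through omega on .4 through .24 ...
10.5.129.25   hal9000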
Happy clustering,
Saturday, August 8, 2009
Poor Man's Cluster - Step 5
Topic: LinuxClusters
Now, to do a rendering, you have to be logged into the master as "cluster." Also, you have to download and install the latest version of povray (we used 3.6) on every worker node. We also saved temporary files in /home/cluster/distimg, so make this directory on every worker node. Finally, the following C program, chop.c, needs to be on the PATH of every worker; we put it in /bin as root.

To actually start a job, we used a command line like the following:

time ./chopsplit.sh advanced/biscuit.pov 400 300 biscuit.ppm 1 2 3 ...

if we wanted to render biscuit.pov on nodes 1, 2, 3, and so on - that's really 10.5.129.1, 10.5.129.2, 10.5.129.3, .... Then we converted biscuit.ppm to a JPG using the GIMP. In this example, 400 is the horizontal pixel resolution and 300 is the vertical.

Type the code that follows using any text editor, call it "chop.c" and compile it with the command line: gcc -o chop chop.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <errno.h>

#define MAX_UINT ((unsigned)-1)
#define MAX_INT (MAX_UINT>>1)
#define MIN_INT (MAX_INT+1)

//Almost as fast as #defines, no side effects
static inline int max(int a, int b)
{
    if (a>b)
        return a;
    return b;
}

static inline int min(int a, int b)
{
    if (a<b)
        return a;
    return b;
}

int main(int argc, char *argv[])
{
    struct stat stats;
    int infile, outfile, bs, fbyte, lbyte, fpos=0;
    char *buf;
    int x, y, off;
    if (argc<3)
    {
        printf("%s usage: filein fileout [firstbyte [lastbyte]]\n", argv[0]);
        exit(1);
    }
    //if argv[1]=="-", read from stdin
    if (argv[1][0]=='-' && argv[1][1]=='\0')
        infile=STDIN_FILENO;
    else
    {
        infile=open(argv[1], O_RDONLY);
        if (infile==-1)
        {
            fprintf(stderr, "Error opening input file. Quitting.\n");
            exit(2);
        }
    }
    //if argv[2]=="-", write to stdout
    if (argv[2][0]=='-' && argv[2][1]=='\0')
        outfile=STDOUT_FILENO;
    else
    {
        outfile=open(argv[2], O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH /*644*/);
        if (outfile==-1)
        {
            fprintf(stderr, "Error opening output file. Quitting.\n");
            perror(NULL);
            exit(2);
        }
    }
    if (argc>3)
    {
        fbyte=atol(argv[3]);
        if (argc>4)
            lbyte=atol(argv[4]);
        else
            lbyte=0;
    }
    else
        lbyte=fbyte=0;
    if (infile==STDIN_FILENO || fstat(infile, &stats))
    {
        //Cannot get file size, defaulting to 64k buffers
        bs=64*1024;
        if (fbyte<0)
            fbyte=0;
        if (lbyte<0)
            lbyte=0;
    }
    else
    {
        bs=stats.st_size;
        //Negative offsets count back from the end of the file
        if (fbyte<0)
            fbyte+=bs;
        if (lbyte<0)
            lbyte+=bs;
    }
    buf=malloc(bs);
    while (buf==0)
    {
        bs/=2;
        if (bs==0)
        {
            fprintf(stderr, "No memory available, quitting\n");
            exit(3);
        }
        buf=malloc(bs);
    }
    if (lbyte==0)
        lbyte=MAX_INT;
    fpos=lseek(infile, fbyte, SEEK_SET);
    if (fpos==-1)
    {
        if (errno==ESPIPE)
        {
            //Can't seek on a pipe, so read and discard up to fbyte
            fpos=0;
            while (fpos<fbyte)
            {
                x=read(infile, buf, min(bs,fbyte-fpos));
                //EOF before fbyte
                if (x==0)
                    exit(4);
                if (x==-1)
                {
                    fprintf(stderr, "Read failure. Quitting\n");
                    exit(5);
                }
                fpos+=x;
            }
        }
        else
        {
            fprintf(stderr, "Seek error. Quitting\n");
            exit(6);
        }
    }
    //Copy bytes [fbyte, lbyte) from input to output
    while (fpos<lbyte)
    {
        x=read(infile, buf, min(bs,lbyte-fpos));
        if (x==0)
            exit(0);
        if (x==-1)
        {
            fprintf(stderr, "Read error. Quitting\n");
            exit(6);
        }
        fpos+=x;
        off=0;
        while (x>0)
        {
            y=write(outfile, buf+off, x);
            if (y==-1)
            {
                fprintf(stderr, "Write error. Quitting\n");
                exit(7);
            }
            x-=y;
            off+=y;
        }
    }
    return 0;
}
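If you want to see what chop does in isolation, here's a quick sketch (the filenames are made up for illustration). A 400x300 P6 PPM starts with the 15-byte header "P6\n400 300\n255\n", so stripping that header to get the raw pixel bytes looks like this:

# copy everything from byte 15 to EOF, dropping the PPM header
./chop full.ppm body.ppm 15
# same thing on a pipe, the way chopsplit.sh invokes it
cat full.ppm | ./chop - - 15 > body.ppm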
TTYL,
Friday, August 7, 2009
Poor Man's Cluster - Step 4
Topic: LinuxClusters
My students wrote a shell script called chopsplit.sh to scatter/gather the jobs. We rendered several ray tracings using povray with this script (see the Zazzle site listed in my sig). This file, chopsplit.sh, has to be in /home/cluster on the master node with permissions 755 (use chmod as root to change this). You can write the following script in any text editor:
#!/bin/bash
#Arguments should be:
#pov file
#width
#height
#output file
#machines
if [[ $# -lt 5 ]]; then
echo "Usage: $0 scene width height output machines"
exit
fi
prefix=/home/cluster/povray-3.6/
imgdir=/home/cluster/distimg/
myip=`ip addr show | grep inet\ 10.5.129 | cut -f6- -d\ | cut -f-1 -d/`
if [[ ${1:0:1} == / ]]; then
scene=$1
else
scene=${1#scenes/}
fi
width=$2
height=$3
output=${4%.ppm}".ppm"
shift 4
tprocs=$(( $#*2 ))
procs=0
rm -f ${imgdir}/*
for a in $@; do
addr="10.5.129."${a#10.5.129.}
addr=${addr/10.5.129.self/127.0.0.1}
addr=${addr/10.5.129.localhost/127.0.0.1}
addr=${addr/10.5.129.127.0.0.1/127.0.0.1}
echo Starting $addr-1
ssh -x cluster@$addr "$prefix/povray +i$prefix/scenes/$scene -v -d -p +fp +w$width +h$height +sr$((procs*height/tprocs+1)) +er$(((procs+1)*height/tprocs)) +a0.3 +L$prefix/include +L$prefix/scenes/${scene%/*} +o- 2>/dev/null | chop - - $((9+${#width}+${#height})) && echo $addr-3 done 1>&2" >$imgdir/3$a${output} &
sizes[procs]=$((((procs+1)*height/tprocs-procs*height/tprocs)*3*width))
procs=$((procs+1))
echo Starting $addr-2
ssh -x cluster@$addr "$prefix/povray +i$prefix/scenes/$scene -v -d -p +fp +w$width +h$height +sr$((procs*height/tprocs+1)) +er$(((procs+1)*height/tprocs)) +a0.3 +L$prefix/include +L$prefix/scenes/${scene%/*} +o- 2>/dev/null | chop - - $((9+${#width}+${#height})) && echo $addr-4 done 1>&2" >$imgdir/4$a${output} &
sizes[procs]=$((((procs+1)*height/tprocs-procs*height/tprocs)*3*width))
procs=$((procs+1))
# echo $addr
done
echo ${sizes[@]}
procs=0
echo -e "P6\n$width $height\n255" > $output
for a in $@; do
until [[ `du -b $imgdir/3$a${output} 2>/dev/null | cut -f-1` -eq ${sizes[procs]} ]]; do
sleep 1
done
echo $a$output-3
cat $imgdir/3$a${output} >> $output
procs=$((procs+1))
until [[ `du -b $imgdir/4$a${output} 2>/dev/null | cut -f-1` -eq ${sizes[procs]} ]]; do
sleep 1
done
echo $a$output-4
cat $imgdir/4$a${output} >> $output
procs=$((procs+1))
done
echo $output built successfully!
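One line worth decoding: the chop - - $((9+${#width}+${#height})) in each ssh command strips the PPM header that every worker's povray writes, because the master writes a single header itself with the echo -e "P6\n$width $height\n255" line. The arithmetic is just the header's length:

# "P6\n" (3) + space (1) + newline (1) + "255\n" (4) = 9 fixed bytes,
# plus the digits of width and height; for 400x300 that's 15:
width=400; height=300
echo $((9 + ${#width} + ${#height}))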
Good Luck,
Thursday, August 6, 2009
Poor Man's Cluster - Step 3
Topic: LinuxClusters
OK, at this point, what we did was make a new user on each and every node, just for us clustering types! I logged into each node (PC) as root and used the adduser/userdel command lines. This user has the same userid, say "cluster," and the same passwd on every node.

Then we set up ssh public key authentication for each of these new users, as detailed in the www.knoppix.net thread quoted in full in Step 8 above.

With this setup, you need only log in as "cluster" on one linuxbox and you can ssh into any other linuxbox with the same userid/passwd without specifying either the userid or the passwd! It's like you're logged into all 25 PCs at once! This makes it easy to send jobs from the PC you're logged into (the master node) and receive results from all the other nodes (the worker nodes) whenever we need to scatter/gather jobs.
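For reference, the core of that setup boils down to two commands (a minimal sketch assuming the "cluster" account and its ~/.ssh directory already exist everywhere; 10.5.129.2 just stands in for each remote node):

# as cluster on each node: generate a passphrase-less RSA keypair
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# append this node's public key to the other node's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh cluster@10.5.129.2 "cat >> ~/.ssh/authorized_keys"

The scripts in the Step 9 post above automate this across all 25 nodes.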
Until next time,