Tuesday, January 31, 2012

Quarter III, Week 1: ScreenCasts, SmartNotes and Code, oh my!

OK, here we go again! We're starting Semester II at good ol' BSHS! Before you know it, it'll be AP Week already. We really have to put our collective nose to the grindstone and start reviewing those Barron's Review Books. You AP students know who you are...


AP Calculus BC started Unit 8 on integration techniques.
BC Calc 801 L'Hopital




BC Calc 802 Inverse Sine



BC Calc 803 Inverse Tangent





BC Calc 805 Integration by Parts
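Just for fun, here's a quick sympy check of a few of these Unit 8 ideas (my own sketch, not the screencast code):

# Quick sympy check of this week's Unit 8 topics (a sketch, not the screencast code).
from sympy import symbols, limit, integrate, sin, exp, sqrt

x = symbols('x')

# L'Hopital-type limit: sin(x)/x -> 1 as x -> 0
print(limit(sin(x)/x, x, 0))             # 1

# Inverse sine and inverse tangent antiderivatives
print(integrate(1/sqrt(1 - x**2), x))    # asin(x)
print(integrate(1/(1 + x**2), x))        # atan(x)

# Integration by parts example: x*e^x
print(integrate(x*exp(x), x))            # (x - 1)*exp(x)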





preCalculus for Seniors continued CH8 with Law of Cosines and Heron's formula!
preCalc 803 SAS and Law of Cosines


preCalc 804 SSS and Heron's Formula
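Here's a quick Python sketch of both formulas with made-up numbers (not from the SmartNotes):

# SAS and the Law of Cosines: c^2 = a^2 + b^2 - 2ab*cos(C)
from math import cos, radians, sqrt

a, b, C = 5.0, 7.0, 60.0               # two sides and the included angle (degrees)
c = sqrt(a**2 + b**2 - 2*a*b*cos(radians(C)))
print(c)                                # third side, about 6.24

# SSS and Heron's Formula: area = sqrt(s(s-a)(s-b)(s-c)) where s is the semi-perimeter
s = (a + b + c) / 2
area = sqrt(s*(s - a)*(s - b)*(s - c))
print(area)                             # about 15.16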


AP Computer Science started CH8/EX7 on arrays and ArrayLists! We also finished one more project from EX6 involving game theory or Monte Carlo methods to estimate the value of Pi.
C6X12 Calculate Pi with Darts?
C7X2 Investigating ArrayLists using a Gucci Bag!
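The C6X12 class project is in Java, of course, but here's the same dart-throwing idea as a quick Python sketch:

# Monte Carlo "darts" estimate of Pi (just the idea; the class project itself is in Java).
import random

def estimate_pi(darts=100000):
    hits = 0
    for _ in range(darts):
        x, y = random.random(), random.random()   # random dart in the unit square
        if x*x + y*y <= 1.0:                       # did it land inside the quarter circle?
            hits += 1
    return 4.0 * hits / darts                      # the quarter circle covers pi/4 of the square

print(estimate_pi())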
Calculus Research Lab started our new anti-derivatives eText!
CRL 1.1 Riemann Sums Intro
https://sage.math.clemson.edu:34567/home/pub/285
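The actual worksheet lives on the SAGE server above, but here's a plain Python sketch of a left-endpoint Riemann sum to give you the idea:

# Left-endpoint Riemann sum sketch (plain Python; the real CRL worksheet is on the SAGE server).
def left_riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i*dx) for i in range(n)) * dx

# Approximate the area under y = x^2 on [0, 1]; the exact value is 1/3.
print(left_riemann_sum(lambda x: x**2, 0.0, 1.0, 1000))   # about 0.3328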

HTH and I hope you enjoyed this week's notes!

Learning with Technology, 

Saturday, January 21, 2012

Quarter II, Week 10: ScreenCasts, SmartNotes and Code, oh my!

OOPs, I think I spoke too soon. Last week was not the last week of the quarter, this week is! Good luck on midterms next week.


AP Calculus BC finished Unit 7 on the derivatives of a^x and log_b(x) as well as variable separable differential equations.
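Here's a quick sympy check of those Unit 7 results (my own sketch, not the screencast code):

# Quick sympy check of the Unit 7 wrap-up (a sketch, not the screencast code).
from sympy import symbols, diff, log, dsolve, Function, Eq

x = symbols('x', positive=True)
a, b = symbols('a b', positive=True)

print(diff(a**x, x))              # a**x * log(a)
print(diff(log(x, b), x))         # 1/(x*log(b))

# A variable separable differential equation: dy/dx = x*y
y = Function('y')
print(dsolve(Eq(y(x).diff(x), x*y(x)), y(x)))   # y(x) = C1*exp(x**2/2)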

BC Calc Trig Summary 



BC Calc 705 a^x and logb(x)




BC Calc 706 DiffEqus





preCalculus for Seniors started CH8 reviewing SOHCAHTOA and Law of Sines (AAS, ASA, ASS). Yes, I said it!

preCalc Vector Product Summary


preCalc 802 Law of Sines
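Here's a little Python sketch of the AAS case with made-up numbers (not from the screencast):

# Law of Sines sketch for an AAS triangle: a/sin(A) = b/sin(B) = c/sin(C)
from math import sin, radians

A, B, a = 40.0, 60.0, 10.0            # two angles (degrees) and the side opposite A
C = 180.0 - A - B                     # third angle
k = a / sin(radians(A))               # common ratio
b = k * sin(radians(B))
c = k * sin(radians(C))
print(b, c)                           # about 13.47 and 15.32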


AP Computer Science continued CH7/EX6 on loops. 
Calculus Research Lab finished CH5 on derivatives. After midterms, we start our anti-derivatives eText!


CRL 5.2 Odd Exercises
https://sage.math.clemson.edu:34567/home/pub/271/

CRL 5.4 Tangents and Normals

https://sage.math.clemson.edu:34567/home/pub/276/


Computer Math continued CH7 on chip logic. We also played with Turtle Python. Computer Math is over and there's no Advanced Computer Math next semester. You will be sorely missed! Next year, Computer Math and Advanced Computer Math will be replaced by a full-year Computer Science Honors course. In this new course, we will do everything we do now regarding Discrete Math, Python and SAGE. We will also play with TI-BASIC, Turtle Python and Visual Python! See you in Computer Science next year! Note: the first video is mine, the last two are not!




HTH and I hope you enjoyed this week's notes!

Learning with Technology, 

Monday, January 16, 2012

Listservs, Forums and EDGs: Technology in the classroom, really?


Recently, I caught the tail end of a thread on the AP Calculus Listserv: http://mathforum.org/kb/profile.jspa?userID=32552 BTW, I never know what to call it. It's an online forum for AP Calculus teachers to share ideas that we can log into from http://apcentral.collegeboard.org/, but it's also a listserv I read in my email. To add insult to injury, the College Board site refers to it as an EDG? Yes, you guessed it, an "Educational Discussion Group." How did you ever guess?

Sorry, I digress. So, the following post showed up in my email:
"Thanks to those who have responded to my question about a one-to-one laptop program. I really appreciate the helpful ideas. In case there was confusion, I have had a laptop and projector for many years. Next
year all of our students (grades 9 - 12) will have laptops, so I am looking for ways to incorporate student use of laptops in math classes."

Of course, I had to chime in:
I teach in a PC Lab/Classroom:
http://shadowfaxrant.blogspot.com/2011/12/my-21st-century-classroom-aka-pc-lab.html
I have done so since forever. However, my Calculus students don't really use the PCs much.

We use the PCs more in my computing classes. Here's an example of a new one called CRL:
http://shadowfaxrant.blogspot.com/p/new-calculus-research-lab.html

I use my SmartBoard + remote mouse/tablet + wireless mic and screencaster extensively. For example:
http://www.youtube.com/playlist?list=PLAA6688F60255BEFD&feature=plcp

We do all our Calculus programs and simulations (Riemann Sums, Euler's Method, etc) on a TI-83 or TI-89:
http://www.ebay.com/sch/items/_ti_calculator_active

I post all my handouts on Edmodo:
http://www.edmodo.com/public/ap-calculus-bc-math-5hbc/group_id/208306

Sometimes, I put helpful reminders on their desktops (see unit circle above).

Another thing you may look into is a remote mouse/tablet to control your desktop from anywhere in the room! I used a Smart Airliner Slate (aka Wacom Graphire 6x8 Tablet) for years with just a laptop and
projector before I had a SmartBoard.

Here's a related note I posted on the AP Physics EDG where they are looking into iPad apps for class:
How about an RDP or VNC app to link your iPad to your desktop via WiFi? Then you could use the iPad as a remote mouse/tablet and control your SmartBoard from anywhere in the room. You probably need a decent pen interface too. The iPad has a capacitive interface good for use with a stylus, right?

Then on the AP Computer Science Forum, someone asked about setting up an intro Computer Science course, so I had to jump in:
My intro class is a full year of Discrete Math with Python. I use the Litvins' Discrete Math text, http://www.skylit.com/mathandpython.html, plus command-line Python or http://www.sagemath.org/ and http://sagenb.org/, plus projects:
TI83/84 + TI89/92 BASIC
and some of the following as time permits
turtle.py (turtle objects)
vpython (3D physics simulations)
scipy, numpy, sympy (scientific computing)

This is my new intro course for next year. It's a full-year honors course, Computer Science H aka CSH.  We follow this with a full year of AP Computer Science aka APCS.  Then we have a year of "Computing Independent Study" about Super Computing aka CIS using http://www.pelicanhpc.org/ with octave, mpi4py, mpif77, mpif90, mpicc and mpiCC. Don't forget our new CRL class too!

So, CSH has some aspects of Scientific Computing, Computer Algebra Systems, Discrete Math and Python programming. APCS is an introduction to Object Oriented Programming with Java. CIS is all about Super Computing and Linux Clusters. Last, but not least, CRL is about Calculus, Scientific Computing and Computer Algebra Systems.

Soon, it will be course selection week at Baldwin Senior High. I've been asked to give a 10 minute presentation to a number of classes one day that week on "Why you should take Computing at BSHS." So, the next few blogs will be devoted to said presentation and I will give even more details about CSH, APCS, CIS and CRL!

I will be recruiting for AP Computer Science (APCS) and Calculus Research Lab (CRL) as well as Computing Independent Study (CIS). Another teacher will be recruiting for Computer Science Honors (CSH) which is to replace CM/ACM as a full year Honors course. We really need to encourage as many students as possible to take the new sequence of courses as listed below:

Me,
4H: preCalculus H (11th) --> 5HBC + CRL and APCS
4R: preCalculus (11th) --> 5HAB + CRL and APCS

3R: AlgII&Trig R (11th) --> 4R-11 and APCS


Other teacher,
3H: AlgII&Trig H (10th) --> CSH (11th) --> APCS (12th)
2R: Geometry R (10th) --> CSH (11th) --> APCS (12th)

2H: Geometry H (9th) --> CSH (10th) --> APCS (11th) --> CIS (12th)

Well, that's all folks!



Teaching with Technology, 






Saturday, January 14, 2012

Quarter II, Week 9: ScreenCasts, SmartNotes and Code, oh my!

OK, Quarter II is nearly over. I think we've made very good progress so far this year. AP Calculus BC and AP Computer Science are right on track so we should have a week or two left before AP Week for review! 


I will miss Computer Math! As we finish chip logic next week, we'll have to say goodbye since we don't have Advanced Computer Math in the Spring this year. Hope to see you in AP Computer Science next year! 


preCalculus just finished CH9 on Vectors. We should have nearly all of Quarter IV for an intro to Calculus! 


Calculus Research Lab spent the whole semester on derivatives and applications using SAGE. I hope you got a lot out of it!


AP Calculus BC spent the week on ln(x) and e^x.
BC Calc 702





BC Calc 703





BC Calc 704
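Here's a quick sympy check of this week's derivatives (just a sketch, not the screencast code):

from sympy import symbols, diff, exp, log

x = symbols('x', positive=True)
print(diff(log(x), x))    # 1/x
print(diff(exp(x), x))    # exp(x)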





preCalculus for Seniors focused on the vector dot and cross products.

preCalc 905 Scalar Dot Product


preCalc 907 Vector Cross Product
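Here's a quick numpy sketch of both products with some made-up vectors (in class we work these out by hand, of course):

# Dot and cross product sketch (numpy here; the vectors are made up, not from class).
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

print(np.dot(u, v))        # scalar dot product: 1*4 + 2*(-1) + 3*2 = 8
print(np.cross(u, v))      # vector cross product: [7, 10, -9]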


AP Computer Science continued CH7/EX6 on loops. 
C6X5 Fib.java

C6X7 PrimeGen.java

Calculus Research Lab continued CH5 on applications of the derivative.

CRL 5.1 Direction of a curve
CRL 5.2 Odd Exercises

https://sage.math.clemson.edu:34567/home/pub/271/


Computer Math continued CH6 on boolean operators, truth tables and conditional statements. We also started CH7 on chip logic. Sorry, nothing new on the SAGE server.
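If you want to play along at home, here's a tiny Python sketch of a CH6-style truth table (not the class code, just the idea):

# Truth table for AND, OR and NOT
for p in (True, False):
    for q in (True, False):
        print(p, q, p and q, p or q, not p)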

HTH and I hope you enjoyed this week's notes!

Learning with Technology, 

Thursday, January 12, 2012

CIS(theta) 2011-2012 - Scaling the Cluster! - Meeting VIII

The following is a summary of what we've accomplished so far with the 2011-2012 CIS(theta) team. Our new Shadowfax Cluster is coming along quite well. We have a nice base OS in the 64bit Ubuntu 11.04 Natty Narwhal Desktop on top of our AMD dualcore Athlons and gigE LAN. The Unity Desktop isn't that different from the Gnome Desktop we've been using these past few years on Fedora and Ubuntu. Natty is proving very user friendly and easy to maintain! We installed openMPI and we used flops.f to scale our cluster up to 50 cores! Remember, we needed openSSH public keys so openMPI can scatter/gather cluster jobs without the overhead of logging into each node as needed. We created a user common to all cluster nodes called "jobs" in memory of Steve Jobs so the cluster user can simply log into one node and be logged into all nodes at once (you can actually ssh into each node as "jobs" without specifying a userid or passwd)!
InstantCluster Step 1: Infrastructure
Make sure your cores have enough ventilation. The room has to have powerful air conditioning too. These two factors may seem trivial but will become crucial when running the entire cluster for extended periods of time! Also, you need to have enough electrical power, preferably with the cabling out of the way, to run all cores simultaneously. Don't forget to do the same with all your Ethernet cabling. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was taken care of for us already!

InstantCluster Step 2: Hardware
You need up to date Ethernet switches plus Ethernet cards and cores as well as plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already setup for us. Also, we have 64bit dual-core AMD Athlons and our HP boxes have 2GB of RAM. We are still waiting for our quad-core AMD Phenom upgrade!

InstantCluster Step 3: Firmware (Meeting III)
We wasted way too much time two years ago (2009-2010 CIS(theta)) trying out all kinds of Linux distros looking for a good 64bit base for our cluster. Last year (2010-2011 CIS(theta)), we spent way too much time testing out different liveCD distros. Last year, we also downgraded from the 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but was proving to be a pain to maintain. Just to name one problem, the jre and flash were hard to install and update in Firefox. Two years ago, we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu! This year, 64bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster!

InstantCluster Step 4: Software Stack I (Meeting IV)
On top of Ubuntu we need to add openSSH, public-key authentication (step 4) and openMPI (step 5). Then we have to scale the cluster (step 6). In steps 7-10, we can discuss several applications to scatter/gather over the cluster, whether graphical (fractals, povray, blender, openGL, animations) or number crunching (a C++ or python app for Mersenne Primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below. First, we installed openSSH-server from http://packages.ubuntu.com, then:
  1. If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files. 
  2. From your home directory, make .ssh secure by entering:
    chmod 700 .ssh
  3. Next, make .ssh your working directory by entering:
    cd .ssh
  4. To list/view the contents of the directory, enter:
    ls -a [we used ls -l]
  5. To generate your public and private keys, enter:
    ssh-keygen -t rsa
    The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the passphrase you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
  6. To compare the previous output of ls and see what new files have been created, enter:
    ls -a [we used ls -l]
    You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
  7. To make your public key the only thing needed for you to ssh to a different machine, enter:
    cat id_rsa.pub >> authorized_keys
    [The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes that we could just download to each node's .ssh dir. See the short sketch after this list.]
  8. [optional] To make it so that only you can read or write the file containing your private key, enter:
    chmod 600 id_rsa
    [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
    chmod 600 authorized_keys
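BTW, here's a tiny Python sketch of that cat-and-append step from item 7, assuming the collected public keys really are named temp01 through temp24 on the teacher station as described above:

# Merge the collected public keys (temp01 ... temp24) into one master authorized_keys file.
with open("authorized_keys", "w") as master:
    for n in range(1, 25):
        with open("temp%02d" % n) as keyfile:
            master.write(keyfile.read())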
InstantCluster Step 5: Software Stack II (Meeting V)
We then installed openMPI (we had far fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html
These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev However, the way our firewall is set up at school, I can never update my apt-get sources files properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it!
Then we used the following FORTRAN code to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!
We compiled flops.f on the Master Node (any node can be a master):
mpif77 -o flops flops.f
and tested openmpi and got just under 800 MFLOPS using 2 cores (one PC):
mpirun -np 2 flops
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 PCs or nodes with 2 cores each for example):
mpirun -np 4 --hostfile machines flops
Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/jobs. Every node has the same "authorized_keys" text file with all 25 keys in /home/jobs/.ssh 
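For example, a two-node "machines" file on our LAN (IPs from the 10.5.129.x range above, one per line) would just look like:
10.5.129.21
10.5.129.22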
Note: last year we got about 900 MFLOPS per node. This year we still have 64bit AMD Athlon dualcore processors. However, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running Maverick 10.04 32bit ... and ... these new PCs were supposed to be quadcores! We are still awaiting shipment.
InstantCluster Step 6: Scaling the cluster
UPDATE: 2011.1126 (Meeting VI)
Including myself, we only had 3 members attending this week. So, we added 3 new nodes. We had nodes 21-24 working well last time. Now we have nodes 19-25 for a total of 7 nodes, 14 cores and over 5 GFLOPS! This is how we streamlined the process: 
(1) adduser jobs and login as jobs 
(2) goto http://packages.ubuntu.com and install openssh-server from the natty repository 
(3) create /home/jobs/.ssh dir and cd there 
(4) run ssh-keygen -t rsa 
(5) add the new public keys to /home/jobs/.ssh/authorized_keys on all nodes 
(6) add the new IPs to /home/jobs/machines on all nodes
(7) goto http://packages.ubuntu.com and install openmpi-bin, gfortran and libopenmpi-dev from the natty repository
(8) download flops.f to /home/jobs from our ftp site, then compile and run: 
mpif77 -o flops flops.f and 
mpirun -np 2 flops or 
mpirun -np 4 --hostfile machines flops 
NB: since we are using the same hardware, firmware and compiler everywhere, we don't need to recompile flops.f on every box, just copy flops from another node!
(9) The secret is setting up each node identically:
/home/jobs/flops
/home/jobs/machines
/home/jobs/.ssh/authorized_keys


UPDATE: 2011.1214 (Meeting VII)
We had 5 members slaving away today. Nodes 19-25 were running at about 5 GFLOPS last meeting. Today we added nodes 10, 11, 12, 17 and 18. However, we ran into some errors when testing more than the 14 cores we had last time. We should now have 24 cores and nearly 10 GFLOPS but that will have to wait until next time when we debug everything again....
UPDATE: 2012.0111 (Meeting VIII)
We found that some nodes did not have gfortran installed and many of the authorized_keys files were inconsistent. So, we made sure every node had a user called jobs. Then we made sure every node had openssh-server, openmpi-bin, openmpi-doc (optional), libopenmpi-dev and gfortran installed. We generated all the public ssh keys and copied them over to one master authorized_keys file on shadowfax using ssh. Then we copied the master file back to each node over ssh to /home/jobs/.ssh and we tested everything with flops, and then I wrote this on FB:
Eureka, success! By Jove I think we've done it! I give you joy, gentlemen!

After you guys left, I was still trying to find the bottleneck in the cluster network. So, I just ran "mpirun -np 2 flops" on each box without the --hostfile machines option. Guess what I found: all the cores ran at about 388 MFLOPS except the 2 cores in PC12. These cores were only running at half that speed, about 194 MFLOPS each. So, all I had to do was delete the PC12 line from the machines file on the Master Node and run "mpirun -np 48 --hostfile machines flops" and every core ran over 300 MFLOPS!

What's weird is that when I ran "mpirun -np 2 flops" on any PC other than PC12, the cores would always yield over 380 MFLOPS no matter how many times I tried it. However, when running "mpirun -np 48 --hostfile machines flops" the yield varied greatly. Each core ran somewhere between 310 and 410 MFLOPS. I ran this several times as well. I'd say I got a mean of about 15.5 GFLOPS with a standard deviation of 2.25 GFLOPS. In other words, a typical run would fall in the range of 13.25 to 17.75 GFLOPS.

What's even weirder is that the pelicanHPC liveCD ran all 50 cores last week at nearly 20 GFLOPS with each core running more than 350 MFLOPS. What's up with that?

So, we are back on track. We got the cluster fully operational before midterms! Our next meeting (2/8/12) will be about what applications we can run over openMPI. You each should google for beginners' projects using mpicc, mpiCC, mpif77 or mpif90 and maybe even mpi4py. Simple number crunching or graphics would do! Maybe I'll also show you how pelicanHPC liveCD can PXE boot the whole room into an openMPI cluster in 5-10 minutes!
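In the meantime, here's about the simplest mpi4py sketch I can think of for that kind of beginners' project (assuming mpi4py gets installed on each node; this is not something we've run on the cluster yet):

# hello_mpi.py -- a minimal mpi4py sketch
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # this process's id
size = comm.Get_size()        # total number of processes

print("Hello from rank %d of %d" % (rank, size))

Then something like "mpirun -np 4 --hostfile machines python hello_mpi.py" should fan it out over the cluster.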
===================================================
What we are researching V 
(we used to use the BCCD liveCD, look at their openMPI code):
===================================================
What we are researching IV 
(maybe we can use Python on our MPI cluster?):
What we are researching III 
(look at some clustering environments we've used in the past):
We used PVM and PVMPOV in the 1990s.
Let's try out openMPI and Blender now!
http://www.blender.org
===================================================
What we are researching II 
(look what other people are doing with MPI):
MPI intro, nice!
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
===================================================
What we are researching I 
(look what this school did in the 80s and 90s): 
Thomas Jefferson High courses
Thomas Jefferson High paper
Thomas Jefferson High ftp
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
===================================================
Today's Topic:
CIS(theta) 2011-2012 - Scaling the Cluster! - Meeting VIII
Today's Attendance:
CIS(theta) 2011-2012: GeorgeA, KennyK, LucasE; CIS(theta)
Today's Reading:
Chapter 5: Building Parallel Programs (BPP) using clusters and Parallel Java
===================================================
Well, that's all folks, enjoy!