Wednesday, April 29th, 2015

Maker some math

Preparing students today means preparing them for the unknowns of tomorrow. Intellectual flexibility enables someone to adapt knowledge to emerging issues. For me, the college classroom should strive to prepare students for jobs not yet created or even envisioned. How do we do this? In many ways, we already have been. What we teach in the classroom introduces concepts, of course, but also ways of thinking. Looking through the mathematical lens brings the world into a unique focus. In math, we solve problems, and one need only open the newspaper or watch the evening news to see that our world abounds with questions and unresolved issues. Learning to break down a puzzle, large or small, and to analyze and solve it is an inherent part of math at all levels.

A year ago, I visited a Digital Studies class, Hacking, Remixing and Design, taught by Dr. Mark Sample of Davidson College. Dr. Sample crafts an environment that welcomes risk taking by everyone – students and professor alike – as he responds to the classroom environment. In particular, the class is given assignments designed to lead to failure. The goal is to analyze how and why each attempt failed and to successively improve. In fact, the class wrote “flogs,” or failure logs, which detail their failures and later analyze their work to better understand their tendencies to fail and innovate. As I watched the students in the class openly share their struggles, I thought about math classes. What better way could students engage the unknown and step forward confidently, even when many attempts will most probably be unsuccessful?

In an attempt to move in this direction, I’ve leaned on Davidson’s movement in the realm of digital studies. We have a Maker Space called Studio M, which offers opportunities to work with 3D printers, flyable drones and low-cost computing devices, many of which do not require programming experience. My first introduction to such technologies came with Makey Makeys. I’m currently working with a student to create a school program using these low-cost computational devices, which allow things like playing a piano by touching potatoes rather than keys.


Last semester, I decided to introduce 3D printing into Calculus 3, which inherently requires visualization in 3-space. Adapting a Mathematica notebook from George Hart, I asked my students to create 3D solids to print. A student who worked in Studio M created examples that led to both successful and undesirable results. Then the students were tasked with creating an equation that resulted in a solid to print.
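To give a flavor of the kind of equation involved (this is my own illustrative example, not one submitted in the class), a student might start from an implicit inequality whose solution set is a closed solid:

```latex
% A "bumpy sphere": the unit ball with a sinusoidal perturbation.
% A point (x, y, z) belongs to the solid exactly when
x^2 + y^2 + z^2 + \tfrac{1}{2}\sin(4x)\,\sin(4y)\,\sin(4z) \le 1
```

Since the perturbation term never exceeds 1/2 in absolute value, the solid stays inside a ball of radius sqrt(3/2), so it is bounded and printable; tuning the amplitude and frequency lets a student predict, before printing, how the surface should change.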

The assignment was to create a shape, print it, take a picture of their work, and then write a page sharing their equation, intended result and an evaluation of their work. Did the shape match their intentions? If so, why? If not, why not? If the result matched their intentions, might they push limits more next time? If a shape didn’t print as desired, why not? They needed only to print once and then analyze their result.

I signed the students up for weekly printing shifts in Studio M to keep a steady stream of students over the 5-week period. I learned from the students and the Studio M staff that many students printed multiple times. Early on, I reasserted that students did not need to have 3D models that matched their expectations. One of the students who had completed his assignment laughed, “I knew that. I just really wanted to make it work. It wasn’t about the assignment or the grade.”

Isn’t that truly what we search for in education? We look for students to dig into the material from their own interests and desire to learn. In this case, many students engaged in this way. They applied knowledge from a textbook to a relatively new technology. And some students shared their work. Here is a piece made by a first-year student. She made this 3D solid for her boyfriend.


There are many ways to engage students. Challenging students so they learn to tackle problems and step into the unknown is an important part of the mathematical classroom. For me, integrating 3D printing into Calculus created a teachable moment. It was the best kind of moment — one in which the students learned on their own and didn’t need me. I was mainly relevant when they were ready to share their work and, for many, when they wanted to share their excitement.

Friday, November 22nd, 2013

Decimating Gollum

Last week, Henry Segerman visited from Oklahoma State. He presented a seminar for our undergraduates and also spoke about 3D printing with the public school teachers in my Charlotte Teachers Institute (CTI) seminar. It was a delightful and enlightening time. As you see below, we were all visually and intellectually treated by our time with Henry.

In the middle of his time with the CTI fellows, Henry referred us to Thingiverse as a resource for free and adaptable 3D printing models. I began exploring the options and soon found a wonderful model called DevakingA by user YahooJAPAN. While Davidson has a new Maker space with 3D printing, my affinity for linear algebra had me looking for the wireframe data! Soon, I found an STL File Reader for MATLAB, which enabled me to visualize the DevakingA model. That is, my students could now apply 3D rotation matrices to a model they liked on Thingiverse.
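As a sketch of what the students could then do (the vertex list below is made up, standing in for the thousands of vertices read from an STL file), applying a rotation matrix to every vertex rotates the whole model:

```python
import math

def rotation_z(theta):
    """3x3 matrix for a rotation by theta radians about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

def apply(matrix, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(matrix[i][j] * v[j] for j in range(3)) for i in range(3))

# Made-up vertices standing in for wireframe data from an STL file.
vertices = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 2.0)]

quarter_turn = rotation_z(math.pi / 2)
rotated = [apply(quarter_turn, v) for v in vertices]
# A quarter turn sends (1, 0, 0) to (0, 1, 0).
```

The same idea, with rotation matrices about the other axes, lets a linear algebra class spin a downloaded model any way it likes before rendering or printing it.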

The next day, Henry made his way into our Studio M space and worked on 3D printing a math manipulative for a Thingiverse challenge. My mind continued to stretch. Over the next week, I continued to explore 3D models and Henry’s work. I began to wonder about masks (given my use of them in mime) and 3D printing. I continue to explore that idea, but it also led me to think about taking a model and reducing its number of polygons.

Dan Goldman of Adobe helped me. He led me to MeshLab, which can import an STL file directly. Using its Quadric Edge Collapse Decimation filter, I was able to quickly get what I wanted. I decided to try it on a model of Gollum that I found on Thingiverse, created by user BorisBelocon. Note, Dan warned me that applying the decimation method and then applying it again to the result can make MeshLab crash. I found this to be the case. How? I was so excited with my initial result that I didn’t save it! I just applied the decimation method again. And yes, MeshLab crashed!

So, how much did I decimate Gollum? Look at the image below:

Happy decimating!

Monday, September 9th, 2013

Loving math and mime

In May, I was contacted by Ari Daniel regarding a series of radio pieces funded under the STEM Story Project to appear on the Public Radio Exchange. By August, Ari had arranged to meet us at the MoMath MOVES conference in New York City. At the time, we had just returned from performing at the Bridges art and math conference in Enschede, Netherlands.

During the conference, Ari compiled over 8 hours of tape. To the right you see Ari interviewing Tanya in Central Park, where we met for our final interview.

Over the following weeks, Ari wove together a story, focusing largely on my journey of math and mime and folding in Tanya’s constant companionship. The piece is entitled “Loving math and mime” and frankly is really a tale of three loves: Tanya, math and mime. Click the image below, which was taken by Ari in Central Park, to visit the PRX site and see what you think:

Thursday, February 28th, 2013

Finding Lightning in the Cloud

A picture is worth a thousand words, but could it also be worth a thousand searches or tweets to read?

A few weeks ago when Pope Benedict XVI resigned, I sat in my office with a Davidson student discussing the news story. Neither of us knew much more than that the historic decision had been made. Wanting to dig a bit deeper, I demonstrated a method of searching Twitter that I developed with Brian McGue, a Davidson student who researched data mining with me last summer and fall.

The process is simple. First, you search Twitter on a topic of interest. For instance, I searched on the word “pope”. The tweets are then placed within a word cloud, which gives greater prominence to words that appear more frequently in the set of tweets. This was easily accomplished with the Word Cloud Generator. Our search returned the word cloud seen below.
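Under the hood, a word cloud is just a word-frequency count. Here is a minimal sketch of that counting step in Python, using a few mock tweets in place of real search results:

```python
from collections import Counter
import re

# A handful of mock tweets standing in for real Twitter search results.
tweets = [
    "The pope resigns, first time in 600 years",
    "Lightning strikes the Vatican hours after the pope resigns",
    "pope steps down; lightning photo going viral",
]

# Common filler words that a word cloud typically drops.
stopwords = {"the", "in", "a", "after", "first", "going"}

words = re.findall(r"[a-z']+", " ".join(tweets).lower())
counts = Counter(w for w in words if w not in stopwords)

# The most common words would get the largest type in the cloud.
print(counts.most_common(3))
```

A word cloud generator then simply maps these counts to font sizes, so the words that dominate the conversation — here “pope” and “lightning” — dominate the picture.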

This word cloud alone gives a snapshot of the news coverage, at least as tweeted by the collective subscribers of Twitter. At this stage, we simply looked for other words of interest. The word “lightning” caught our eyes. We postulated that folks were tweeting about how quickly the news spread. To let Twitter inform us, we refined our search to “pope lightning”. The following word cloud appeared:

We both looked at the image and suddenly remarked, “Wait a minute! Did lightning actually strike after the decision?” Was this a Twitter joke that went viral? Was it real? We quickly moved to Google News and searched again on “pope lightning”. This led us to the Huffington Post news story Pope Benedict XVI Resigns: Lightning Bolt Strikes St Peter’s Basilica As Vatican Confirms Pontiff’s Departure.

Indeed, lightning struck St. Peter’s Basilica after the pope’s announced resignation. We saw it in the cloud.

What topic interests you? Maybe look to see what’s trending on Twitter and see if you can see more clearly with your head in the clouds.

Saturday, November 24th, 2012

Raising the record to 138

On November 20, 2012, Grinnell College played Faith Baptist Bible College in a men’s basketball game that got the NBA’s attention. Why? Grinnell’s freshman guard, Jack Taylor, scored an unbelievable 138 points.


Soon, NBA stars were talking to reporters and tweeting about the game. LeBron James compared the college player to the likes of Kobe Bryant and Wilt Chamberlain in an interview with ESPN. The tweets that filled cyberspace included:

  • Oklahoma City Thunder’s Kevin Durant tweeted, “Jack Taylor you deserve a shot of Jack Daniels after that performance lol…wow”
  • Retired forward Donyell Marshall commented, “This is crazy. Hope he iced his arms after game.”
  • Golden State Warriors’ Charles Jenkins wrote, “wouldn’t be surprised if Jack Taylor transfer tomorrow lol .. thats crazy 138 points”
  • Houston Rocket’s Chandler Parsons requested, “Have to see highlights of this kid Jack Taylor putting up 138 points.”
  • Thunder center Cole Aldrich tweeted, “138 points is impressive but I think shooting 108 times (71 3 pointers) is more impressive. #throwemup”

Aldrich’s tweet represents a mathematical answer to Chandler Parson’s wish. To begin, here are a few statistical highlights of the game and particularly Taylor:

  • 58 of Taylor’s 138 points were scored by halftime
  • Taylor set NCAA records for field goals (52), field goal attempts (108), 3-pointers made (27) and 3-pointers attempted (71).
  • Taylor broke Clarence “Bevo” Francis’ record (set in 1954) of 113 points with 4:42 remaining.

Now, let’s dig a little deeper. How many 3-pointers did Taylor attempt? According to the NBA’s All Ball blog, shooters attempt 30 3-pointers per round in the All-Star Weekend’s 3-Point Contest. Taylor shot 71 in a single game. That’s a lot of shots!

In fact, a fast-paced game is the signature style of play of the Grinnell Pioneers. This is partially seen by noting that Taylor broke the NCAA Division III record of 89 points set by his teammate Griffin Lentsch last season. It is even clearer when one notes that Taylor played only 36 minutes in the game. Take a moment and reflect on that stat! This means Taylor attempted 108 shots in 36 minutes, which equates to 3 shots a minute. Keep in mind that 71 of those shots were 3-point attempts, so Taylor averaged nearly two 3-point attempts per minute.

In order for Taylor alone to shoot 3 times per minute, Grinnell must have at least 3 possessions each minute, and the other team must have the ball at least 2 times each minute in between. So, the teams are keeping the ball at most about 12 seconds per possession. This seems more plausible when you recall the game was a 179-104 victory for Grinnell over Faith Baptist Bible. Further, the Faith Eagles’ David Larson scored 70 points himself.
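The per-minute arithmetic above is easy to check directly from the box-score figures quoted earlier:

```python
# Box-score figures from the post.
minutes = 36
attempts = 108      # field-goal attempts
threes = 71         # 3-point attempts

shots_per_minute = attempts / minutes     # 108 / 36 = 3.0
threes_per_minute = threes / minutes      # 71 / 36, just under 2

# If Taylor alone shoots 3 times a minute, Grinnell has at least 3
# possessions per minute and the opponent roughly as many in between,
# so each possession lasts on the order of 60 / 5 = 12 seconds.
seconds_per_possession = 60 / 5
```

Three attempts a minute, sustained for 36 minutes, is the stat that makes the 138-point total believable.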

Taylor truly had a stunning performance that Tuesday night; something he could easily give thanks for at that Thursday’s Thanksgiving table. At first, the performance can almost seem impossible. The stats give some insight and are, in themselves, a highlight reel of that evening.

Monday, October 22nd, 2012

Tubular Topology

Yesterday, which would have been Martin Gardner’s 98th birthday, I posted the article Tied in a Mathematical Knot to honor Gardner’s birthday and the Celebration of Mind events that are occurring and will occur throughout the world. Ours at Davidson will be next month.

The article was presented in the context of playing with rope or string. In the Mime-matics show that Tanya Chartier and I perform, we motivate this same idea a bit differently. First, we perform the sketch you see below.

So, how do we teach topology with the tube? We play a game with our flexible friend, Slink. We have a simple rule.

Two objects are mathematically similar if Slink can move from one shape to the other without detaching or attaching the ends of the tube.

If Slink starts in this shape:

which of the following letters are mathematically similar to that shape?

We find many children enjoy exploring these ideas with our tube. Need your own Slink? Get a dryer vent and go at it!

Wednesday, October 17th, 2012

Getting Mean about Bill Gates

Suppose you are at a Major League Baseball game and your friend is looking through binoculars for celebrities. Suddenly, your friend turns and comments, “Did you know that the average person at this game is a millionaire?” How would you respond? Who did your friend see in the stands?

Turns out if Bill Gates, as seen to the left from his Twitter picture, attended a baseball game, regardless of the stadium, then the average net worth for someone at the game would be over a million dollars.

According to Wikipedia, Bill Gates’ net worth in 2012 is 66 billion dollars. Also according to Wikipedia, the largest baseball stadium is Dodger Stadium, holding 56,000. So, if Bill Gates attended a game there, a fan at Dodger Stadium would have an average net worth of at least 66,000,000,000/56,000 > 1,000,000.

Now, suppose Bill Gates made his way into Michigan Stadium, which holds 107,501. In this case, fans have an average net worth of 66,000,000,000/107,501, which is about 613,948. This is an impressive amount of money but a bit disappointing if one had hoped for the millionaire average of the MLB game! But this assumes that everyone other than Bill Gates has a net worth of 0. What would it take to raise the average to a million?

Take a moment and reflect on this question. It’s admittedly a bit mean! To reach the million, the average must increase by about 386,052. So, we need to find x such that (66,000,000,000 + 107,500x)/107,501 = 1,000,000. Solving gives x of about 386,056; that is, the average net worth of everyone else in the stadium must be roughly 386,056, which is a pretty healthy sum of its own.
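Here is the arithmetic as a short check, using the 66-billion-dollar net-worth figure quoted from Wikipedia above:

```python
gates = 66_000_000_000   # Bill Gates' 2012 net worth, per Wikipedia
dodger = 56_000          # Dodger Stadium capacity
michigan = 107_501       # Michigan Stadium capacity

# Averages when everyone other than Gates is worth $0:
avg_dodger = gates / dodger        # comfortably over a million
avg_michigan = gates / michigan    # about 613,948

# Net worth x for each of the other 107,500 fans so that the
# Michigan Stadium average reaches a million:
target = 1_000_000
x = (target * michigan - gates) / (michigan - 1)
# x comes out just over 386,000 -- nearly the whole gap, since
# one fan out of 107,501 barely moves the average on his own.
```

Note that x is almost exactly the 386,052 increase needed in the average: with a crowd that large, Gates is only one data point, so everyone else must carry nearly the entire raise themselves.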

Now, what if Bill Gates walks in a bar?

This blog entry was inspired by @MathPlus’s tweet, “‘If Bill Gates walks into a bar, everyone [in the bar] is on average a millionaire.’ Roberton Williams #statistics #joke #Microsoft #fb” Thanks also to the Charlotte Teachers Institute fellows in my Entertaining Math seminar for encouraging me to write this entry.

Wednesday, September 19th, 2012

Olympic Speed Limit

Just over a month ago, Usain Bolt electrified the Olympic track and field stadium in London as he won a second gold medal in the 100-meter dash. The New York Times published a stunning interactive graphic, entitled One Race, Every Medalist Ever, in which every Olympic medalist in the 100-meter sprint races against the others. How far ahead of Jesse Owens or Carl Lewis would Bolt have finished?

Below, we see a table of the times in the 100-meter dash that won the Olympic gold medal between 1896 and 2012. Warning – it’s a long table, but this makes it easy to cut and paste.

Year Time
1896 12
1900 11
1904 11
1906 11.20
1908 10.80
1912 10.80
1920 10.80
1924 10.60
1928 10.80
1932 10.30
1936 10.30
1948 10.30
1952 10.79
1956 10.62
1960 10.32
1964 10.06
1968 9.95
1972 10.14
1976 10.06
1980 10.25
1984 9.99
1988 9.92
1992 9.96
1996 9.84
2000 9.87
2004 9.85
2008 9.69
2012 9.63

Now, let’s plot these times. We see variability but also a clear overall trend of improving times.

Suppose we model the decreasing times with a line and use least squares to estimate the rate of improvement. As seen above, we find that the winning time y in year x is approximately y = -0.0133x + 36.31.

Clearly, there is some limit at which a human can no longer run any faster. For instance, the 100-meter dash will never be completed in 2 seconds. However, where could the limit be between 2 and 9.63 seconds? John Brenkus, in his book The Perfection Point, considers such a question. Brenkus analyzes four distinct phases of the race: reacting to the gun, getting out of the blocks, accelerating to top speed and hanging on for dear life at the end. He lays out his analysis of why 8.99 seconds is the fastest 100-meter dash that can ever be run.

If we assume our least-squares line continues as the trend of improvements in speed in the 100-meter dash, then when would we reach the speed limit for this race? This involves solving:

8.99 = -0.0133x + 36.31,

which implies x = 2054.14. In fact, if you don’t round the slope and y-intercept of the least-squares line to two decimal places, you find x is approximately 2059.84. Therefore, we would reach the limit of speed in the 2060 Olympics! It’s just a model, so time will tell. But either way, we have exciting and memorable moments awaiting us.
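The fit is simple enough to redo from scratch. This sketch computes the least-squares line directly from the table above (taking the 1900 time to be 11 seconds) and then solves for the year the line reaches Brenkus’s 8.99-second limit:

```python
# Gold-medal 100 m times (year, seconds), as tabulated above.
data = [(1896, 12.0), (1900, 11.0), (1904, 11.0), (1906, 11.20),
        (1908, 10.80), (1912, 10.80), (1920, 10.80), (1924, 10.60),
        (1928, 10.80), (1932, 10.30), (1936, 10.30), (1948, 10.30),
        (1952, 10.79), (1956, 10.62), (1960, 10.32), (1964, 10.06),
        (1968, 9.95), (1972, 10.14), (1976, 10.06), (1980, 10.25),
        (1984, 9.99), (1988, 9.92), (1992, 9.96), (1996, 9.84),
        (2000, 9.87), (2004, 9.85), (2008, 9.69), (2012, 9.63)]

# Standard least-squares formulas for slope and intercept.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)      # about -0.0133
intercept = (sy - slope * sx) / n                      # about 36.3

# Year at which the fitted line reaches the 8.99-second limit:
year_limit = (8.99 - intercept) / slope                # near 2060
```

Keeping full precision in the slope and intercept is exactly what moves the answer from 2054 to roughly 2060, as noted above.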

Tuesday, August 14th, 2012

Adding some texture

In the last post, we began a series on creating computer graphics with POV-Ray. In that post, we constructed balls of various solid colors like that below.

Now, let’s wrap textures around them. For example, the image:

can be created from the code:


#include "colors.inc"
#include "textures.inc"
#include "skies.inc"

camera {
  location <0, 1.5, -4>
  look_at <0, 1, 0>
}

// the ball; its texture line is the one we alter below
sphere {
  <0, 1, 0>, 1
  texture { }
}

// the checkered floor
box {
  <-3, -.1, -1>, <3, 0, 6>
  pigment { checker Red White scale .5 }
}

// two light sources: one full white, one softer gray
light_source { <100, 150, -200> color White }
light_source { <100, 800, 0> color Gray50 }

// a huge sky sphere containing the entire scene
sphere {
  <0, 0, 0>, 1
  texture { pigment { Blue_Sky3 } }
  scale 100000
}

A few things to notice about the code. First, notice that there are two light sources. One is full white and the other is a softer gray. Images usually have a more natural look with more than one light source. Studios often use more than one light, along with reflective material, to soften shadows in a similar way.

Second, notice how the objects, light sources, and camera are placed inside a larger sphere. That sphere has a Blue_Sky3 texture, which gives the look of the sky. As such, the entire scene is contained in a large spherical universe.

Finally, at the beginning of the code, we include various libraries, which let us refer to existing definitions such as White and Blue_Sky3 within the program.

Now, take the line that reads:


and alter it to read:


This simple change creates the image:

Now, change this same line to read:


and you get:

Finally, have the line read:

texture{Peel scale 0.3}

and you get the Escher-like image:

Try changing the value of 0.3 and see what happens. Want to try more? You can find many more POV-Ray textures online. Remember that you may need to import a library. Here are a few you may wish to try:

PinkAlabaster, Blood_Sky, Lightning2 or NBwinebottle

Remember, spelling counts and so does capitalization! Keep in mind, you can change the texture of the sky, too!

Wednesday, July 25th, 2012

Become your own Pixar

Computer generated images are created, as clearly indicated by their name, on computers. How do computers generate such lovely pictures? A key is math modeling – underneath such programs are models of the physical world that determine the color of a pixel on the screen based on the light and the physical characteristics of the objects in the scene.

This is the first of a series of blog postings in which we will learn to use POV-Ray, a free software package that creates ray-traced images. You will need to download the software to create the images in this posting. Note that, at the time of this posting, there is only a version that works on Windows.

This blog follows how I’ve taught POV-Ray in 50 minutes to my students semester after semester, with many noting it as one of their favorite topics. I dare say this is not simply because it isn’t going to be on any of their tests! This particular posting takes about 15 minutes in class.

In this posting, we’ll learn to construct the image below.

Can you imagine making this yourself, with your own artistic decisions, moments from now? If you hesitate because you struggle with drawing, keep in mind that a great advantage of ray tracing is that the burden of drawing is put on the computer! Ray tracing can require millions and even billions of complex mathematical calculations. All you do is create a source file, which is simply a text file.

For example, the code below created the picture we saw above:

// Filename: spheres_color.pov

// Point the camera straight ahead
camera {
  location <0, 1, -6>
  look_at <0, 1, 2>
}

// yellow sphere
sphere {
  <0, 2, 2>, 1
  texture {
    pigment { color red 1 green 1 }
    finish { phong 1 }
  }
}

// A magenta sphere to the right of the yellow sphere
sphere {
  <3, 1, 2>, 1
  texture {
    pigment { color red 1 blue 1 }
    finish { phong 1 }
  }
}

// A cyan sphere to the left of the yellow sphere
sphere {
  <-3, 1, 2>, 1
  texture {
    pigment { color green 1 blue 1 }
    finish { phong 1 }
  }
}

// The checkered floor
plane {
  <0, 1, 0>, 0
  pigment { checker color red 1 green .5 color blue .8 red .5 green .5 }
}

// A white light in front of the camera, to the right and above
light_source {
  <3, 3, -3> color red 1 green 1 blue 1
}

First, consider the code:

camera {
  location <0, 1, -6>
  look_at <0, 1, 2>
}

This puts the camera at the coordinates (0,1,-6). Regardless of where you place the camera, it will be pointed at the point (0,1,2). In order to create our first POV image, we need to understand the coordinate system used by the program. You will place many objects into 3D space using POV. Looking at your display, the x-axis is horizontal and the y-axis is vertical with positive x and y running to the right and up, respectively. Finally, positive z goes into the display and away from the viewer.

Now, let’s change the location of the camera. Let’s change

location <0, 1, -6>

to read

location <0, 10, -6>

What just happened? We moved the camera up to y = 10. Here is the resulting image:

Now, let’s change this line to read

location <10, 1, -6>

which results in the image

Let’s return the line above to read:

location <0, 1, -6>

and find the following code:

sphere {
  <0, 2, 2>, 1
  texture {
    pigment { color red 1 green 1 }
    finish { phong 1 }
  }
}

This places a sphere of radius 1 with its center at (0,2,2). It is colored with the full intensity possible in both red and green, which creates the yellow sphere in the image. Let’s change the radius of the sphere to 2, which involves making the second line of code read:

<0, 2, 2>, 2

This results in the image:

Now, let’s return to a sphere of radius 1 but change the center to be at (0,0,2) by writing the line:

<0, 0, 2>, 1

This creates the image:

Now, let’s move the center to (0,2,2). Can you do this? It should create the following image:

Finally, let’s change the color of the sphere. We change the code:

texture {
  pigment { color red 1 green 1 }
  finish { phong 1 }
}

to read

texture {
  pigment { color red 0.5 green 1 }
  finish { phong 1 }
}

which reduces the amount of red intensity in the color to half its possible intensity.

With these tools, you can begin to create your own images. Here are some exercises, although you may find more interesting directions!


  • Experiment with other positions for the camera.
  • Change the position and even radius (there is documentation in the code) of one of the other spheres.
  • Alter the coloring and phong (keep the numbers between 0 and 1) of the objects. See what happens as you alter these attributes. What does phong control?
  • Experiment with the light_source command. What happens if you remove it entirely?
  • Try adding more spheres and see what you can create.
  • Try stacking the spheres on each other.

Have fun creating images with math and computing!
