It’s Not Just for Snapping Pictures
As a heat sensor, microscope, telescope, or device for tracking faces, a camera becomes a versatile teaching tool.
By Chris Rogers
Students are confident using a camera, making it an ideal classroom tool. They use their phone cameras to exchange notes, grab a snapshot of the mathematical derivation on the board, film the steps the teacher is taking on the computer, grab a contact from someone else’s phone, or just share with their friends what they are doing. Documentation stations within the classroom get students to record not only their final product but also the journey they took to get there—one of the toughest things to measure as a teacher. Add software, and a camera becomes even more convenient. My latest use: Put whiteboard material on your laptop (www.thinkboardx.com) so you can take notes with a pen, and then snap an image with Rocketbook software (www.getrocketbook.com). It gets properly filed away (digitally) before you erase it and start again. You’re effectively making digital notes with an old-fashioned pen, with no need to open your computer and get sidetracked.
Yet what makes the camera an impressive and versatile teaching tool for engineering is its capability as a sensor, and here this ubiquitous device has yet to make a big dent in the classroom. Today’s cameras can sense much more than visible light. For example, Flir cameras (www.flir.com) can get students of all ages discussing, questioning, and analyzing heat transfer on themselves, their peers, or their environment. Edgertronic cameras (www.edgertronic.com) let students see in slow motion at an affordable price—as slow as 25,000 frames a second. Microscopes with cameras open up the world of the very small, and telescopes reveal the world of the very big. Cameras are now being combined with intelligence to provide augmented-reality images that even allow you to “see the unseen”: from Wi-Fi packets being sent between a computer and a router, to what a robot is thinking.
A decade ago, cameras that could think as well as see (cameras with embedded processors) were prohibitively expensive, but advances in microprocessors and machine learning have dramatically lowered their price. Now a Raspberry Pi Zero (www.raspberrypi.org) with a camera costs under $50. With the growth of cloud intelligence, the processor does not need to be particularly powerful; all it needs is a Wi-Fi connection to one of many machine-learning cloud services.
One particular favorite of mine is the OpenMV camera (www.openmv.io)—a $70 camera that runs MicroPython and has an impressive array of capabilities, from reading QR codes to optical flow, to tracking faces, to machine learning. For a little more money, you can add a Wi-Fi board, a screen, or a servomotor controller, although my students have instead connected it to everything from the NI myRIO to LEGO hardware to develop robots with vision. For instance, with just a few lines of code, one can develop a LEGO-based dice thrower and have it run overnight determining whether the dice are fair. Mounting the camera on a few servomotors so it actively tracks a red ball (for physics class) or a human face (for robotics class) is fairly easy. We have played with developing smart trading cards (QR codes) that can be used to “program” a robot—present the robot with the sequence of actions you want it to do. And finally, using this little gem, my students are currently developing robots that play card games with young kids (tired of playing Uno?).
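To give a feel for how little code the overnight dice experiment needs, here is a minimal sketch of the fairness check in plain Python. Everything here is my assumption, not the original project: the camera is presumed to log one pip count per throw, the function names are made up, and 11.07 is the standard chi-square critical value for a six-sided die (5 degrees of freedom) at the 5 percent level.

```python
def chi_square_statistic(counts):
    """Chi-square statistic for observed face counts against a
    uniform (fair-die) expectation."""
    total = sum(counts)
    expected = total / len(counts)  # a fair die hits each face equally often
    return sum((c - expected) ** 2 / expected for c in counts)


def looks_fair(counts, critical_value=11.07):
    """True if we cannot reject fairness at the 5% level.

    11.07 is the chi-square critical value for 5 degrees of freedom
    (a six-sided die) at alpha = 0.05.
    """
    return chi_square_statistic(counts) < critical_value


# Made-up counts of faces 1-6 after an overnight run of 1,000 throws.
overnight_counts = [168, 172, 159, 175, 161, 165]
print(looks_fair(overnight_counts))  # prints True: no evidence of bias
```

A student who wants a p-value instead of a pass/fail answer could hand the same counts to a statistics library, but the point stands: the vision part of the project does the hard work, and the analysis fits in a dozen lines.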
Someday, computers will probably be able to do what our brains now do easily, like identifying objects in images regardless of lighting conditions, or estimating relative distances and sizes of objects with enough accuracy that they can throw and catch. Students of all ages will be able to turn low-cost cameras into smart sensors that can measure everything from velocity to volume and communicate with the cloud or with robots.
They might also just snap a photo.
Chris Rogers is a professor of mechanical engineering at Tufts University.