A SHOUT OUT at the State of the Union address tells a federal agency its work finds favor at the highest levels. So when President Obama lauded brain research as an example of how the government should “invest in the best ideas,” National Institutes of Health program directors were thrilled, thinking he meant their current efforts to map the static brain for answers to Alzheimer’s and other ills. In fact, Obama had a bigger challenge in mind for NIH: a $3 billion, 10-year initiative to plot the complex activity of some 100 billion cells that make up the typical human brain — an undertaking that could yield the same broad technological breakthroughs and economic returns as the Human Genome Project or America’s war on cancer.
“The brain is the last great frontier,” says Story Landis, head of the NIH National Institute of Neurological Disorders and Stroke (NINDS), whose agency may collaborate on the Brain Activity Map (BAM) with the National Science Foundation, Defense Advanced Research Projects Agency, and other government research arms. “It’s what makes us human, how we think, how we write poetry. And the burden of disease that affects the brain is pretty extraordinary,” she told USA Today.
Equally extraordinary is the potential reward from mapping this frontier, a reason the National Academy of Engineering made reverse engineering the human brain one of the profession’s 21st-century Grand Challenges. By figuring out the brain’s inner workings, engineers can simulate its activities and develop new drugs for mental disorders and smarter prosthetics for amputees. Studying neural circuitry may one day spawn implants that work around damaged tissue to halt dementia’s memory loss or let blind people see. BAM could yield advances as well in artificial intelligence, robotics, and manufacturing. Learning how the brain learns also has implications for the design of supercomputers that can process multiple streams of information rather than today’s one-step processes.
While critics have questioned devoting $300 million a year to this one initiative, the quest to develop a Brain Activity Map has generated some of the excitement and urgency of the space race. The federal money, if approved by Congress, would gain added leverage from privately funded and overseas research efforts. Earlier this year, the European Union announced a $1.5 billion, decade-long initiative led by Swiss researchers to build a supercomputer simulation of the human brain based on the inner workings of its molecules, neurons, and circuits. Meanwhile, several private ventures — including a nonprofit research institute founded by Microsoft cofounder Paul Allen, who lost his mother to Alzheimer’s disease — have invested heavily in figuring out how the brain is wired.
The engineering and scientific challenges of mapping the brain are enormous, calling to mind the words of James Watson, codiscoverer of DNA: “The brain boggles the mind.” Reverse-engineering means dismantling something to find out how it works and then building copies. That may work for a machine, but the human brain is no bucket of bolts. It has circuitry and a complex wiring pattern honed by evolution. At the most basic level — neurons — a nerve impulse travels down the cell’s long, tail-like axon and triggers the release of chemicals called neurotransmitters into a synapse, the space between neurons. Those chemicals then spur another neuron to fire, and thus the signal passes from one cell to the next.
Neuroscientists traditionally have used electrodes to detect the activity of one or several neurons within a particular region. Since brain circuits involve millions of cells, however, these methods fall far short, noted the six researchers who first proposed the BAM project in a June 2012 article in the journal Neuron.
It has taken a decade for NIH researchers to develop a “circuit diagram” of the roundworm’s nervous system, which has some 300 neurons that make about 7,000 connections. BAM requires a quantum leap beyond that into territory that until recently was the preserve of science fiction writers. As NIH Director Francis Collins noted in a PBS NewsHour interview, each of the human brain’s 100 billion neurons has about 10,000 connections. Developing tools and techniques to track where the mind’s roughly 1,000 trillion connections occur — and recording every firing of every neuron in every circuit — is like untangling an incredibly complex bowl of spaghetti. And mapping is not simply a matter of seeing which neuron connects where. How does gray matter code and store information? Understanding how the brain cements or retrieves information, or goes haywire in people with mental illness, autism, or depression “is not going to be an overnight experience,” says Collins.
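The scale Collins cites is easy to verify with back-of-envelope arithmetic. A quick sketch, using the round figures quoted above:

```python
# Back-of-envelope scale of the mapping task, using the figures quoted above.
neurons = 100e9             # ~100 billion neurons in a human brain
synapses_per_neuron = 1e4   # ~10,000 connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.0e} connections")  # 1e+15, i.e. ~1,000 trillion

# The roundworm circuit mapped over the past decade, for comparison:
worm_connections = 7_000
print(f"~{total_connections / worm_connections:.0e}x larger than the worm map")
```

That ratio, roughly a hundred billion to one, is why the proposers framed BAM as a tool-building project rather than a straightforward scale-up.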
No one disputes the strides made already as a result of years of interdisciplinary research. With tools developed by engineers, like the functional magnetic resonance imaging machine (fMRI) and micro-endoscopes, science can now address such fundamental questions as “Why do we sleep?” or “What is memory?”
Neuroscientists can peer into the active brain as someone learns, remembers, sees, or snoozes. With fMRI scans that show blood flow, they can glimpse the different areas of the brain that light up, say, for strong readers versus those with dyslexia. Diffusion tensor imaging (DTI), a variation of conventional scans that tracks the movement of water molecules in the brain and can detect subtle wiring problems in axons, has been used to study addiction, schizophrenia, and traumatic brain injury.
Advances in computer science have paid off in algorithms used in speech recognition technologies and automated “seeing” machines used in factories. Indeed, engineering and neuroscience discoveries “go hand in hand,” yielding hundreds of technologies, says bioengineer Kip Ludwig, who oversees brain-research technologies as a NINDS program director. He cites devices to record extracellular and intracellular voltage levels, neurotransmitter detectors, and specialized microscopes and voltage-sensitive dye that can display the activity of hundreds of neurons. In the collaboration, “engineers design new tools for neuroscientists to use to study the brain and neuroscientists say, ‘This is good, but what we really need now is a tool that will do ABC,’ and ask engineers for tools to do that.”
Mark Schnitzer, an associate professor of biology and applied physics at Stanford University and a Howard Hughes Medical Institute (HHMI) investigator, worked with his team to develop a system that “reads” the minds of mice as they run around an enclosure. The rodents have genetically engineered neurons that express a fluorescent protein when the brain cells fire, releasing calcium ions. By implanting a micro-endoscope connected to a camera chip just above the hippocampus, an area of the brain sensitive to the environment, researchers can monitor the firing of hundreds of neurons in near real time in the living, behaving mouse. By looking at these lights — they resemble random bursts of little green fireworks on the computer display — “we can literally figure out where the mouse is,” says Schnitzer, noting that different neurons would fire at specific spots. In essence, the mouse’s brain made a representational map of its space. Schnitzer’s team has linked that activity to long-term information storage — offering a potential tool for studying new therapies for Alzheimer’s and other brain diseases.
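The core idea, that the animal's location can be read out from which neurons fire, can be caricatured in a few lines. This toy sketch is not Schnitzer's method: the firing patterns below are invented, and the real analysis is statistical and far richer.

```python
# Toy illustration of "reading" a mouse's position from neural activity.
# All data here is invented for illustration.

# Pretend we previously recorded which neurons fire at each known location.
reference = {
    (0, 0): {1, 4, 7},   # location -> set of neurons observed firing there
    (0, 1): {2, 4, 9},
    (1, 0): {3, 5, 7},
    (1, 1): {2, 6, 8},
}

def decode(active_neurons):
    """Guess the location: pick the reference spot whose firing pattern
    overlaps most with the observed one (Jaccard similarity)."""
    def score(loc):
        ref = reference[loc]
        return len(ref & active_neurons) / len(ref | active_neurons)
    return max(reference, key=score)

print(decode({2, 4, 9}))   # exact match with the (0, 1) pattern
print(decode({3, 5, 9}))   # noisy reading; closest pattern is (1, 0)
```

Because different neurons reliably fire at different spots, even this crude overlap score recovers position; the actual experiments track hundreds of neurons and tolerate far noisier signals.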
Terry Sejnowski is another brain-research pioneer. For years, the HHMI investigator and neurobiology professor at the University of California, San Diego (UCSD), has tried to bridge the gap between molecules and systems. Starting his career as a physicist, he found it harder and harder to collect data without big instruments. By contrast, neuroscience offered many interesting problems to tackle. So he signed up for a neuroscience course at the Marine Biological Laboratory in Woods Hole, Mass. “That was a turning point,” says Sejnowski, who went on to get a postdoctoral position, enjoying the lab work so much he switched fields.
Now at the Salk Institute for Biological Studies in La Jolla, Calif., Sejnowski directs a lab focused on understanding the way brain mechanisms – such as how neurons communicate at the synapses or what regulates the flow of information to the cortex — link to behavior. Unlike many labs, however, his laboratory looks at the brain at multiple levels. Some teams examine the synapse, while others look at the ion channels or a whole system. They also collaborate with researchers elsewhere to figure out, for example, why we sleep. Behind this big-picture approach is Sejnowski’s fascination with how different gray matter is from computers. “When you look at the brain, you realize this is not the way an engineer would design the system,” he says in his HHMI profile. “The brain is redundant, massively parallel, and regenerative,” with neurons dedicated to one activity able to be recruited for new uses.
The approach has produced significant breakthroughs, including a state-of-the-art cellular simulation program called MCell. A collaborative effort between the Pittsburgh Supercomputing Center and the Salk Institute, with support from the NIH, HHMI, and NSF, MCell took more than 15 years to develop and allows researchers to account for every molecule and protein inside and outside a cell, and to document their activities by the microsecond. Sejnowski’s lab also has forged fruitful collaborations to probe why humans sleep. Matching EEG measurements with fMRI imaging and in vitro studies, for example, has revealed patterns of nerve impulses that correlate to sleep patterns and has yielded new algorithms to automatically classify sleep states from single EEG readings. Analysis of spike patterns overturned the long-held assumption that the most important information transmitted in the visual cortex, the section of the brain responsible for vision, was the total number of times the neurons fired. In fact, the timing of those neural spikes is important in coding and transmitting information. Ironically, notes Sejnowski, researchers know more about how the brain learns than how it pays attention.
Deciphering the brain’s biology can lead to new technologies, and vice versa. “Once you’ve figured out a principle,” explains Sejnowski, “it becomes possible to build on that principle and develop practical devices.” For example, Tobi Delbrück of the Institute of Neuroinformatics at the University of Zurich and colleagues have built a camera based on the timing of neural spikes, which is how the eye’s retina processes images for the brain. The process of engineering a device, in turn, helps confirm that basic understanding. “If all you have is a bunch of simulations, then you have a shadow of what’s there,” says Sejnowski. “If you can actually build a device that uses the same principles, and the device works in the same way that the complex system does, then you’ve achieved a better understanding of that process.”
Genetic Software Decoded
At the Allen Institute for Brain Science, launched in 2003 with $500 million from Paul Allen, researchers have developed cutting-edge technologies to decode the mind’s genetic software, not just its wiring. The Seattle-based nonprofit has a mission to accelerate the progress of neuroscience research by generating publicly available databases. “Generally, the best discoveries in biology are made by connecting many data sets,” notes Allen Institute researcher Michael Hawrylycz, director of the modeling and analysis groups, “so you try to link them up.” In 2006, Allen Institute researchers completed a genetic atlas of the mouse brain. Last November, they published an atlas of the human brain. Current efforts include using a laser to peer into a live mouse’s mind through a surgically implanted glass window in its skull, capturing images that could illuminate how nerve impulses from the eyes become behavior as it runs.
The institute’s broad goal is to generate data in a very comprehensive and systematic way, so that the entire neuroscience community “can have the benefit of access… without having to do the experiments themselves,” says Allen Institute neuroscientist Ed Lein. “Generating any single piece of this data could occupy months to years of a particular researcher’s time to do. And then most of that data actually doesn’t become public. So the concept here is based on the model of the genome projects: Do it once, very, very well—and make it completely open access.”
To map the mouse brain’s genes, Allen Institute scientists took tissue sections and then looked for these expressed genes using a technique called in situ hybridization. This involves binding labeled probes to the tissue. The probes attach themselves to bits of RNA, which indicate the genes being expressed in those cells. In situ hybridization allows researchers to capture images of gene expression and then plot each expression’s physical location in the brain.
Since the human brain is 1,000 times as large as that of a mouse, the mapping process was even more complicated. Allen Institute researchers acquired two “control” brains for analysis — ones that were as normal as possible. That in itself was a tall task: If a person suffered from disease or took drugs, it could alter the brain’s circuitry or chemistry. Taking tissue samples, the researchers used microarrays containing probes for many possible RNA sequences. (Each brain had 1,000 different structures from which 20,000 genes were assayed.) The results then were mapped back onto coordinates of the brain obtained earlier by magnetic resonance imaging.
“What’s really unusual about the data set,” says Hawrylycz, “is that it’s so broad anatomically that for the first time we can look at brain-wide expression patterns.” For example, researchers found genetic signatures for various regions of the brain, including patterns in the cortex, the wrinkly outer layer that does much of our higher-order information processing. They now can pinpoint which part of the cortex a tissue sample came from just by looking at the pattern of genes expressed. If researchers find that particular genes active in some regions are also active in others, “that has implications for how diseases may be treated,” Hawrylycz explains. “So we try to understand the global relationships in the brain.” Since completion of the brain atlas, the institute has acquired four more brains, generating enormous amounts of data. Researchers hope to determine the common signatures among the six, and perhaps expand into examining diseased brains.
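Pinpointing a cortical region from its expression pattern is, at heart, a classification problem. Here is a toy sketch of the idea with invented three-gene signatures; the atlas work involves roughly 20,000 genes and far more careful statistics.

```python
# Toy version of identifying a cortical region from gene expression.
# Signatures and sample values are invented (arbitrary units).

signatures = {
    "visual cortex":   [5.1, 0.8, 2.2],  # average expression per region
    "motor cortex":    [1.2, 4.7, 0.9],
    "temporal cortex": [2.0, 1.1, 4.5],
}

def classify(sample):
    """Assign a tissue sample to the region whose signature is nearest
    (smallest squared Euclidean distance)."""
    def dist(region):
        return sum((a - b) ** 2 for a, b in zip(signatures[region], sample))
    return min(signatures, key=dist)

print(classify([4.8, 1.0, 2.0]))  # nearest to the visual-cortex signature
```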
“Neuromorphic engineering” is the name given to the emerging field of using what neuroscientists learn about the brain’s architecture to improve computer simulations and thinking devices. IBM, creator of Deep Blue, the supercomputer that defeated chess grandmaster Garry Kasparov in 1997, and of Watson, which bested two Jeopardy! champs, is building on its 2011 development of a cognitive computing microchip. Created in collaboration with DARPA’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, project, the TrueNorth chip uses advanced algorithms and silicon circuitry to “simulate the phenomena between spiking neurons and synapses in the brain,” according to the company’s announcement. Last November, IBM Research announced it had used the chip to simulate 530 billion neurons and 100 trillion synapses by mimicking the connectivity of a monkey’s brain. IBM Research and four university labs now are working “to bring together neuroscience, supercomputing, and nanotechnology to create a radically different computer architecture that mimics the function, low power, small size, and real time of the human brain,” electrical engineer Dharmendra Modha, manager of cognitive computing at IBM’s Almaden Research Center in San Jose, Calif., explains in an IBM video.
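The “spiking neurons” such chips simulate are typically described by simple models like the leaky integrate-and-fire neuron. The following is a generic sketch of that textbook model, not IBM’s actual TrueNorth design:

```python
# A minimal leaky integrate-and-fire neuron, the textbook abstraction
# behind spiking-neuron simulations (a generic sketch, not TrueNorth).

def simulate(inputs, leak=0.9, threshold=1.0):
    """Each time step: leak a fraction of the membrane potential, add the
    input current, and emit a spike (resetting to 0) at threshold."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current
        if v >= threshold:
            spikes.append(t)  # the neuron fires
            v = 0.0           # reset after the spike
    return spikes

# A steady, weak input charges the neuron until it fires periodically.
print(simulate([0.3] * 12))  # → [3, 7, 11]
```

Each chip neuron is trivial on its own; the engineering feat is wiring billions of them, as in the monkey-scale simulation above, with brain-like connectivity and power budgets.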
Despite such achievements, the era of brain-in-the-box learning systems remains a long way from becoming reality. “We have not built a biologically realistic simulation of the complete human brain,” IBM’s TrueNorth team cautioned in a paper. While experiments involving neuromorphic networks have included learning to play the game of Pong, the knowledge was gained offline — not acquired on the fly, as humans learn.
The point, Modha says, is to draw inspiration from the brain, not construct an artificial one. In the world of cognitive computing, that means designing networks capable of parallel, short, complex thinking. For Ralph Greenspan, associate director of UCSD’s Kavli Institute and one of the six researchers who first proposed a large-scale, public Brain Activity Map project, the key to “functional connectomics” lies in designing novel imaging techniques and nanoprobe-sensing tools. The project, which emerged from a 2011 conference hosted by his institute, could help answer questions such as how humans move their fingers or understand economics. Meanwhile, Greenspan told local public radio station KPBS, the “fancy, science fiction-y new kinds of detectors” needed to record and analyze all that neuroactivity could produce technological spinoffs that have nothing to do with the brain.

So far, engineers have “done incredible proof-of-concept demonstrations,” says NIH’s Ludwig, but commercially viable products — like a durable, inexpensive thought-controlled prosthesis as opposed to a multimillion-dollar, fragile prototype — are in some cases years away. Still, the potential job creation clearly fires the president’s imagination. In his State of the Union speech, he noted that every dollar spent on the Human Genome Project returned $140 to the U.S. economy.
You don’t need a map to find those connections.
Mary Lord is deputy editor of Prism. Corinna Wu is a print and radio journalist specializing in science.
Among pet and wildlife lovers, few topics incite more fury than the use of animals in laboratories. But federal regulators and many scientists agree that mice, along with rabbits, dogs, and even horses, are preferable to human guinea pigs for risky experiments. And just as medical researchers resort to animal tests before trying out new drugs or vaccines on patients, biomedical engineers conduct similar trials to help perfect medical devices, tissue regeneration techniques, and even plastic surgery tools like TissuGlu®. An adhesive that reduces the need for drainage after tummy tucks, TissuGlu is the product of a start-up, Cohera Medical Inc., cofounded by University of Pittsburgh chemical engineering professor Eric Beckman. The material began trials involving 150 patients only after it was tested on dogs. Beckman is on the faculty of the McGowan Institute for Regenerative Medicine at the University of Pittsburgh, one of hundreds of academic settings in the United States where animal use is commonplace.
“Currently, [animal testing is] a necessity if you want to get through the FDA and get your product to market,” says Kristen O’Halloran Cardinal, an associate professor of biomedical engineering at California Polytechnic State University, referring to the U.S. Food and Drug Administration approval process. Though not engaged in animal testing herself, Cardinal says, “There’s just a lot of information you can get from animal models that you can’t get anywhere else.”
Soon, however, there may be an alternative—and one developed by engineers. At the University of Maryland’s Clark School of Engineering, Cal Poly, and elsewhere, researchers are developing human tissue in a lab, opening up a new pathway for testing medical therapies without involving patients—or subjecting animals to what some consider inhumane treatment. To Andrew Rowan, chief scientific officer of the Humane Society of the United States and CEO of Humane Society International, this kind of research is the wave of the future. He predicts that together with other changes in practice brought about by regulation, the new technological advances will result in an abolition of invasive testing by 2025 and all testing by 2050.
Life on a Chip
For the best data, human cells trump animal ones, says William Bentley, founding chair of Maryland’s Bioengineering Department. So, he and his team are helping develop a way to mimic our organs and bodily systems without any danger to humans. Real cells are grown in a lab and placed, in a specific order, on computer chips. “If you can get human cells and human tissues to function on a chip, then that’s what you want. That’s the goal,” he says, since it eliminates a lot of the variables with an animal model. Pioneered by Michael Shuler, biomedical engineering chair at Cornell, the “human on a chip” concept is being pursued by researchers at the Massachusetts Institute of Technology and other universities. But much of the essential ground-level research is being accomplished by Bentley’s research team, which is part of a Biochip Collaborative. His group studies how bacteria communicate with each other through molecules, a process called quorum sensing. It is discovering how to disrupt this communication when the bacteria are near human cells on a chip.
The result? Bentley’s work might eventually replace the use of antibiotics and stop drug-resistant bacteria in their tracks. While there are limitations, such as testing metabolism, immune system response, and behavioral changes, he says, “the promise [of the research] is so large” that “we gotta go there.” He doesn’t necessarily think the end result of his work will replace animal testing, but he contends that it will be cheaper, which would be enticing for pharmaceutical companies and institutions. And reducing animal-based research is “clearly a big motivation.”
A different approach is being taken at Cal Poly, where Cardinal is building blood vessels from scratch. The many uses of her bioreactor-produced blood vessels include direct grafting into the human body and testing whether coronary stents are tolerated by the endothelial cells that make up the vessel walls. Stents are used to keep an artery passage open following a heart attack. Now cell response to the device can be measured in a laboratory setting without the need of a living subject.
Typically, stents would be tested inside the artery of a rabbit’s leg, since it is fairly similar in size to a human coronary artery, but this invention could eliminate that need. “We don’t think these will totally replace the need for animal models,” Cardinal says. “We’re not there yet.” But she thinks “it definitely has potential.”
Cardinal has her students wrestle with the pros and cons of animal testing in a 10-week graduate-level class on the FDA process for approving medical devices. “The ethical concern with taking animals’ lives is what frames the argument.” Is the final product worth hurting animals if it has the potential to save human lives? What are the arguments against it? But she doesn’t tell her students if the practice is right or wrong. Instead, she lets them decide. “It’s important for the students to come to their own opinion on that,” she says.
In the here and now, even researchers who would prefer to abandon animal tests find their hands tied. While those in the biomedical fields “feel guilty,” according to Rowan, and “tend to get defensive,” since their work does result in the deaths of animals, the FDA requires that all chemical entities, in order to be approved for human trials in the United States, must be tested on animals first. This includes tests on drugs, vaccines, and surgical adhesives.
At Maryland, Bentley is part of an effort to modernize the regulatory process for medical research. He is a leading investigator at one of two Centers of Excellence in Regulatory Science and Innovation. The other is at Georgetown University. As part of a collaborative effort among the universities, corporations, and the FDA, the centers aim to improve rules for evaluating drugs and medical devices and help the process keep up with new areas of science like cell therapy and nanotechnology.
Gaps in Regulation
Mice and rats account for an overwhelming proportion of laboratory animals. Scientists at the Jackson Laboratory, the Bar Harbor, Maine-headquartered institution that breeds mice for researchers worldwide, say the rodents are ideal for drug efficacy testing and modeling complex human diseases ranging from atherosclerosis to glaucoma, neuromuscular disorders, and cancer. Among other purposes, mice are used in the early phases of testing to determine whether or not a substance is toxic. These animals are never given pain medication, and Rowan says they “always suffer.” They’re not covered by the Animal Welfare Act of 1966—neither are birds, horses, farm animals, or invertebrates—and thus procedures performed on them are not necessarily regulated, especially when no federal funding is involved. When labs break the few laws and regulations currently on the books, those responsible are rarely brought into courtrooms, and almost never see jail time, Rowan says. But treatment of animals, he adds, is “far better in corporations than in academia,” thanks to tighter controls.
Rowan also insists that “animal data is not a gold standard,” calling it an unreliable indicator of what a substance will do inside humans. “[The tests are] not good. They’re mediocre at best.” Animal testing is also too slow to meet 21st-century needs: understanding and testing the more than 30,000 chemicals in existence, and their interactions with the human body, would simply take too long. Each chemical requires multiple trials to determine if it’s safe, and each test uses up hundreds of animals. A test for carcinogens alone requires 400 mice and rats and three years to complete, according to Rowan. Such a cumbersome process is one reason for a steady decline in animal-based research, he says.
Hoping to see a complete end to animal testing, Rowan is a staunch champion of the “human on a chip” and similar ventures. Meanwhile, Humane Society lobbyists are seeking stronger regulations against mistreatment of lab animals, and they’re working to restrict or eliminate the use of some animals—such as primates—altogether. In 2012 alone, the group helped persuade Air Canada and United Continental to refuse shipments of primates meant for lab experiments, and it was instrumental in getting Europe to drop many of its animal test requirements in pesticide regulations, the largest ever reduction of its kind. In the United States, after lobbying from the organization, Minnesota stopped all pound seizures, where lab animals are taken from shelters instead of bred, and NIH released hundreds of chimps to a special sanctuary, with more to be “retired” soon.
But at least when it comes to rodents, science is pushing in the opposite direction.
The Jackson Laboratory has created a “humanized” laboratory mouse by implanting human blood cells into animals with suppressed immune systems. And a January 2013 study published in the Proceedings of the National Academy of Sciences describes a new genetically engineered breed of mice with humanlike immune system responses. These and similar breakthroughs greatly improve the value of data derived from drugs tested in mice. As a result, there has been a recent uptick in the laboratory use of mice with altered genes, which allow researchers to better mimic human bodies without actually using them.
Jaimie Schock is assistant editor of Prism.
Like many faculty members at well-known engineering schools, Gary Lee Downey wears several hats. On campus, he teaches engineering classes and shepherds new research projects. Off campus, he serves as a consultant for such companies as Michelin North America, the U.S. operation of the world’s second-largest tire maker. But here’s an eyebrow-raiser: The Virginia Tech professor is an anthropologist. Although he earned a B.S. in mechanical engineering, his master’s and Ph.D. are in cultural anthropology.
By infusing his students with social science theory and methods, Downey aims to improve their ability to function as engineers. “There’s a widespread belief that engineering education is too rigid, and its graduates aren’t always flexible enough to work with people outside of engineering, who tend to define problems differently from those who have gone through the traditional engineering curriculum,” he says. “Our hope is to help bridge that.” He’s not the only academic attempting to span the gap. Across the country, universities have begun to leaven engineering training with exposure to workshops, courses, and research projects conducted by social scientists. Collaborators include sociologists, psychologists, economists, political scientists, anthropologists, geographers, historians, and even philosophers. While the movement has yet to sweep the engineering education establishment, it is gaining momentum. National organizations such as ABET, ASEE, the National Science Foundation, and the National Academy of Engineering now encourage such collaborative efforts, prompting more schools to move in that direction.
“It’s part of the pathway for having a bigger impact for your work,” says Rebecca Wright, a professor of computer science at Rutgers University who heads its Discrete Mathematics and Theoretical Computer Science program. “If you’re providing a solution, you need to know how it would work in the real world. Social scientists can help you.”
The strongest push for this interdisciplinary approach comes from a growing group of schools that have set up science and technology studies (STS) departments. Downey, for example, works in Virginia Tech’s STS program. These programs meld natural and social science to broaden the horizons of future engineers and influence engineering education.
The main way social scientists contribute to engineering education is by getting students to pay attention to the users of products they design. By learning how people operate machines, drive on highways, or deploy robots, engineers will develop handier, more practical items. “When I was an undergraduate engineering student, I was handed equations and told to plug data in,” recalls Jameson Wetmore, an assistant professor at Arizona State University’s Consortium for Science, Policy, and Outcomes. “But technological systems work only if they mesh with social systems.” A case in point: ASU’s Global Resolve program developed smokeless, odorless, and efficient ethanol cookstoves for African villages, only to see them sit unused. Later, a newly arrived ASU expert on international development pointed out the problem: The stoves were too small for typical households and too narrow for their clay pots. Moreover, brewing ethanol was laborious. Asking such an expert to review the design before production might have helped match the stoves to residents’ needs, Wetmore observes.
Opportunities for cross-fertilization seem boundless. Political scientists, for example, can help engineers understand the dynamics of securing government approval. Psychologists might foresee organizational problems that would have slipped under the designer’s radar, while economists bring a fresh perspective to evaluating the fiscal and energy impact of a particular engineering approach.
In his work with Michelin, Downey helped resolve sharp differences in approach between U.S. and French engineers. The Americans had proffered a plan to sell more tires, but the French were lukewarm. As Downey explained it, the U.S. engineers were taught to be problem-solvers, and so concentrated on improving mass-production techniques. The French, steeped in mathematical refinements, were intent on new tire designs. Downey worked with the U.S. design team to come up with – and pitch – a new proposal. Mathematically exquisite, it could both accommodate new designs and cut the time between design and production. Paris was enthusiastic, and the two sides began speaking the same language. “It’s an example of an engineer asking an anthropologist for help in communicating with other engineers,” says Downey.
At the very least, social and behavioral scientists can acquaint engineering students with basic investigative techniques and analytical methods that can enhance their ability to tailor designs, construct new research projects and products, or evaluate a prototype. Consider the impact of a seminar Miriam Kahn, a University of Washington anthropology professor, and a colleague conducted for Boeing engineers when the firm was vying for business in India’s expanding air-travel market. “They taught us about the ethnographic approach — observing, doing interviews, and setting up some focus groups,” says Calsee Robb, an aeronautical engineer who was Boeing’s point person on the project. The engineers then studied how Indians used railcars — the main mode of travel for most — and were fascinated to learn that passengers sat facing the center of the car and moved freely around the train socializing. While that wasn’t practical for airline cabins, Robb says, the finding led to small design changes in the passenger compartment. Inspired, Robb went on to take one of Kahn’s university courses.
On a broader scale, social scientists offer engineers a set of qualitative tools to assess risks, navigate potential political obstacles, and communicate with the public — particularly about complex global challenges. Addressing climate change or developing sustainable cities, for instance, will require a working knowledge of human behavior and foreign affairs along with a deep grasp of chemistry and civil engineering. On narrower challenges, social scientists can aid engineers in drafting plans for managing a department, designing experiments, preventing wrong moves that might cause consumers to reject a product, and advising firms on how to compute costs, repatriate profits, and decide whether to outsource jobs.
Social scientists in so-called applied fields — urban planning, for example — seem particularly compatible with engineering. Like engineers, experts in criminology, communications, energy policy, industrial psychology, and sustainability use social science techniques to solve problems. Juan Lucena, an associate professor in the liberal arts and international studies division at the Colorado School of Mines, says engineers are gradually learning that everything in the technical world has social and human dimensions, and that the two are always interconnected. “No longer can engineers say, ‘We’ll do our thing and you do yours,’” he says. “There’s no longer any simple separation.”
Rapid globalization and the advance of nanotechnology, biotechnology, and robotics heighten the need for engineers to seek out social scientists. “With the emerging technologies, we now have more complex systems to deal with, and they’re more obviously intertwined with social systems,” says Donna Riley, professor of engineering at Smith College and an ardent proponent of fortifying engineers with social science perspectives. “There’s more public awareness involved.” She points to the raging controversy over the expanding production of genetically modified foods, which Americans tend to accept matter-of-factly but which still ignites protests in Europe and the developing world. But even conventional technologies can involve increased interaction with people, Riley says. The construction of huge electric power grids, for example, requires engineers to site the networks’ major components in ways that respect both the natural environment and the layout of communities.
To be sure, fostering interdisciplinary collaboration between engineers and social scientists isn’t always easy, especially on campus, where new interdepartmental efforts of any kind can bring about collisions in approach, language, and values. Moreover, engineering students already carry such heavy course loads that convincing them to add an elective that doesn’t seem directly related to their required math and science classes can be a tough sell. Structural problems can stymie efforts, too. Faculty members in one discipline or specialty typically can’t advance by teaching in other departments — or publishing research in journals outside their field. Collaboration can also mean splitting research money several ways.
Historical prejudice lingers. M. Granger Morgan, a physicist who teaches engineering and public policy at Carnegie Mellon University, wryly recalls “the widespread notion” among physicists “that nothing goes on in the social sciences that a physicist couldn’t invent at a cocktail party.”
“That’s not true, of course,” Morgan adds. At the same time, many social scientists have had little contact with math or science beyond master’s-level statistics courses and worry they would lack the requisite depth of understanding if they joined forces with engineering departments. Morgan and others say collaboration works best if social science participants have a track record of empirical studies that produce hard results, as opposed to opinion-based papers or social commentary. “There are empirical research activities in all the social science disciplines,” Morgan observes. Which discipline is appropriate for a joint project “depends very much on the problem you’re addressing,” he says. To flourish, interdisciplinary projects require enthusiasm on both sides, institutional encouragement, and a willingness to credit each other’s contributions. The key to working across disciplinary boundaries, concludes Morgan, “is figuring out how to build an institutional relationship that involves stabilizing ways for people to work together — so you really can understand each other.”
To date, science, technology, and society (STS) departments are relatively few and small-scale. Pioneers include Carnegie Mellon, Cornell, Rutgers, Rensselaer Polytechnic Institute, Arizona State, Virginia Tech, MIT, and the Colorado School of Mines. Carnegie Mellon and RPI maintain interdisciplinary engineering programs that offer dual majors incorporating both engineering and social science skills. The University of Virginia’s School of Engineering offers STS as a minor. The Colorado School of Mines grants a graduate certificate in science and technology policy.
Yet collaboration between engineers and social scientists seems destined to spread as a result of encouragement from ABET, the accrediting agency, and the National Science Foundation, where Director Subra Suresh made interdisciplinary cooperation a priority. “Much of NSF’s involvement is motivated by the observation that engineering problems are really people problems, too,” says Jeryl Mumpower, an NSF division director who teaches at Texas A&M. Further encouragement has come from groups such as the Society for Social Studies of Science (known as 4S) and the International Network for Engineering Studies. Large U.S. corporations, as well, are hiring social scientists to collaborate with engineers.
Growing classroom interest could spur the pace. Patricia Ann Kramer, a University of Washington professor trained in engineering and anthropology, says younger students are eager to get more exposure to social science. Kramer proposes including the social sciences in undergraduate engineering education — first in classes and later for senior projects. “If we begin the collaboration early, it will continue for the rest of their lives,” says Kramer, a civil engineer who holds a Ph.D. in anthropology. In time, as Virginia Tech’s Gary Downey can attest, today’s bridge-building may yield better engineers and more useful designs.
Art Pine is a freelance writer and former Washington correspondent for the Los Angeles Times and the Wall Street Journal.
Mysteries of the Mind
Coming up with its 14 Grand Challenges for the 21st century, the National Academy of Engineering mused that to date, artificial intelligence devices had been designed “without much attention to real ones.” It proposed the ultimate biomimicry endeavor: discovering how the mind works by reverse-engineering the brain. This route, the academy said, “promises enormous opportunities for reproducing intelligence the way assembly lines spit out cars or computers,” along with “deeper insights about how and why the brain works and fails.”
Researchers are taking up the challenge, as Mary Lord and Corinna Wu report in our cover story. Functional magnetic resonance imaging and micro-endoscopes enable them to peer into the active brain and begin to answer questions like, “What is memory?” or “Why do we sleep?” Their efforts could gain a further boost if Congress approves President Obama’s expected 10-year Brain Activity Map project.
Just as brain research requires cooperation between neuroscientists and engineers, researchers and engineering instructors are finding benefit in collaboration with anthropologists, political scientists, and psychologists. Physical scientists, not to mention certain members of the U.S. Congress, have traditionally looked askance at this kind of teamwork. As Carnegie Mellon’s M. Granger Morgan tells writer Art Pine, they had the idea “that nothing goes on in the social sciences that a physicist couldn’t invent at a cocktail party.” But in our feature “Strange Labfellows,” Smith College’s Donna Riley offers another view: “With the emerging technologies, we now have more complex systems to deal with, and they’re more obviously intertwined with social systems.”
Riley’s own students recently won top prizes in an interdisciplinary pursuit: an NAE ethics video competition on the theme of energy policy fairness and sustainability. A longer-standing ethics topic is animal testing. No matter what the eventual benefits – better medical devices, pharmaceuticals, or understanding how the brain works – critics say the practice inflicts unnecessary pain and should be halted. Engineers have used animal tests in developing such products as an adhesive to aid recovery in stomach surgery. But, as Jaimie Schock reports in our third feature, engineers also are exploring ways that the same kind of data – or better yet, actual human data – can be gathered with advanced technology.
We hope you enjoy the March-April Prism. As always, we welcome your comments.
Mark Matthews email@example.com
A cavernous concourse and imposing façade qualify New York’s Grand Central Terminal – 100 years old in February – as an urban icon. But the innovations that made it the city’s most important transportation hub lie mostly belowground and in plans envisioned by William John Wilgus, a railroad engineer trained on the job and through a Cornell correspondence course – that era’s MOOCs. Removing surface tracks that separated Manhattan’s east and west sides, he devised a two-level underground station, freeing 30 city blocks for lucrative development, writes structural engineer and author Richard G. Weingardt. With electrical engineer Frank Sprague, Wilgus invented the third-rail electrical track system and replaced fume-spouting steam engines with electric power. The building above, the platforms, and the new structures nearby were surrounded by what has been described as “the most elaborate system of steel framing ever built.” Wilgus also introduced air-rights leases that helped cover construction costs. With justification, the Western New York Railway Historical Society called Wilgus (inset, right) “a civil engineering genius well ahead of his time.”
Green, renewable energy from the wind and sun has two hurdles to overcome. It’s intermittent, so green power can’t work as a baseload source the way nuclear or fossil-fuel plants can. And production often far outstrips demand. Since electricity is hard to store, all that excess power typically goes to waste. Belgium—which hopes to wean itself off nuclear power and also has extensive North Sea offshore wind farms—has come up with a novel solution to both problems. The country plans to build a two-mile-wide, doughnut-shaped artificial island made of sand about two miles off the coast. When nearby wind farms produce more power than needed, that electricity will be used to pump water out of the central reservoir. When demand is high or when winds have slackened, the water will be let back in, turning electricity-producing turbines in the process. Belgium could eventually generate 2,300 megawatts of power from its wind farms. Its two nuclear plants now generate around 3,000 MW. Officials say it could take at least five years to construct the island. But what to name this manmade atoll? The Isle of Homer, of course, after the doughnut-loving Simpsons character. – Thomas K. Grose
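The scheme is ordinary pumped-hydro storage turned inside out: pumping the reservoir dry banks potential energy, and refilling it through turbines pays the energy back. A rough sketch of the physics — using entirely hypothetical reservoir dimensions, not figures from the Belgian proposal — shows how the recoverable energy scales with reservoir area and drawdown depth:

```python
# Back-of-envelope estimate for a ring-island pumped-storage reservoir.
# Every dimension below is a hypothetical placeholder for illustration,
# NOT a figure from the Belgian plan.
RHO_SEAWATER = 1025.0   # seawater density, kg/m^3
G = 9.81                # gravitational acceleration, m/s^2

area_m2 = 2.0e6         # assumed reservoir surface area (2 km^2)
drawdown_m = 20.0       # assumed depth the water level is pumped down
round_trip_eff = 0.75   # typical pumped-hydro round-trip efficiency

# Drawing the level down linearly means the average head is drawdown/2,
# so stored potential energy is rho * g * A * d * (d/2), discounted by
# round-trip losses.
energy_j = RHO_SEAWATER * G * area_m2 * drawdown_m * (drawdown_m / 2) * round_trip_eff
energy_mwh = energy_j / 3.6e9           # joules -> megawatt-hours
hours_at_full_wind = energy_mwh / 2300  # vs. the planned 2,300-MW wind fleet

print(f"{energy_mwh:.0f} MWh recoverable")
print(f"~{hours_at_full_wind * 60:.0f} minutes of full wind-fleet output")
```

Even with generous assumptions, the yield is modest next to the wind fleet’s full output — which is why such a reservoir is meant to smooth short-lived surpluses and lulls, not to serve as seasonal storage.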
Just when folks have gotten their heads around the concept of 3-D printing — where machines construct objects one thin layer at a time by following three-dimensional, computer-aided designs — an MIT researcher has introduced what he calls 4-D printing. Skylar Tibbits, an architecture lecturer, says his innovation adds time to the traditional three dimensions. What he’s demonstrated, with the help of Stratasys, an Israeli 3-D printer company, is a self-assembly construct. It’s essentially a jointed strand of standard, rigid plastic that’s combined with another polymer that absorbs water and expands. As the absorbent plastic expands, it pushes the other strand into a new shape, based on a blueprint worked out in advance. In a video, one strand self-forms into a cube; another spells out the letters MIT. Tibbits says that depending on the types of polymers used, other ambient energy sources, including light, sound, or heat, could be used to generate assembly. One possible use, he says, would be underground water pipes that expand or contract to handle varying flows of water—eliminating burst pipes; another is self-assembling furniture. That said, even the most sophisticated CAD programs would most likely crash trying to figure out Ikea assembly instructions. –– TG
Retinitis pigmentosa is a rare genetic eye disease that strikes around 100,000 Americans. Sufferers initially lose their peripheral vision and eventually go blind. The U.S. Food and Drug Administration recently approved the world’s first retinal implant that allows sufferers to regain some vision. The disease destroys the retina’s photoreceptors, the cells that convert light into electrochemical impulses that travel up the optic nerve to the brain, where they are decoded into images. Second Sight Medical Products’ Argus II Retinal Prosthesis System implants 60 electrodes into the retina and uses special glasses affixed with a tiny camera to send them images. Of the 60 patients treated so far, all have had varying degrees of vision restored, though mainly just an ability to detect dark and light. A few, though, were able to read newspaper headlines. The Argus II system already has been approved in Europe, where it costs around $100,000, with surgery an additional $16,000. The company expects it will cost more in the United States. Meanwhile, at MIT, electrical engineering professor John Wyatt is working on an implant system that would use 400 electrodes. And at Stanford, Daniel Palanker, an associate professor of ophthalmology, has been experimenting with a system that would implant 5,000 photovoltaic cells to help those blinded by macular degeneration, a condition that typically strikes the elderly. – TG
Golden Rice has long been the cause célèbre of bioengineered foods. Developed around a dozen years ago by German and Swiss researchers, the grain is engineered to help combat vitamin A deficiency, a cause of blindness and death in the developing world. A Lancet study estimated that 668,000 children younger than age 5 die from this scourge each year. Greenpeace and other environmental groups opposed to genetically modified foods have long fought against Golden Rice, stymieing planting efforts. Anti-Golden Rice activists claim it’s better to treat vitamin A deficiency with supplements or food-fortification programs. But, as a recent article on the website Project Syndicate explains, supplemental programs cost $4,300 for every life saved, and fortification efforts cost $2,700. The engineered rice? Just $100 for each life saved. Two new studies found that two ounces of Golden Rice can provide 60 percent of the daily recommended intake of vitamin A. As this evidence mounts, the Philippines will allow Golden Rice to be grown there later this year, and Bangladesh and Indonesia are set to soon follow. The Guardian now reports that Australian researchers are working on a banana that will boost not only vitamin A levels, but iron levels, too. – TG
Hangover “cures” typically require ingesting something disgusting or reaching for that old standby: the hair of the dog that bit you. The only real cure is waiting many hours for your liver to filter the booze from your bloodstream. But another treatment may be in the offing. A researcher at the University of California, Los Angeles, has mixed two complementary enzymes into a chemical cocktail that could ease the effects of imbibing too much. Yunfeng Lu, a professor of chemical and biomolecular engineering, says the enzyme concoction works just like the liver to rid the body of alcohol — only faster. One enzyme turns the booze into hydrogen peroxide. But since hydrogen peroxide is toxic, the second enzyme immediately kicks in and forces it to decompose into harmless water and oxygen. Tipsy mice treated with the enzymes had blood alcohol levels 34.7 percent lower than untreated mice after three hours. Rodents given the treatment and then fed alcohol had levels that were 36.8 percent lower after three hours. In capsule form, the enzymes could work as a prophylactic that helps protect the body from a heavy night out or as an after-the-fact cure that offers speedier relief. Cheers! – TG
Aircraft designers have known for years that a blended-wing-body fuselage, which resembles a manta ray’s silhouette, would be more efficient and consume less fuel than a tubular-bodied plane. But the traditional fuselage is easier to construct in a way that keeps the cabin pressurized while also withstanding outside forces, notes Technology Review. Now, engineers from NASA, Boeing, and Pratt & Whitney have developed methods to construct a sturdy hybrid-wing aircraft that would use just half the fuel of today’s jetliners. The hybrid wing is constructed from a carbon-fiber fabric stitched over carbon-composite rods and then coated with an epoxy to make it rigid. Tests showed that sections of the fuselage could withstand the outside forces that jets typically face and maintain cabin pressure. If some part failed under extreme pressure, the stitching stopped cracks from spreading. The plane would be powered by the new, fuel-sipping Pratt & Whitney ultrahigh-bypass-ratio engine. Because the engine’s front fan is larger than the engine core, it’s hard to attach beneath standard wings, the magazine explains. But the hybrid flying wing uses a top-mounted engine configuration. Some of the manufacturing techniques NASA developed could be ready to use within 10 years to improve conventional aircraft construction. The flying wing itself is at least 20 years away from commercial takeoff. – TG
TOKYO – Students at Kyoto’s Higashihikari Elementary School have a new classmate – a 47-inch robot named Robovie. Despite primitive conversation skills, the android has already wowed the children with its formidable memory: It has been programmed to recognize every one of the 114 kids in the fifth grade and their teachers by face and voice. And since it already has the class science textbook on its hard drive, Robovie is way ahead of even the class geeks – and has its own nerdy sense of humor. “What’s another word for curly copper wire?” the teacher asked. “A coil,” came the reply. “I’ve got one in my body – it moves my motor!” “What do you use for food?” a child asked. “I eat electricity,” the bot replied. “Not crazy about water, though.” The first-of-its-kind, 14-month experiment started in February 2013, and while teachers are hoping the project will inspire kids in science class, the maker aims to reap reams of observational data to help make future robots behave more naturally around their human overlords. Masahiro Shiomi, a researcher with the Advanced Telecommunications Research Institute, says, “We want the robot to learn right alongside the children. Just think of it as the slightly odd new kid, who hangs out in the lab.” – Lucille Craft
The Beat Goes On
If the U.S. Air Force had had its way back in the 1950s, residents of Roswell, New Mexico, wouldn’t be the only ones claiming to have seen flying saucers. Recently declassified documents, including schematics, from the National Archives reveal plans to build a fleet of aircraft that looked like something straight out of a sci-fi movie. A cutaway view of Project 1794 from 1956 shows a saucer-shaped vehicle with a pilot’s cockpit housed in a bubble-like protuberance in the middle. The craft was designed for vertical takeoff and landing and flying at speeds of up to Mach 4 with a ceiling of 100,000 feet. Two prototype “proof of concept” subsonic versions of Project 1794 were built by the Canadian aeronautical firm Avro Aircraft. Tests, however, showed both to be unstable, and the Air Force canceled the project in 1961. – Pierre Home-Douglas
Feel No Evil
As fans of Spider-Man comic books know, protagonist Peter Parker gained “spider sense”—an ability to detect danger—after he was bitten by a radioactive arachnid. Why not build a suit to mimic that skill for real, thought Victor Mateevitsi, a University of Illinois at Chicago computer science postdoc. Mateevitsi’s version has seven modules, each containing sonar sensors that can pick up ultrasonic reflections from nearby objects. As the wearer gets closer, the sensor triggers a small, mechanical arm, which presses down with ever increasing force, warning of lurking perils much as Spider-Man can sense punches before they land. “You can feel imminent danger,” says Mateevitsi. To test his SpiderSense suit, a blindfolded Mateevitsi tossed cardboard ninja stars at colleagues who moved at him as if to attack. He reports an impressive 95 percent accuracy rate. That’s a fun stunt, but Mateevitsi envisions serious, practical applications for his invention: helping the blind to better navigate the physical world and enabling firefighters to maneuver through dark, smoky environments. If they materialize, such useful applications would make Mateevitsi a true superhero — of research. – TG
Sun Never Sets
Behold, the mighty MOOCs. Coursera and edX, two leading companies providing these massive open online courses, have recently doubled in size by greatly expanding their global reach. Both are but a year old. The Harvard-MIT nonprofit edX is an open-source education platform that, along with offering free online courses, also researches how students learn. The newest six universities joining edX are the Australian National University, the Netherlands’ Delft University of Technology, Switzerland’s EPFL, McGill and Rice universities, and the University of Toronto. Meanwhile, for-profit Coursera, created by two Stanford University computer science professors, added 29 more partner universities to its platform, bringing the total to 62. Among them: Penn State, the University of Rochester, École Polytechnique of France, the Technical University of Denmark, the Chinese University of Hong Kong, and National Taiwan University. Coursera has received about $18 million in venture capital money, while MIT and Harvard each kicked in $30 million to launch edX. Like many digital start-ups, however, both pioneers are still trying to figure out how to make money. Now there’s an idea for an online course. – TG
In August 2011, the President’s Council on Jobs and Competitiveness declared that the United States needed 10,000 more engineering graduates a year. The council is now defunct, but as of 2011, engineering schools were already on their way to meeting its goal. That year saw 82,903 engineers graduate, a jump of 9,523 – or 13 percent – over 2007, and a majority of schools surveyed by ASEE reported an increase. With recent enrollments up and retention strengthened, the trend should continue.
An MIT-trained congressman won’t exempt research funding from cuts.
Looking from the dais during a House Science, Space, and Technology Committee hearing on research and development, Kentucky Republican Thomas Massie recalled an earlier encounter with Charles Vest, one of the experts invited to testify. As an MIT student, Massie had been called on then-President Vest’s carpet for hacking into the system that controls automated blackboards, making them move in a way that displaced and hid professors’ scribbles. Avoiding any serious sanction, Massie finished at MIT with degrees in electrical and mechanical engineering, carrying off the first $30,000 Lemelson-MIT Student Prize for invention. He went on to build a company with his wife to market the PHANTOM, a tactile interface that let users “feel” objects found in cyberspace. His patents continue to produce income for both him and the university.
As the February 6 House hearing got under way, laying the groundwork for reauthorization of the landmark America COMPETES Act, Massie framed a question for Vest, who is soon to retire as president of the National Academy of Engineering. Massie says he planned to preface it by expressing appreciation to Vest for going easy on that long-ago student prankster. He then wanted to ask what advice Vest would offer universities that had not followed MIT’s successful model for patent royalties – with the university, tech-transfer office, and student-inventor each receiving one-third shares.
Massie’s question might have raised doubts about R&D management and universities’ cry for robust government funding, but he never got to ask it. An indignity endured by freshman lawmakers is being the last to speak, and this particular hearing ended before his turn came. But Massie, a Tea Party enthusiast and fan of both Ron and Rand Paul, has already made his presence felt on the House floor, bucking GOP leaders by voting against John Boehner’s re-election as speaker, the defense authorization bill, the New Year’s fiscal cliff deal, and Hurricane Sandy relief. And lack of seniority seems unlikely to suppress his independent, if not mischievous, streak on the science panel, where he chairs the Technology and Innovation Subcommittee. In a phone interview from Kentucky as he was en route back to Washington, Massie made clear that university researchers shouldn’t count on his unstinting support.
“The greatest threat to our economy is the national debt,” Massie said, and that “trumps everything else.” Even though he once worked in federally supported MIT labs, he says no part of the government should be exempt from belt tightening. “Folks looking to me to carve an exception are going to be disappointed.” His going-in assumption is that “there is waste in every department.”
None of this means Massie flatly opposes federal R&D support or lacks enthusiasm for cutting-edge technology. After selling his start-up, SensAble Technologies, he bought a farm, built an off-grid, solar-powered house, and experimented with methane from cattle manure as an energy source. He also became board chairman of a medical-device start-up and put in an early order for an $80,000 Tesla boasting a long-range battery. (He tools around Washington in a 1993 Mustang.) Yet he objects to the taxpayer-funded subsidy he’s due for buying an electric vehicle and says he has yet to see evidence that carbon emissions are causing droughts, as President Obama declared at his second inaugural.
While his own firm benefited from Small Business Innovation Research grants, Massie thinks the program could be better administered: “I saw companies that got stuck in a rut and became SBIR mills,” devoting too much effort to renewing government funding and too little to commercializing technology, he says.
For Massie, an encouraging moment at the House hearing came when Vest and fellow witnesses Shirley Ann Jackson, president of Rensselaer Polytechnic Institute, and Texas Instruments CEO Richard Templeton joined in lamenting the woeful state of grade-school science, technology, engineering, and math education. “I feel that the focus hasn’t been there,” Massie says. As someone initially drawn to science by school science fairs, he was also pleased to hear Vest praise the Maker movement as a way to engage the young.
Engineers, not artists, should drive interdisciplinary design projects.
A recent U.S. News & World Report article on graduate engineering programs began by stating that in designing a bridge, engineers “tend to draw upon designs that have worked in the past.” As if that were a totally bad thing, the article suggested we look to artists for a “more creative approach.” By collaborating, engineers and artists could “push the limits of what is already known about their respective fields.”
The quotes came from a graduate engineering student who had enrolled in what the article described as “a new movement in engineering schools toward the interdisciplinary study of science and art.” Reinforcing those ideas, another graduate student opined that “engineers tend to make very small, incremental improvements on things” and “don’t really allow their creativity to take full force.” Furthermore, she added, “artists can teach you to be more open to new things.”
Even a supposedly sympathetic professor of music did not seem to be reaching out when he implied that engineers were responsible for “ugly buildings and clunky gadgets.” He did admit, however, that engineering advances were “allowing artists to think new thoughts and express new ideas.”
Interdisciplinary programs obviously can help both engineers and artists to think outside their respective boxes, but there is a downside to interdisciplinary thinking carried to extremes. It may be exciting to push the limits, but since engineering’s true boundaries are often unknown, figuring out how hard to push can prove difficult.
What should not be left out of the curriculum of any interdisciplinary program are case studies of success and failure that provide concrete examples of what engineers have done right—and what interdisciplinary teams have gotten wrong. Only by bringing the conversation down to this level of specificity can students and their professors fairly test generalizations and hypotheses about engineers and artists.
While it may be true that engineers tend to advance the state of the art in small increments, there are plenty of counterexamples. The main-span length of suspension bridges grew by only about 12 percent over the half century following the completion of the Brooklyn Bridge. However, the main span of the George Washington Bridge, completed in 1931, was 95 percent longer than the previous record holder. A similar observation could be made about the tallest building in the world today—the Burj Khalifa in Dubai—which surpassed the former highest building by 63 percent.
The involvement of artists on interdisciplinary teams can promote creative approaches to design, but the case of London’s Millennium Bridge provides a cautionary tale. The design competition for this structure required that teams include an engineer, an architect, and an artist. Aesthetic appearance played such a dominant role in the bridge’s design that it wobbled under the first crowds of pedestrians and had to be closed and redesigned. The design of the infamous Tacoma Narrows Bridge, completed in 1940, was so driven by aesthetic goals that its unprecedented slenderness proved its undoing.
Interdisciplinary teams and goals are wonderful in the abstract, but the reality of past experiences warns us about valuing art and aesthetics over engineering and function. Failures stemming from ill-advised interdisciplinary enterprises, in which the creative urge to do something different has distracted designers from applying sound technical thinking about consequences, should be essential reading for students and teachers of all disciplines. Effective interdisciplinary design teams should involve not only creative engineers and visionary artists but also ghosts of the past who rightfully haunt us with their monumental failures.
Henry Petroski is the Aleksandar S. Vesic Professor of Civil Engineering and a professor of history at Duke University. His latest books are An Engineer’s Alphabet: Gleanings from the Softer Side of a Profession and To Forgive Design: Understanding Failure.
A government initiative to advance American manufacturing is a good, if modest, step.
The White House recently announced a public-private-academic partnership to accelerate the advancement of the U.S. manufacturing industry. If funded by Congress, the National Network for Manufacturing Innovation (NNMI) will comprise 15 regional innovation hubs with a mission to fund applied research and provide shared research facilities; develop workforce-training programs; and assist industry in expanding manufacturing capabilities and supply chains.
Government-sponsored innovation efforts usually miss the mark, but this one seems to be surprisingly well conceived. The U.S. manufacturing renaissance that is already happening needs all the help it can get. Automation technologies such as the $22,000 Baxter adaptive robot from Rethink Robotics are rapidly changing the cost/benefit ratio of manufacturing goods in the United States. Designed to work safely alongside humans, Baxter has two arms, a face that displays simulated emotion, and cameras and sensors that detect the motion of its coworkers. It can perform assembly and move boxes — just as humans do. Its operating costs are comparable to Chinese labor rates. And Baxter will work 24 hours a day without complaining.
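The claim that Baxter's operating costs are "comparable to Chinese labor rates" can be checked with simple amortization arithmetic. The sketch below spreads the article's $22,000 purchase price over an assumed service life; the three-year lifespan is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope amortization of a robot's purchase price.
# Only the $22,000 price and 24-hour operation come from the article;
# the service life is an assumed, illustrative figure.

ROBOT_PRICE_USD = 22_000      # purchase price cited in the article
ASSUMED_LIFE_YEARS = 3        # assumption: useful service life
HOURS_PER_YEAR = 24 * 365     # continuous, round-the-clock operation

def amortized_hourly_cost(price, life_years, hours_per_year):
    """Spread the purchase price evenly over total operating hours."""
    return price / (life_years * hours_per_year)

cost = amortized_hourly_cost(ROBOT_PRICE_USD, ASSUMED_LIFE_YEARS, HOURS_PER_YEAR)
print(f"~${cost:.2f} per operating hour")  # under these assumptions, well under $1/hour
```

Even ignoring maintenance and energy, the exercise shows why round-the-clock operation is central to the cost comparison: the same price amortized over a single 8-hour shift would be roughly three times higher per hour.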
Beyond cost advantages, locating manufacturing close to engineering expertise also fuels innovation. GE found it could profitably manufacture appliances once again at Appliance Park in Louisville, Ky., by automating its production lines. It saw great synergy in having its engineers and plant managers work side by side. Tesla is building the world’s most advanced electric vehicle, the Model S, in Fremont, Calif.—next door to Silicon Valley. Although the area has some of the world’s highest labor costs, Tesla uses robots to do the vehicle assembly, mitigating the bottom-line impact. And proximity to Silicon Valley gives the automaker access to some of the best design and engineering talent in the world. Apple Inc. also sees the benefit. It recently announced its intention to start manufacturing computers in the United States again.
With the labor-cost advantages that robots provide, it no longer makes sense to ship raw goods to China to have them assembled and shipped back across the Pacific. But there are logistical challenges that prevent manufacturing from returning home. Entire supply chains are now located in China, so there is a chicken-and-egg problem. It is also very hard to find workers with the skills needed to operate and maintain sophisticated computer-based equipment. And large capital investments are required to set up manufacturing plants. This is where the government can help: retraining the workforce and assisting businesses with new factory setup.
Even as robots are beginning to disrupt traditional assembly-style manufacturing, other technologies are advancing that will put the robots out of business in the next decade. New materials such as carbon nanotubes and ceramic-matrix nanocomposites (and their metal-matrix and polymer-matrix equivalents) are enabling designers to create products that are stronger, lighter, more energy-efficient, and more durable. Then there is 3-D printing, which promises to change the process of manufacturing itself. Rather than power-driven machine tools that physically remove or “subtract” material to make a product, “additive manufacturing” produces goods based on 3-D models by laying down successive layers of materials. This allows manufacturers to create complex objects without any tools or fixtures — “laser printing” goods rather than assembling them.
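The layer-by-layer idea behind additive manufacturing can be illustrated with a minimal sketch. The part height and layer thickness below are arbitrary example values, not figures from the article.

```python
# Minimal illustration of additive manufacturing's core idea: a part is
# produced as a stack of successive horizontal layers rather than by
# cutting material away. Dimensions here are arbitrary examples.

def layer_heights(part_height_mm, layer_mm):
    """Return the z-coordinate at which each successive layer is deposited."""
    n_layers = round(part_height_mm / layer_mm)
    return [round((i + 1) * layer_mm, 6) for i in range(n_layers)]

layers = layer_heights(part_height_mm=10.0, layer_mm=0.2)
print(len(layers), layers[:3])  # a 10 mm part at 0.2 mm layers needs 50 passes
```

The point of the sketch is the contrast the article draws: the geometry comes entirely from the 3-D model, with no part-specific tools or fixtures required.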
More research is required to accelerate the creation of advanced materials and to standardize the processes, materials, and equipment for additive manufacturing. Most critical is to unclog the pipeline from university lab to industry. These are all within the scope of NNMI, but it has incredibly long time horizons and relatively small amounts of funding. The government expects to invest $70 million to $120 million over a five- to seven-year span in each of the 15 hubs to be launched over the next two to three years. This is a drop in the ocean for a trillion-dollar market opportunity and competitive global advantage. What we need is a Manhattan Project-scale initiative — with far greater investments and much shorter time frames.
Vivek Wadhwa is a scholar specializing in entrepreneurship. He is vice president of academics and innovation at Singularity University and is also affiliated with Duke University’s Pratt School of Engineering, Stanford University, and Emory University.
A case study reveals how engineers integrate mental models and simulations to tackle complex problems.
Researchers have long had an interest in the processes of engineering design, particularly how a design engineer makes progress under conditions of uncertainty and constraint to arrive at a solution. Over the years, laboratory studies have yielded valuable insights into the problem-solving steps of both expert and novice designers. However, nothing can replace direct observation of an engineer working to solve a significant design problem, or “design in the wild.”
In our longitudinal case study, we observed and chronicled the reasoning and problem-solving processes employed by an engineering master’s student as she designed a microfluidic device to measure the response of blood T-cells to a chemical stimulant over time. Such “labs on a chip” bring together many complex experimental processes and include such constraints as cell damage and other factors unique to biological material.
For this study, we undertook field observations, wrote field notes of the design work unfolding on the lab bench, and conducted informal, unstructured interviews with the engineer at work. At weekly lab meetings, we took notes and audiotaped her presentations and the accompanying discussions. We analyzed the engineer’s PowerPoint lab presentations, two posters she created for conferences, two of her publications, and her master’s thesis. In our analysis, we traced her iterative actions and activities as she progressed from a partial design to a fully working lab on a chip.
The engineer designed the device in an iterative and parallel fashion, revising different facets repeatedly. When design impasses loomed, she generated representations that served multiple cognitive functions. One design challenge involved determining the geometries of a herringbone sluiceway to mix the cells and stimulant. The engineer built a computational fluid dynamics model that allowed her to simulate flow patterns for different channel widths and liquids. She compared the simulation results against actual flow patterns in prototype channels, using water and sucrose to mimic viscosity. Adding a fluorescent tag to the liquids allowed her to observe and track the level of mixing. She then compared confocal microscope images of the flow against patterns generated by the model’s visualization. Once a good correspondence between simulated and actual output was established, the validated model was used to generate and test different possible geometries. The model geometries that produced the desired mixing the fastest were built and tested.
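The validate-then-explore loop described above can be sketched in code. Everything below is a hypothetical stand-in: the simulator function, the agreement tolerance, and the candidate widths are illustrative placeholders, not the engineer's actual CFD model or data.

```python
# Illustrative sketch (not the engineer's actual code) of the workflow in the
# case study: validate a simulation against a bench measurement, then use the
# validated model to rank candidate channel geometries by mixing speed.

def simulate_mixing_time(width_um, viscosity):
    """Stand-in for the CFD model: returns a mixing time for a channel width.
    The formula is a made-up placeholder, not real fluid dynamics."""
    return width_um / (10.0 * viscosity)

def validated(sim_result, bench_result, tolerance=0.15):
    """Accept the model if simulation and bench measurement agree within 15%."""
    return abs(sim_result - bench_result) / bench_result <= tolerance

# Step 1: compare the model against one prototype channel (numbers illustrative).
bench_time = 10.2                          # measured mixing time, arbitrary units
sim_time = simulate_mixing_time(100, 1.0)  # simulated time for the same channel
assert validated(sim_time, bench_time), "model needs refinement before use"

# Step 2: once validated, sweep candidate geometries and pick the fastest mixer.
candidate_widths = [60, 80, 100, 120]
best = min(candidate_widths, key=lambda w: simulate_mixing_time(w, 1.0))
print(f"best candidate width: {best} µm")
```

The design choice the sketch captures is the ordering: the model earns trust by matching physical measurements first, and only then is it used to explore geometries that were never built.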
In this single design phase, we observed how the engineer utilized external models to simulate possible scenarios, visualize the device output, tag device elements to track the level of mixing, and interrogate those models to gain greater understanding of a local problem, making it possible for her to efficiently design her device.
This study in the wild establishes that there was an interactive process between the internal conceptual models of flow and viscosity that the engineer had gained from extensive prior electrical engineering experience and the locally generated external representations in the computer simulations. Consistent with distributed cognition theory, this integration of mental models, drawings, and prototypes formed a distributed cognitive system that constituted the reasoning process, leading to a novel object with features that supported new experiments.
This case study also clarifies the need to revise our students’ understanding of learning and problem solving to one that appreciates the bootstrapping value of distributing complex cognitive tasks across internal and external representations. Engineering educators should help students understand that sketched or built models that support simulation, visualization, and interrogation are essential to successful design. Additionally, the classroom is a place where students can begin to develop the representational fluency and flexibility exhibited in the design process by the engineer in this study.
Joshua Aurigemma is a recent graduate of the Georgia Institute of Technology; Sanjay Chandrasekharan is on the faculty of India’s Tata Institute of Fundamental Research; Nancy J. Nersessian is a professor in the School of Interactive Computing at Georgia Tech, where Wendy Newstetter is the College of Engineering’s director of educational research and innovation. The work was supported by NSF grant DRL097394084.
A Penn State scientist recounts his ordeal as a target of climate-change deniers.
The Hockey Stick and the Climate Wars: Dispatches From the Front Lines
by Michael E. Mann, Columbia University Press, 2012.
Like the climate, the climate change wars are heating up. After muted mention in the 2012 presidential debates, global warming came to the fore during the February State of the Union address when President Obama declared that 12 of the hottest years on record have occurred within the past 15 and that recent extreme weather is no “freak coincidence.” Obama urged a bipartisan, market-based solution by Congress, suggesting that his administration will take action if Congress does not. Issuing the Republican response, Marco Rubio dismissed what he termed job-killing laws, arguing that “government can’t control the weather.” The Florida senator has subsequently questioned human responsibility for changes in the climate, noting that among scientists, there exists “reasonable debate on that principle.”
In The Hockey Stick and the Climate Wars, climatologist Michael E. Mann asserts that consensus about human impact on the climate is, in fact, very strong; that most “debate” takes place outside the scientific community; and that much of it has been anything but reasonable. Few scientists deny the urgent need to address this issue: As a 2010 letter in Science magazine, signed by 250 National Academy of Sciences members, put it, “for a problem as potentially catastrophic as climate change, taking no action poses a dangerous risk to our planet.” The same letter denounced “McCarthy-like threats” against scientists by groups seeking to obstruct political action. Hockey Stick tells the story of Mann’s own harrowing experience as a key target of such groups, many funded by the fossil fuel industry.
The focus of “Climategate” was a study first published by Mann and colleagues in 1999 with a graph illustrating global temperatures dating back to the year A.D. 1000. The graph gained the “hockey stick” label for its shape: a line of moderately varying temperatures up to the 20th century with a sharp increase over the past 50 years – not unlike the shaft and blade of a hockey stick. While the study did not ascribe causes to the increased warming, Mann claims that early climate change opponents aimed to discredit the findings and the scientists who produced them. The hockey stick study gained even greater notoriety after appearing in a 2001 report for the Intergovernmental Panel on Climate Change.
As early as 1999, Mann found himself swept into a maelstrom of orchestrated attacks in blog posts, newspaper articles, and cable talk shows. By 2009, Mann writes, a fierce campaign was being waged to force his employer, Penn State University, to investigate and fire him, to get colleagues to denounce him, and to have his National Science Foundation grants revoked. Mann’s email was hacked, with select passages published on the Internet; a belittling video appeared on YouTube; and several House Republicans called for investigations of his work. Mann and his colleagues were eventually exonerated of wrongdoing, and the tactics used against them were denounced by many. Nonetheless, climate change denial continues unabated today.
Now director of the Earth System Science Center at Penn State, Mann describes himself as a researcher who was once content to remain in his lab analyzing data and “pursuing curiosity-driven science.” Today he realizes the imperative for scientists to commit their expertise to public discourse on issues of pressing societal importance. Public advocacy may not be a comfortable role for many academics, he writes, yet “without a science-literate and politically aware populace, there can be no match against well-funded, well-organized groups that place little value on honesty or integrity, that cleverly masquerade denialism as skepticism.”
Addressing that need, Hockey Stick serves as an excellent primer on climate change science, with early chapters detailing the methods by which climatologists analyze their data. Just as significant are the book’s closely documented account of “a massive disinformation campaign funded by powerful vested interests” and the warning it issues to researchers who prefer to remain above the fray of politics. Given the urgency of our global situation, Mann argues, it behooves not just scientists but everyone to become informed and engaged.
Robin Tatu is Prism’s senior editorial consultant.
Drum Major for Diversity
As an engineer and educator, Ray “Doc” Haynes was destined to devote his career to championing diversity. It’s literally in his DNA: Cherokee (Deer Clan) on his mom’s side, Mexican on his dad’s. “Diversity was part of my life from birth,” explains the retired Northrop Grumman recruiter and Cal Poly professor, who grew up on the border in Nogales, Ariz. Ditto engineering—his father, uncle, and brother were all engineers. “I basically followed the family profession,” says Haynes, who earned a B.S. in aerospace engineering from the University of Arizona, then rounded out his education with a master’s in systems engineering, an M.B.A., and a Ph.D. in operations research management. “I just wanted to be called ‘doc’ like my grandfather and namesake.”
Coming from a diverse community taught Haynes the importance of helping people overcome their differences by treating them with “respect and honesty.” It also may have buffered him from discrimination as he climbed the corporate ladder as an engineer and manager. (Haynes did encounter segregation, however; his parents had just enrolled him in third grade in their hometown of Claremore, Okla., when the principal came in and asked his teacher to move him to the “Indian” classroom.)
On leave from TRW to teach in Cal Poly’s graduate engineering management program from 1989 to 1999, Haynes saw the need to “reach further down the education pipeline to find future engineers” and to encourage more people from diverse backgrounds to pursue engineering. He has worked tirelessly toward that goal ever since, not only on ASEE’s leadership team but also as an adviser to minority engineering and science associations, industry, and schools. Now retired, Haynes continues to push the K-12 envelope as STEM education director for DaVinci Schools, a group of industry-sponsored startup California charter schools that combine project-based learning with a college-prep curriculum, internships, and student presentations. Middle school students, for instance, might build trebuchets to solve quadratic equations, or create flash animations to analyze projectile paths. Haynes also is forging K-12 STEM opportunities at Arizona’s Cochise College.
Diversity, he says, is like any other design challenge. In his experience, “a sense of humor and ready smile” help to bridge any real or perceived differences between individuals and go a long way toward mitigating the challenges of working, living, and studying together.
Leader and Mentor
Whether it’s for veterans, women, minorities, or the odd person out in a team project, Stephanie Adams will figure out a way to make engineering education work. She might urge colleges to award course credit for military technical training, for instance, or tell professors they can’t just stick three or five students together and expect a seamless unit. If there’s a theme that runs through the career of this interdisciplinary engineer and ASEE leader, it’s a commitment to the student whose promise might get overlooked in the normal course of things, to the detriment of both the student and engineering. Mentoring figures high in her approach, as underscored by the Holling Teaching/Advising/Mentoring Award she received during her decade at the University of Nebraska-Lincoln. “I would not be where I am today without a whole lot of people,” Adams says. “Therefore, I have an obligation to help those behind me, both professionals and students.”
The daughter of educators – her father’s climb into college administration took the family from Virginia Beach, Va., to Syracuse, N.Y., and then to South Bend, Ind. – Adams at first hoped to become a doctor, then got excited about biomedical engineering while recovering from knee surgery in the ninth grade. She went on to major in mechanical engineering. After a series of internships at 3M Corp., her interests turned toward systems and industrial engineering, and then to education. Her doctorate at Texas A&M combined engineering, management, and education, giving Adams the language to express as an educator “things that I knew intuitively.” With a break for a National Science Foundation/AAAS fellowship in Washington, she moved between teaching, administration, and research at the University of Nebraska-Lincoln and Old Dominion University and now heads the Department of Engineering Education at Virginia Tech’s College of Engineering. Her series of awards over the years includes the 2008 DuPont Minorities in Engineering Award from ASEE. A gregarious yet forceful presence at ASEE conferences, Adams chairs the Public Interest Council I and is nearing the end of a two-year term on the Board of Directors. She will serve on the Nominating Committee next year. Expect to see her playing key roles at ASEE into the future.
We should open students’ eyes to the aesthetic appeal of STEM.
At a recent professional society meeting attended by distinguished researchers from engineering and computer sciences, the moderator solicited ideas for symposium themes highlighting the beauty and benefits of scientific research. The group was prolific in coming up with themes of benefit to the society but had trouble articulating the beauty of its endeavors. Our inability to relate the science, technology, engineering, and mathematics enterprise to beauty might be at the core of the problems we are facing in recruiting and retaining students in STEM disciplines.
Only 6 percent of 24-year-olds in the United States have earned a first degree in the natural sciences or engineering, placing the country 20th in a comparison group of 24 countries. There is a huge racial disparity in STEM graduate production. While 33 percent of white students and 42 percent of Asian American students who enter college aspiring to major in a STEM discipline complete their degrees within five years, only 22 percent of Latino students, 18 percent of black students, and 19 percent of American Indian students do so. These numbers call for a multidimensional understanding of the problem, and articulating beauty in STEM disciplines is an important dimension.
There is perhaps a historical basis for seeing science and technology as disconnected from nature and beauty. The industrial development and advent of machines, dubbed the “second creation” of the Garden of Eden, had skeptics right from the beginning. Henry David Thoreau, in the 19th century, proclaimed that “the dappled sunlight falling across the path of the poet provokes joy beyond that which human technology can bring.” Walther Rathenau, the early 20th-century German industrialist and philosopher, was concerned that mechanization would turn humans into mere components of systems. These historic attitudes pose important challenges in STEM education: How do we train our students to see beauty in all forms, man- and nature-made? How do we accomplish a paradigm shift away from viewing the human-built world as competing against the natural world? Technology does not have to be set up against the romantic view of nature. Love of beauty is innate to STEM students just as it is to all humans. They must come to see that well-designed, well-performing technology has its own elegant appeal.
STEM training is necessarily logic-based. The training required by these disciplines involves probing the objective realities, understanding the phenomenological world, and formulating theories, concepts, and designs. Our paradigm shift should be toward providing this training within a framework that integrates acquiring scientific and technical knowledge with experiencing the beauty of all forms. The common denominator of all our learning endeavors, STEM or otherwise, is learning to be creative, whether it is in the creation of an engineering infrastructure in the objective world or in the expression of a subjective reality through a painting or a poem. Pairing the training of mathematical logic with the ability to see beauty in one’s own subjective world is one of the distinctive traits of great inventors and entrepreneurs. Visionaries see a possibility beyond what is visible and work within objective constraints to achieve it. In STEM disciplines, constraints are imposed by the laws of nature; in the liberal arts, they are the limitations of language, music, and color compositions. Our ability to imagine a creative outcome provides the impetus to navigate through these constraints.
Just as the beauty of a rose lets us endure the pricks from its stem, we can help our students overcome their struggles in STEM by firing their imaginations with its beautiful potential.
Lakshmi N. Reddi is dean of the University Graduate School at Florida International University.