Blinded by the Byte
Our over-reliance on computers can hide design flaws—with costly, sometimes fatal consequences.
Opinion By Jim Hanson
Last year’s Boeing 737 MAX crashes thrust the danger of depending on computer systems into the global spotlight, but that peril has existed since we began using computers to perform engineering analysis. Sometimes we allow the precision of the calculations to blind us to the sensitivity of the results. Or we don’t adequately anticipate the behavior at the boundary conditions. Or we make a mistake with the inputs.
Modeling errors, either by hand or when using a computer, happen daily for practicing engineers. The key is identifying them as early as possible in the design process, since the cost to fix an error rises exponentially as projects proceed toward implementation. That seems like a no-brainer, so why do we see so many costly mistakes?
There are many contributing causes. Deadline pressure can lead engineers to postpone, cancel, or ignore quality-control checks, for example. The project may lack a quality-assurance process. A system’s complexity can make it difficult for reviewers not intimately familiar with the design to identify problems. In addition, the engineers involved might not have the skills or experience to recognize that there is an issue.
That last factor should trouble all engineering educators. Have you ever had students present clearly flawed computer results on a project or homework assignment? Did many of them say “but that’s what the computer gave me” after you pointed out the errors? Their response is a symptom of a deeper woe: an inability to recognize when computer results are off.
Discussing this issue with practicing engineers, I frequently hear a common complaint: “Junior engineers trust the computer too much.” Faculty from around the world share the concern. Their lament: Students have amazing modeling skills but no idea what their results mean or if they are reasonable.
To solve this ubiquitous problem, we need to know whether engineers can be taught to recognize flawed results or whether they must learn from experience. And if the skill is teachable, whose job is it to teach: academia’s or the employer’s?
Terrified that we might be headed into an era where engineers are blindly dependent upon computers, I set out to discover how experienced engineers spot errors. This was no small challenge, since the engineers who learned how to identify errors in lengthy hand calculations are retiring daily. I met with 35 practicing structural engineers from 10 different firms. Their design experience ranged from one to 55 years, with a median of eight years. In interviews, the engineers recounted 71 specific instances of how an error was discovered. Nearly one-fifth of errors were identified in the field—the most expensive place to find glitches—but two-fifths were found using skills someone had taught the engineers.
Rather than wait for engineers to learn on the job, I decided to teach undergraduates in two classes the skills practitioners use to identify errors. The first, Structural Analysis I, is a mandatory course for all civil engineering majors at my institution. The second, Structural Analysis II, is an elective course taken by students who want to specialize in structural engineering.
To measure and evaluate results, I created a multiple-choice test that requires students to identify and then justify which answer is the most reasonable. Students can earn partial credit in the justification category by explaining why certain answers are not reasonable. I began by giving the test to practicing engineers and to students. Unsurprisingly, the practitioners identified the most reasonable answer almost twice as often as students who had learned structural analysis through a traditional curriculum. Since then, for the past 14 years, I’ve taught students evaluation skills.
The impact has been significant. In both courses, students with training in how to evaluate results substantially reduced the gap between themselves and practicing engineers when it came to identifying the most reasonable result. In terms of their ability to justify their choice, Structural Analysis I students are almost as good as practitioners while those in the elective course now surpass the experienced engineers.
Rather than send engineering graduates who blindly rely on the computer into the workplace, let’s empower our students to come to our offices saying, “I know my computer results are wrong, but I can’t figure out why.” Even better, let’s equip them to have confidence in their computer-aided results—and to know why those results are reasonable.
I’ve been able to document these tools for structural engineering and would like to encourage other instructors to lead the charge in their fields. Talk to professionals in your specialty about the tools they use to pinpoint design flaws. Train your students. We all win, and so does society, when our graduates can use computers responsibly and find mistakes early in the design process. We have an effective way to prepare students for practice. Isn’t it our privilege and obligation as educators to do so?
Jim Hanson is a licensed professional engineer and a professor of civil and environmental engineering at Rose-Hulman Institute of Technology. His textbook, Structural Analysis: Skills for Practice (Pearson, 2020), implements the techniques described in this article.