One of the best courses Speed School has to offer is Engineering 111. The course has a deceptively simple premise: design and build a windmill. The entire windmill. Design and construct the blades and the stand, build an electromagnetic motor, design and 3D print a motor mount, then wire it all up to a control unit and write the code to measure the electricity. Fingers crossed, at the end of it you had blades that spun and a light that turned on.
It was a course that forced you to see why you do something, even if only on a small scale. Why do we care about accurate design drawings? Because when you only have enough time for two or three reprints and you're taking six other classes, you really want it to work the first time. Why do you have to learn to write clean, legible code? Because the group partner who is up with you at midnight troubleshooting would really appreciate being able to understand what on earth you were thinking.
I vividly remember being frustrated numerous times throughout my undergraduate career, asking myself why I needed to learn a subject. Why do I need calculus? Why do I need to know about data structures? Why do I need linear algebra? And why do I need algorithms? Each of these courses kept your view narrow, explaining only the topic at hand and never its application, obscuring the why.
In my Artificial Intelligence class this semester, our professor asked us to create an artificial intelligence that plays a game with the user. As I worked on the project, it all began clicking into place. The problem we were tasked with was the largest we had ever been asked to tackle: over four trestrigintacentillion possible answers (it's a number, I promise, 4×10^402) and one correct solution. Brute forcing, trying every solution on some of the fastest processors available, would take one novemvigintillion (that's 10^90) years. That's about 80 quinvigintillion times the age of the universe, or in engineering terms, you might want to find a better solution. Especially given that we only had two weeks to work on the assignment.
All the algorithms, all the data structures, all the linear algebra I had questioned so much in the past started to prove themselves when constructing this solution. Using those once-despised algorithms and data structures, those 80 quinvigintillion universes became 8 seconds. The why started to come into focus. The same class, however, also made clear how much we do not know, and how important it is that we see the why of a broad range of subjects.
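A toy illustration of that shift (my own example here, not the course assignment): in the classic N-Queens puzzle, a brute-force search checks every arrangement of queens, while a backtracking search abandons any partial placement the moment a conflict appears, so entire subtrees of the search space are never explored at all.

```python
# Illustrative sketch only: N-Queens stands in for any search problem
# where pruning collapses an astronomical number of candidates.
from itertools import permutations

def brute_force(n):
    """Check every permutation of queen columns: n! full candidates."""
    count = 0
    for cols in permutations(range(n)):  # cols[i] = column of queen in row i
        # A permutation already guarantees distinct columns; check diagonals.
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count

def backtrack(n):
    """Place queens row by row, pruning any placement that conflicts."""
    count = 0
    def place(cols):
        nonlocal count
        row = len(cols)
        if row == n:
            count += 1
            return
        for col in range(n):
            # Keep this column only if it clashes with no earlier queen.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(cols + [col])
    place([])
    return count

print(brute_force(8), backtrack(8))  # both count the 92 solutions
```

The two functions find the same 92 solutions for the 8×8 board, but the brute-force version inspects all 40,320 permutations while the backtracking version never descends past a doomed placement, which is why the same idea, scaled up, turns universes of candidates into seconds.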
Our professor asked us the question: "What rights do AI entities have?" I sat there, dumbfounded, realizing this was the first time I had been asked a question in an engineering class to which there is no answer. What happens when we create the first sentient intelligence? Is it afforded human rights? Could it be exploited for labor without restriction, or would there be limits? What if our goals and its goals are not aligned? These are not just computer science problems. They are legal, military, economic, and ethical problems, to name a few. AI already makes art, vacuums our homes, drives our cars, determines stock trades, and pulls the trigger on drone strikes.
You can’t just brute force philosophy. We are facing wicked problems that demand as much broad understanding as deep focus. These problems are here—and this time we don’t have 7.5 million years to wait for the answer to the Ultimate Question of Life, the Universe and Everything.
Kieran Waigel is a McConnell Scholar in the class of 2022. He is studying computer engineering, computer science, and political science at the University of Louisville.
