by Tony Moore
In the realm of technology, artificial intelligence has emerged as a transformative force, revolutionizing industries and reshaping societal norms. From self-driving cars to personalized health care, AI’s impact is pervasive and undeniable. But what does this hold for the world of academia, where tradition and rigorous inquiry have long held sway?
Here we delve into the fascinating intersection of AI and various academic disciplines, exploring how this powerful technology is poised to reshape the way we teach, learn and conduct research. Through the perspectives of Dickinson professors, we’ll gain insights into the anticipated transformations in fields such as biology, psychology, literature and physics.
As we embark on this journey to understand the impact of AI on academia—through a single question—it’s important to approach the topic with an open mind. AI is not a replacement for human intelligence; rather, it is a powerful tool that can be used to augment human capabilities. By understanding the potential of AI, we can harness its power to improve the way we learn and create new knowledge.
Note: Yes, this intro was written by Bard (now known as Gemini), the large language model from Google AI. Prompt grunt work and editing by Tony Moore.
John MacCormick, Professor of Computer Science
I’m a computer scientist who has worked in AI, and I happen to be writing a book about AI right now. But I still feel hopelessly unqualified to speculate on AI in the future. To see why, let’s rewind 50 years.
Suppose that in 1973 I had been asked, "How will electronic calculators reshape your discipline?" Handheld electronic calculators were starting to become widely available in the 1970s, and an insightful person might have successfully predicted some of their effects: Will we still need to teach arithmetic in grade school? (Correct prediction: yes.) Should we continue teaching how to use a slide rule? (Correct prediction: no.) But correct predictions of that kind completely miss the larger point. Calculators were merely an incipient glimmer of the true revolution that arrived over the following decades: the widespread use of general-purpose computers. Any correct prediction about the effect of calculators is irrelevant compared with the effect of modern computers.
Which leads me to wonder: What if our present capabilities in AI are as feeble as a 1970s calculator, compared to a possible technological revolution that may sweep through our society over the next few decades? Personally, I’m optimistic. I already love computer science, and I think it’s only going to get better. Despite the challenges that exist on our planet, we are lucky enough to live at a time of immense promise and excitement. Revolutions can be fun. Let’s see where this one will take us.
Todd Arsenault '99, Associate Professor of Art
Artists are constantly responding to the world around them through their work. Engagement with emerging technologies has long been a source of interest for many artists, whether as a way to challenge conventional modes of making with new tools or to offer broader commentary on our relationship to technology. Since the advent of AI image generators, the technology has become a source of both curiosity and contention in the artistic community. As in other areas, there are worries that AI will threaten the livelihoods of creatives.
The onset of the digital age initially brought the misconception that meaningful artmaking would become largely digital, supplanting the use of physical materials. It was quickly found that while the computer was a useful tool and appropriate for certain types of making, it was not a replacement for the kind of discovery that takes place with physical materials such as paint, wood and clay. In many ways the digital revolution reinforced the magic of working with physical mediums and working with your hands.
While AI will likely have an impact on the future of our department, it is difficult to know what that might look like at this early juncture. AI offers interesting possibilities for pushing the creative process when used in a critical manner, while a less informed approach could lead to derivative works devoid of personal voice. A primary goal in our department is for students to develop a critical eye toward making that considers technical, historical and conceptual elements. Curiosity is an important part of this approach, and AI will likely become part of this larger conversation.
Lars English and David Jackson, Professors of Physics
AI can be used to explore predictions of existing theories that are mathematically intractable by traditional means. For example, systems with a very large number of particles often give rise to emergent behavior that may be impossible to predict or simulate, and AI might help uncover such behavior. Similarly, many areas of physics require massive quantities of data to be analyzed, and AI should ease the burden of finding patterns in such large data sets. There is skepticism as to whether AI will be able to operate on a higher level and discover new theoretical paradigms, on par with Einstein’s theory of relativity. While it cannot be categorically ruled out, the consensus is that such a level of creativity is not imminent. Meanwhile, ideas from physics are already aiding AI development.
It is more challenging to predict how AI will reshape our department. Currently, AI is not terribly good at solving physics problems, but it is only a matter of time before students will be able to use AI to do their homework. When this happens, our curriculum will need to change, and this might allow us to focus on teaching students higher-level thinking skills, leaving the more mundane tasks to AI programs. While such a change will have benefits, there are also potential drawbacks. As AI becomes more entrenched in the physics curriculum, there will undoubtedly be some skills that get lost along the way. The challenge will be to figure out which skills should be retained while making room for the new skills and techniques that will arise.
Ben Basile, Assistant Professor of Psychology
AI will surely change the way we teach and research psychology. But so did word processors and statistical analysis computer programs. Those didn’t kill the field. They only made tedious jobs easier so researchers and students could turn their efforts to loftier pursuits. In my research, I use AI for the little things that computers do well, like brainstorming counterarguments to anticipate in my papers or writing computer code to generate testing stimuli. (The code crashed, but my code always crashes the first time, too!) Five years ago, generating those stimuli would have taken a couple hours. Twenty years ago, it would have taken a couple undergraduates toiling for a week. That’s time we can use for tackling the meatier parts of science. Also, I felt a little like Geordi La Forge talking to the Enterprise’s computer. That’s pretty neat.
But what about the impending apocalypse of AI cheats? Students can already pay strangers on the internet to do their assignments (yes, I’ve caught one). Professors know this. A novel cheating option doesn’t change our best practices for encouraging honest, engaged work. For my assignments, ChatGPT produces a mix of passable writing, vacuous nonsense and fabricated citations. Someday, AI will write primo papers, but it’s not today.
Overall, psychology needn’t fear AI. Psychology didn’t end because students could analyze their statistics using computer programs. It just meant our stats classes shifted to teaching students how to use those programs as a tool to support their science education. The future of AI in psychology is likely similar. Professors will incorporate it into their labs and classrooms as yet another way to help students reach the field’s final frontiers.
Dick Forrester, Professor of Mathematics and Data Analytics
Artificial intelligence is rapidly transforming the field of data analytics and will undoubtedly have impacts on how we do things in the Department of Data Analytics. AI will continue to provide more sophisticated tools for such things as pattern recognition and predictive modeling and will automate many of the time-consuming and repetitive tasks that are currently performed by data scientists. To ensure our graduates are prepared for this evolving landscape, we will continue to adapt our curriculum to incorporate the latest AI technologies while staying true to the core tenets of a liberal arts education.
In the era of AI, data scientists with a liberal arts background will be absolutely essential. Their unique combination of technical skills, critical thinking, communication prowess, ethical awareness, creativity and broad-based knowledge ensures they will be able to adapt to new technologies as they emerge. These qualities not only empower them to ask insightful questions and derive meaningful insights from data but also enable them to comprehend the power and limitations of AI. This understanding positions liberal arts-trained individuals to better collaborate with AI and thus maintain relevance in a future shaped by it.
Ed Webb, Associate Professor of Political Science and International Studies
I work in two departments and many programs but will focus here on political science. Departments change slowly, and disciplines more slowly still—I don’t think it makes sense to predict that large language models and other forms of generative AI will “reshape” what we do in the short to medium term. Some kinds of AI show up in how we do our work already without us choosing it, built into software we use every day. Some of us might more actively choose to use emerging tools to speed up how we construct spreadsheet formulas or to generate elements for a classroom policy simulation, among other examples. For some there will be new research questions we could pursue, depending on our specialization.
But it’s important not to get caught up in the hype. Increased use of AI in war or disinformation may pose ethical as well as empirical puzzles, but these are generally not fundamentally new kinds of questions: technological advances typically generate new investigations and theories based on enduring concerns of who wins, who loses and how. Algorithmic tools and machine learning may be used to evade regulations, disguise new forms of labor exploitation and facilitate further concentration of wealth, along with doing more productive things in medicine and other fields. I hope political scientists and neighboring disciplines will be alert to those phenomena and provide the public and policymakers with sound analysis and the necessary critical tools.
Shamma A. Alam, Associate Professor of Economics
Predicting the long-term impact of various technological innovations is a challenging task. However, insights into short-term effects can be gleaned from historical trends. For instance, when the internet first gained widespread popularity in the US, there was speculation that it might reduce the demand for education. Contrary to this expectation, the integration of the internet empowered professors to enhance their courses by leveraging easy access to information. Similarly, during the proliferation of social media in the 2000s, many thought it might have a transformative effect. Instead, those expectations gave way to a surge in disinformation, underscoring the continued significance of higher education.
In the realm of Economics, I cannot think of a notable instance of a major technological innovation having a significant detrimental impact on the field. Drawing parallels with the internet, my perspective is that AI will likely enhance faculty productivity in the short term. Nationwide, faculty members are already incorporating AI tools to improve various aspects of their courses, including syllabus development, grading rubrics, and assistance with research writing. Nevertheless, due to the rapid evolution of AI compared to past technological breakthroughs, accurately predicting its long-term effects on my discipline remains a challenging endeavor.
Chelsea Skalak, Assistant Professor of English
When ChatGPT was first introduced, there was an immediate panic among some English professors that the major faced an existential crisis that could eliminate its study altogether. This shouldn’t be a surprise: new technologies inspire new fears just as easily as new hopes. When the printing press was introduced, people wrote dire warnings about how the disappearance of the handwritten manuscript would mean the end of literature. Instead, literacy flourished, and so did English studies. I have no reason to believe that the introduction of AI will have a different result.
The immediate fears surrounding AI have been that it will make it impossible for professors to know if a student is turning in their own work, leading to a sharp drop in critical thinking and learning. Many professors have attempted to “AI-proof” their assignments, focusing on skills that AI currently performs poorly with, or even returning to handwritten assignments. I think this approach is a mistake, and ultimately a losing game with a constantly moving target. Instead, I’m encouraged by the new questions that AI has inspired us to ask of English studies: namely, what are the true goals of the study of English literature, and how are we achieving them? Why is it important that students possess these skills? What is the true value of the human in the humanities? We are involving our students in these discussions, making it clear where the goals of education are incompatible with AI and where they are not. The study of English literature requires us to read and think critically about texts across the spectrum of space and time, deeply engaging with each other’s humanity. As we continue to adapt to the existence of AI, I believe we will drop some modes of assessment, but more importantly, we will truly understand the value and centrality of human connection to English studies.
Xiaolu Wang, Assistant Professor of International Business & Management
[Disclosure: written by human, checked by AI.] The obvious impact of AI on businesses is that tasks across all skill levels—whether routine or contingent, mechanical or creative, physical or intellectual—are increasingly automated, and the human-AI interaction has become significantly easier and cheaper with the advent of large language models. The implication is twofold: While businesses need fewer workers for shorter times, the barrier for the average person to start their own business is becoming significantly lower. With affordable AI assistance throughout all stages of business development—idea generation, product/service design, accounting and financial management, marketing and sales—almost anyone can become an entrepreneur/micropreneur of some capital-light business. Considering that many of Gen Z are already doing multiple part-time jobs, it is conceivable that in the foreseeable future even those in full-time employment are likely to run a side business of their own.
Therefore, a major challenge/opportunity for business education is to pivot toward cultivating general entrepreneurship with a moral compass in a world with AI. Looking at the big picture, the business world, driven by AI, is becoming a silicon-, data- and algorithm-based, self-directing, closed system, with humans being just one component. Without heightened entrepreneurship, the agency of businesspeople will be reduced to completing multiple-choice tasks, with all the options provided by AI (which might even make the choice itself). This will limit the space of possibilities in the long run, and only the entrepreneurial spirit can ensure that the future business world remains the land of the brave and the free.
Holley Friedlander, Assistant Professor of Mathematics
Mathematicians are friends of technology. From the abacus to modern calculators, we have adapted to new technologies again and again. Mathematicians routinely use computational tools to experiment with objects, to look for patterns, and to formulate conjectures. In mathematics courses at Dickinson, these tools also enhance student learning. Students use animated, interactive applets to visualize abstract concepts and computer algebra software and graphing calculators to investigate real-world applications of course content.
Artificial Intelligence (AI) promises advances that will allow mathematicians to push the frontiers of research further than previously thought possible. And like other technological tools, AI will help mathematics learners better understand applications and limitations of current theory and conceptualize abstract ideas. AI will change how we do and how we learn mathematics.
Despite AI’s awesome potential, current large language models like ChatGPT have a major flaw: they often produce mathematical prose that sounds correct but in fact contains logical errors. Even in a future when we can “trust” AI, students will still need to gain competency in mathematical concepts to use it effectively. A calculator is of no use to a person who does not understand what it means to add or multiply; what good is an AI-generated solution that is incomprehensible to its user?
Just as calculators allow us to perform computations in seconds that would be impossible by hand, AI will grant efficiencies that will allow mathematicians to synthesize ideas at an unprecedented level. Change is coming, and mathematics is poised to adapt.
Heather Bedi, Associate Professor of Environmental Studies
Recent research highlights how artificial intelligence (AI) servers require large amounts of electricity. ChatGPT alone boasts 100 million users, placing considerable strain on the electricity supplying its servers. Depending on the type of electricity used to power the data centers, this could mean high greenhouse gas emissions from fossil fuel energy sources, including coal and oil. Data scientists have calculated that by 2027 AI data center energy consumption will be equivalent to the electricity Sweden uses annually. Despite this growing demand, projections of national greenhouse gas emissions do not yet factor in new AI electricity use. This is problematic, as a 2023 United Nations analysis projects that the globe will warm by at least 2.5 degrees Celsius by the end of this century due to rising greenhouse gas emissions, including carbon dioxide. Global energy-related carbon dioxide emissions rose in 2022, following pandemic-related declines in previous years, with increases in both oil- and coal-related emissions. The largest sectoral increase in 2022 came from growing global electricity demand, and that demand is likely to keep rising, driven in part by AI servers.
Published August 2, 2024