COLUMBUS, OHIO — Artificial intelligence-based products and software for college admissions and operations are proliferating in the higher education world.
How to choose among them? Leaders can start by identifying a problem that actually needs an AI solution.
That is one of the core pieces of advice from a panel on deploying AI technology responsibly in college administration at the National Association for College Admission Counseling’s conference last week.
Jasmine Solomon, senior associate director of systems operations at New York University, described a “flooded marketplace” of AI products advertised for a range of higher ed functions, from tutoring systems to retention analytics to admissions chatbots.
“Define what your AI use case is, and then find the purpose-built tool for that,” Solomon said. “If you're using a general AI model or AI tool for an unintended purpose, your result is going to be poor.”
Asking why before you buy
It’s also worth considering whether AI is the right tool.
“How does AI solve this problem better? Because maybe your team or the tools that you already have can solve this problem,” Solomon said. “Maybe you don't need an AI tool for this.”
Experts on the panel pointed out that administrators also need to think about who will use the tool, its potential privacy pitfalls and its actual quality.
As Solomon put it, “Those built-in AI features — are they real? Are they on a future-release schedule, or is it here now? And if it’s here now, is it ready for prime time, or is it ‘here now, and we’re beta testing’?”
Other considerations in deploying AI include those related to ethics, compliance and employee contracts.
Institutions need to be mindful of workflows, staff roles, data storage, privacy and AI stipulations in collective bargaining contracts, said Becky Mulholland, director of first-year admission and operations at the University of Rhode Island.
“For those who are considering this, please, please, please make sure you're familiar with those aspects,” Mulholland said. “We've seen this not go well in some other spaces.”
On top of all that is the environmental impact of AI. One estimate found that AI-based search engines can use as much as 30 times more energy than traditional search. The technology also uses vast amounts of water to cool data centers.
Panelists had few definitive answers for resolving AI’s environmental problems at the institutional level.
“There's going to be a space for science to find some better solutions,” Mulholland said. “We're not there right now.”
Solomon pointed to the pervasiveness of AI tools already embedded in much of our digital technology and argued that untrained use could worsen their environmental impact.
“If they're prompting [AI] 10, 20 times just to get the answer they want, they've used far more energy than if they understood prompt engineering,” Solomon said.
Transparency is also important. At NYU, Solomon said the university was careful to ensure prospective students knew they were talking with AI when interacting with its chatbot — so much so that it named the tool “NYUAdmissionsBot” to make the bot’s virtual nature as explicit as possible.
“We wanted to inform them every step of the way that you were talking to AI when you were using this chatbot,” Solomon said.
‘You need time to test it’
After all the big questions are asked and answered and an AI solution is chosen, institutions still face the not-so-small task of rolling out the technology in a way that is effective in both the short and long term.
The rollout of NYU’s chatbot in spring 2024 took “many, many months,” according to Solomon. “If a vendor tells you, ‘We will be up in a week,’ multiply that by like a factor of 10. You need time to test it.” The extra time can ensure a feature is actually ready when it’s unveiled for use.
The upside to all that time and effort for something like an admissions chatbot, Solomon noted, is that the AI feature can be available around the clock to answer inquiries and can quickly address the most commonly asked questions that would otherwise flood the inboxes of admissions staff.
But even after a successful initial rollout of an AI tool or feature, operations staff aren’t done.
Solomon described a continuous cycle of developing key metrics of success, running controlled experiments with an AI product and carefully examining data from AI use, including having a human look over the robots’ shoulders. In NYU’s case, this meant reviewing responses the chatbot gave to inquiries from prospective students.
“AI is evolving rapidly. So every six months, you really do want to test again, because it will be different,” Solomon said. “We did find that as we moved forward, we could decrease the number of hard-coded responses and rely more on the generative. And that was because the AI got better, but also because our knowledge got better.”
Solomon recommended regular error checks and performance audits and warned against overreliance on AI.
“AI is not a rotisserie. You don't set it and forget it. It will burn it down,” she said. “It's changing too fast.”