Nine in ten campus administrators now use AI. Two-thirds of institutions have adopted it. And yet, more than half cite data privacy as a barrier, a third aren’t sure whether AI is even in their strategic plan, and training demand has topped the resource wish list for three years running. The tools are everywhere. The infrastructure to use them well is not.
The adoption story is over. The integration story is just beginning.
For the past few years, the defining question around AI in higher education has been: are faculty and staff using it? That question now has a clear answer. According to Ellucian’s third annual AI in Higher Education Survey of 779 administrators across more than 300 institutions, personal AI use has crossed 91% — up from 84% the year before and effectively at saturation.
The more consequential number is institutional: adoption at the organization-wide level jumped from 49% in 2024 to 66% in 2025. That 17-point surge in a single year is not incremental. It signals that AI has crossed from novelty into operational expectation.
But here’s the problem. Widespread adoption does not mean strategic integration. It means a lot of people are using tools, in a lot of different ways, with varying levels of governance, oversight and institutional support. That’s a fragile foundation for the next phase.
Most institutions are still building the plane while flying it.
Only 43% of survey respondents say AI is included in their institution’s strategic plan. Another 27% say it is not. And nearly a third are unsure — which in itself tells a story. If a third of administrators don’t know whether AI is a strategic priority at their institution, it probably isn’t being communicated as one.
Budget is following a similar pattern. Nearly two-thirds of executive leaders report that their institution allocates funds for AI — but most of that funding lives inside broader technology or innovation budgets rather than dedicated AI lines. Intentional investment and informal investment are not the same thing, and institutions that conflate them tend to be surprised when accountability questions arise.
This is not a criticism. It’s a description of where most institutions genuinely are: ahead of where they were, behind where they need to be and operating without the structures that will allow them to sustain momentum responsibly.
Not all departments are in the same place, and that matters for how leaders approach this.
The survey identifies three distinct adoption tiers across departments and the gaps between them are significant.
Information Technology (81%), Data & Analytics (75%) and Executive Leadership (73%) are the “AI Leaders” — furthest along, most comfortable, most likely to be driving institutional strategy. These are also the departments where AI delivers the clearest efficiency gains with the most manageable risk profiles.
Academic & Student Affairs, Alumni Relations & Advancement and Business & Operations sit in a middle tier — “Emerging Adopters” with roughly 60% active use and growing momentum. These departments see real potential but are still figuring out where AI fits within their specific workflows.
Financial Aid and Marketing, Admissions & Enrollment are the “Cautious Navigators” at 43% and 47% respectively. Nearly a third of Financial Aid professionals report no current plans to adopt AI. That caution isn’t irrational: these are departments where AI-influenced decisions carry direct consequences for student access and financial outcomes. Trust must be earned before it can be extended.
The implication for institutional leaders: AI adoption is not a single rollout. It’s a portfolio of change management challenges, each with different risk tolerances, readiness levels and definitions of what “good” looks like.
Trust is the variable that decides where AI goes next.
The survey’s data on confidence and skepticism tells a clear story about where the field is headed. Administrators are most confident in AI applications that protect systems or generate predictive insights: cybersecurity threat detection tops the list of high-value use cases among executive leaders (55% rating it “very valuable”), followed by revenue and expense forecasting (46%) and identifying at-risk students for early intervention (42%).
Skepticism concentrates where AI touches high-stakes, human-centered decisions. The share of respondents who believe AI does more good than harm in student learning dropped 10 percentage points in a single year, from 55% to 45%. Teaching and instruction is near the bottom of the confidence list. Financial aid chatbots and AI-recommended program redesign — despite being actively used — rank among the lowest-value use cases in leadership’s estimation.
The pattern is consistent: administrators are comfortable with AI as an analytical layer. They are not comfortable with AI as a decision-maker. That distinction needs to shape how institutions design their governance structures, not just their tool selections.
The barriers aren’t shrinking fast enough.
Data security and privacy have topped the barrier list for two consecutive years: 56% at the institutional level, 61% personally. That number has not meaningfully moved. For institutions processing student financial records, health information, academic history and admissions data, this is not a perception problem. It is a real infrastructure and vendor-vetting challenge that requires sustained attention.
Two newer concerns are also accelerating. Environmental impact — barely mentioned in prior years — is now cited by more than one in five respondents as a top-three barrier. And fears of role displacement have doubled year over year, from 7% to 14%. These concerns are not going to disappear as AI becomes more capable. Institutions that ignore them will find the cultural resistance to adoption growing even as the tools improve.
Meanwhile, training remains the most-requested resource for the third consecutive year. Demand is highest in Financial Aid, where 83% of respondents say they need training on AI technology and its applications. That is the department with the lowest adoption rate and the highest stakes. The relationship between those two facts is not a coincidence.
What responsible integration actually requires.
The survey’s recommendations are practical and, if implemented seriously, would represent a meaningful departure from how most institutions have approached this so far.
Build AI literacy through structured practice, not memos. Assign teams to use approved tools on real projects weekly, then debrief together on what worked, what didn’t and what ethical questions surfaced. Policy documents don’t build fluency. Repetition and reflection do.
Start with low-risk, high-visibility use cases. Chatbots for financial aid inquiries. Automated communications in advancement. Predictive analytics for enrollment planning. These applications deliver measurable value, create institutional confidence and generate the proof points that make larger investments easier to justify.
Create sandboxed environments for exploration. Leaders cannot imagine use cases for tools they have never touched. Giving faculty and staff protected space to experiment — without the pressure of production deployment or career consequence — is how institutions develop the internal imagination they need.
Implement human-in-the-loop governance before expanding high-stakes use. In admissions, financial aid and academic decision-making, the question is not whether humans should remain in the loop. They must. The question is how that oversight is structured, documented and auditable. Institutions that build those systems now will be better positioned when scrutiny arrives — and it will.
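For technical teams asked to build that oversight, the core requirement is simple to state: an AI recommendation should never take effect on its own, and every human sign-off should leave an auditable trace. A minimal sketch of that pattern in Python — all names, fields and the log file are hypothetical illustrations, not a prescribed implementation:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIRecommendation:
    """An AI-generated suggestion; it has no effect until a human reviews it."""
    case_id: str
    suggestion: str
    model_version: str  # recorded so decisions can be traced back to a model

@dataclass
class ReviewRecord:
    """Auditable trace of the human decision on one AI recommendation."""
    recommendation: AIRecommendation
    reviewer: str
    decision: str    # "approved", "overridden" or "escalated"
    rationale: str
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def review(rec: AIRecommendation, reviewer: str,
           decision: str, rationale: str) -> ReviewRecord:
    """A named human signs off before any action is taken, and the
    decision is appended to a write-once audit log."""
    if decision not in {"approved", "overridden", "escalated"}:
        raise ValueError(f"unknown decision: {decision}")
    record = ReviewRecord(rec, reviewer, decision, rationale)
    # Append-only log: every AI-influenced decision leaves a documented trace.
    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record
```

The design choice worth noting is that the recommendation and the decision are separate records: the system can surface AI output freely, but only the `review` step — with a reviewer name and rationale attached — produces anything actionable, which is what makes the oversight documentable and auditable later.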
The window for thoughtful integration is open. It won’t stay that way.
Institutions that treat AI integration as a technology project will build tools. Institutions that treat it as a cultural and operational transformation will build capacity. The difference between those two outcomes will be visible within the next two to three years — in staff confidence, in student outcomes, in the ability to respond when something goes wrong.
The data is clear about where higher education is: past adoption, short of integration and under pressure to close the gap responsibly. The institutions that do will have made a deliberate choice to invest in the conditions for it — training, governance, trust-building and accountability — not just the tools.
Download the full report: Artificial Intelligence in Higher Education: From Widespread Adoption to Strategic Integration