How Colleges Are Using Artificial Intelligence To Improve Enrollment And Retention

Artificial intelligence (AI) has gradually been adopted by colleges and universities as a tool to automate a range of tasks efficiently. Chatbots can answer students' questions about lesson planning or check in on their mental health. AI-generated emails can remind students of important deadlines, prompting them to register for classes, turn in assignments, and pay their fees on time. And in one particularly controversial application, AI-based software is increasingly able to detect plagiarism in student work.

A professor at Georgia Tech even used AI to build a virtual teaching assistant named Jill Watson. It turns out that “Jill” gets very positive student reviews.

Higher education is progressing from its first forays into digital transformation, which included automating daily tasks, digitizing workflows, developing more complex datasets, and creating dashboards to enhance its analytics. Today, institutions don’t just use technology to do the same things better. They use AI to do better things.

University leaders have learned that AI can do more than issue routine prompts and generate helpful tips. They’re beginning to use technology to address some of their biggest and most persistent challenges — including fundamental problems like increasing enrollment, improving student retention, and allocating financial aid.

And as AI expands into these core university practices, new concerns are being raised about threats to student privacy and the technology's vulnerability to systematic bias.

According to Arijit Sengupta, founder of Aible, a San Francisco-based AI company, colleges and universities are beginning to catch up with other industries like banking and healthcare by using AI to influence performance metrics.

Sengupta told me in a recent interview that he now has between five and 10 higher education clients who are using AI to make progress on key outcomes, such as optimizing alumni donor acquisition.

Sengupta knows that university leaders have often been disappointed with the results of previous AI projects, and he agrees that in many cases AI is a waste of time and money because it is not designed around the tangible goals and specific outcomes that matter most to the institution.

With this in mind, Sengupta offers his customers the following guarantee: If Aible's AI models and prescribed interventions do not generate value in the first 30 days, the customer will not be charged. He told me that many college officials believe they need to understand AI algorithms and models before they can apply them, but according to Sengupta, they're dead wrong. "Our approach is to teach the AI to 'speak human' and not the other way around."

Once an AI model sorts through a large amount of complex data and recognizes previously hidden patterns, the focus needs to shift to "what do we do about it" – in other words, who do we need to target, with what intervention, and when. This is where colleges get bogged down, says Sengupta. "Their computing experts are looking for the perfect algorithm instead of focusing on how best to change their practices to take advantage of machine learning in the form of predictions and recommendations."

For example, a private, medium-sized university wanted to increase the percentage of applicants who eventually enrolled. Thousands of dollars had been spent buying student lists, and hundreds of hours had been spent calling the students on those lists. But the end result was disappointing – less than 10% of the applicants ever officially enrolled.

Rather than bombarding every name on the list, Aible was able to create a model that led the university to target students much more precisely. It identified a subset of applicants who—based on their demographics, income levels, and family history of college attendance—were most likely to respond to timely faculty phone calls. It also determined the amount of financial assistance that would be required to influence their enrollment decision.

It then advised the university to call those students personally and pair the calls with tailored financial aid offers. The time required for this intervention – from identifying and collecting the relevant data to developing the algorithm and recommending the intervention strategy – was about three weeks. Preliminary results indicate that the university is likely to see an approximately 15% increase in its enrollment rate.
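Aible hasn't disclosed how its models work under the hood, but the kind of applicant scoring described above can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not Aible's pipeline: the file names, feature columns (household income, first-generation status, distance from campus), and the 0.6 probability cutoff are all hypothetical assumptions.

```python
# Minimal sketch of an enrollment-propensity model.
# This is NOT Aible's actual system; file names, feature columns,
# and the 0.6 threshold are hypothetical choices for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: one row per past applicant,
# with enrolled = 1 if the applicant ultimately matriculated.
history = pd.read_csv("past_applicants.csv")
features = ["household_income", "first_gen_flag", "distance_miles"]
model = LogisticRegression(max_iter=1000)
model.fit(history[features], history["enrolled"])

# Score this year's pool and surface a prioritized call list:
# applicants whose predicted enrollment probability clears the cutoff.
pool = pd.read_csv("current_applicants.csv")
pool["p_enroll"] = model.predict_proba(pool[features])[:, 1]
call_list = pool[pool["p_enroll"] >= 0.6].sort_values("p_enroll", ascending=False)
print(call_list[["applicant_id", "p_enroll"]].head(10))
```

In practice, scores like these could be joined with a second model estimating how much financial assistance would tip each decision, as the example above describes.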

When Nova Southeastern University in Ft. Lauderdale, Fla., wanted to use its data to improve student retention, it turned to an Aible solution to identify the students most likely to drop out. This helped the university's Center for Academic and Student Achievement target and prioritize its retention efforts for the most at-risk students.

While most retention efforts are reactive – kicking in only after a red flag shows a student is in academic trouble – an effective AI strategy should help a college target curriculum changes, step up its advising, and offer support services well before a student begins to experience problems.
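To make that contrast concrete, a proactive early-warning score can be recomputed weekly from signals that appear long before failing grades do. The sketch below is one illustrative way to build such a flag; the feature names (learning-management-system logins, late assignments) and the outreach list size are hypothetical, and this is not a description of Nova Southeastern's or Aible's actual method.

```python
# Illustrative weekly early-warning pass for retention risk.
# Feature names, file names, and the list size are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Train on past cohorts, where left_within_year = 1 marks attrition.
history = pd.read_csv("past_cohorts.csv")
signals = ["lms_logins_per_week", "late_assignments", "midterm_gpa"]
clf = GradientBoostingClassifier()
clf.fit(history[signals], history["left_within_year"])

# Score a snapshot of current students and rank them by risk.
current = pd.read_csv("current_students.csv")
current["risk"] = clf.predict_proba(current[signals])[:, 1]

# Hand the advising center a prioritized outreach list instead of
# waiting for a failing grade to trigger a reactive intervention.
outreach = current.sort_values("risk", ascending=False).head(50)
print(outreach[["student_id", "risk"]])
```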

One thing I discovered while researching this article is that universities are often reluctant to admit they are using AI for purposes like this and insist on remaining anonymous in press reports. That concern didn’t surprise Sengupta, who believes it stems from a belief that using AI increases the risk of someone’s privacy being violated.

One way to ensure that individual privacy is protected is to store all data on the university's servers rather than on a vendor's. Another is to withhold information about groups smaller than 25, so that no inferences can be drawn about individuals.
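That second safeguard is a form of small-cell suppression, and it is straightforward to enforce before any report leaves the database. A minimal sketch, assuming a hypothetical table of student records and hypothetical grouping columns:

```python
# Small-cell suppression: drop every group with fewer than K members
# before reporting, so aggregates can't be traced back to individuals.
# K = 25 follows the rule described above; the input file and the
# grouping columns are hypothetical.
import pandas as pd

K = 25

def suppress_small_cells(df: pd.DataFrame, group_cols: list) -> pd.DataFrame:
    """Keep only rows that belong to groups of size >= K."""
    sizes = df.groupby(group_cols)[group_cols[0]].transform("size")
    return df[sizes >= K]

students = pd.read_csv("student_records.csv")
safe = suppress_small_cells(students, ["major", "zip_code"])
report = safe.groupby(["major", "zip_code"]).size().rename("n_students")
print(report)
```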

Hernan Londono, senior strategist for higher education at Dell Technologies, believes privacy concerns aren't the only reason universities may be reluctant to use AI, or to admit it when they do. "AI-based interventions can be biased because different student populations may be excluded from the data differently," he told me.

AI not only reflects human biases, it can amplify them when unrepresentative data is fed into the algorithms that are then used to make important decisions. For example, Amazon stopped using a hiring algorithm after realizing that it favored applicants based on words like "executed" or "captured," which were more common on men's resumes than on women's.

As significant as the privacy and bias concerns are, higher education will inevitably increase its reliance on AI. It's too powerful a tool to just leave on the shelf. Its uses will continue to grow, and with the right controls and precautions, it can be deployed to improve college performance while encouraging student success.
