Using Generative AI in Education: Learning by Doing

Over the past two years, educators in higher education and K-12 have struggled to determine the best ways to integrate generative AI into their classrooms.

During that time, educators have debated, and continue to debate, two very different paths: find ways to discourage or prohibit student use, or open up opportunities for experimentation and integrate generative AI into the classroom.

That debate was the focus of a conversation between education and industry leaders during 1EdTech’s Learning Impact Conference. Panelists in the “Intersection of AI and Education” general session argued that avoiding generative AI simply puts off the inevitable, even as they acknowledged educator hesitancy. At least for now, they said, experimenting with and using AI is the only way to really understand it, and the best way to prepare students and educators for the future.

“From my experience, I think everyone has to experience this to learn,” said Tom Andriola, Vice Chancellor of IT and Data at the University of California, Irvine, and moderator of the discussion. “Then they have to experiment and get their own productivity hack, then they start to use it and it becomes a habit.”

“My daughter is a freshman in college, and professors are making AI part of the learning process,” said Ryan Lufkin, Vice President of Global Academic Strategy at Instructure. “Meanwhile, my son is a junior in high school and they aren’t using it. My fear is that because they aren’t teaching students to learn and use AI at that early level, they will hit college and it will be a free-for-all.”

“At Georgia Tech, we see some people not always thinking about the potential risks because everyone wants to move forward, so we have done some education about privacy and data protection,” said Steve Harmon, Executive Director, Center for 21st Century Universities and Associate Dean of Research in Lifetime Learning at Georgia Tech. “We do have an internal instance with an AI provider to mitigate the risk, and we do encourage more people to use AI because we see the benefits outweighing the risks.” 

“I think it’s important to reassure people that they aren’t alone in their anxiety around AI,” said Shawna Dark, Chief Academic Technology Officer and Assistant Vice Provost at UC Berkeley. “The first thing I hear people say is ‘I’m not an expert, but …’ We are all learning AI. Yes, there are experts in the field, but in terms of teaching and learning, we’re all just starting.” 

While the panelists encouraged experimentation, they also said some precautions should be taken, reminding the audience that existing laws and policies for security and data privacy already apply to AI.

“We have a set of policies that are broad enough to incorporate new technology,” said Harmon. “We are doing a lot of training in our center for teaching and learning and outlining some best practices and things they need to consider when using AI.”

“There are privacy and security policies that apply to AI,” said Dark. “Sometimes, people try to reinvent the wheel and ask for an AI policy, but I don’t think we’re there yet. For now, let’s take advantage of what already exists.”

Beyond privacy and security, the panelists emphasized that generative AI isn’t going away, and both educators and learners need to become AI literate so they are prepared for at least two major challenges they see with AI.

The first is combating biases.

“If you get a response from AI and see bias, you can go back and tease out a better answer, but if you aren’t actually looking for or aware of that bias in the first place, then you’re somewhat limited in what you can do,” said Lufkin.

“Creating that training data set is so important because it makes a huge difference in the outcome you get with AI,” said Harmon. “So, A, be aware of it, and B, examine the training data to see what biases are there.”

The second challenge is understanding what is real and what is fake in the age of AI.

“I am concerned about the AI literacy of faculty because they are narrowly focused on their disciplines, so there isn’t a lot of opportunity for them to learn, and we can’t force them, but we also know students want to learn about AI and use it,” said Dark.

“I do think introducing AI literacy earlier, for younger students, is important because these kids are digital natives,” said Lufkin. “They’re plugged in, so they need to understand the bias and recognize when things are AI generated, so they know when they’re seeing false results or reading AI-generated content.”

“As we talk about this, and talk about the durable skills learners will need in the future, maybe one of those skills is skepticism,” concluded Harmon.

To help institutions considering how to implement AI in the learning process, the 1EdTech community created an AI Preparedness Checklist. Supplier members may also fill out the TrustEd Apps Generative AI Data Rubric to answer some of the questions institutions ask during procurement.


About the Author

Sandra DeCastro

Sandra DeCastro has more than 30 years of leadership, strategy, marketing, and event management experience in the corporate and education technology sectors. As Vice President of Marketing and Higher Education Programs at 1EdTech, she provides strategic leadership to assist 1EdTech’s higher education members in the adoption and application of 1EdTech's standards to address the most pressing education challenges today and into the future.


Published on 2024-08-27
