AI Did My Homework with Dr. Lyra Stein, Dr. Nathalie Moon, and Stephanie Silberstein – College Bound Mentor Podcast #28

Welcome to the College Bound Mentor podcast! Each episode, hear trends, case studies, and interviews with students who have gone through it all.

This is Episode #28 and you’ll hear us talk AI in college with Dr. Lyra Stein, Dr. Nathalie Moon, and Stephanie Silberstein. Listen to the episode on Apple Podcasts, Spotify, and your other favorite podcast spots – follow and leave a 5-star review if you’re enjoying the show!

  • Episode Summary & Player
  • Show Notes
  • Learn more about the College Bound Mentor podcast
  • Transcript

College Bound Mentor Podcast Episode #28: AI Did My Homework with Dr. Lyra Stein, Dr. Nathalie Moon, and Stephanie Silberstein

AI is taking over everything. College is no different. What is allowed – and should be allowed – when using AI in college classes? In this episode, we welcome on special guests Dr. Lyra Stein of Rutgers University, Dr. Nathalie Moon of the University of Toronto, and Stephanie Silberstein of Rutgers University, New Jersey City University, Fairleigh Dickinson University, Seton Hall University, and Kean University. They reveal how much students are using AI in college, the pros & cons of relying on AI for coursework, how grading & plagiarism are changing in the world of AI, how much professors are using or resisting AI, and how AI proficiency will help students throughout their careers. This episode covers everything from artificial intelligence to Broadway. Here’s a small sample of what you will hear in this episode:

  • How are professors using AI in their lessons & assignments?
  • What is the psychological impact of using AI?
  • How is AI use being regulated in colleges?
  • Should college students be allowed to use ChatGPT?
  • What are some Myths & Truths about AI in college?

Connect with Lyra, Nathalie, and Stephanie on LinkedIn, and Subscribe to College Bound Mentor on your favorite podcast platform and learn more at CollegeBoundMentor.com

Check out the episode and show notes below for much more detail.

Show Notes

  • AI Did My Homework with Dr. Lyra Stein, Dr. Nathalie Moon, and Stephanie Silberstein
    • [0:19] Welcome to College Bound Mentor
    • [0:28] Lisa Bleich, Abby Power, Stefanie Forman
    • [1:08] Connect with Lyra, Nathalie, and Stephanie on LinkedIn, and Subscribe to College Bound Mentor on your favorite podcast platform and learn more at CollegeBoundMentor.com
    • [3:18] How much are college students using AI (Artificial Intelligence)?
    • [9:05] What are the pros & cons of college students using AI?
    • [12:50] How are professors using AI in their lessons & assignments?
    • [18:25] Are professors using AI for grading assignments?
    • [19:55] What is the psychological impact of using AI?
    • [22:55] How is AI use being regulated in colleges?
    • [25:58] Should college students be allowed to use ChatGPT?
    • [34:15] Are professors resistant to AI?
    • [36:05] The Studio on Apple TV
    • [42:34] How will AI help college students in their careers?
    • [43:34] What are some Myths & Truths about AI in college?
    • [51:18] Maybe Happy Ending on Broadway
    • [55:03] Connect with Lyra, Nathalie, and Stephanie on LinkedIn, and Subscribe to College Bound Mentor on your favorite podcast platform and learn more at CollegeBoundMentor.com
    • Theme Song: “Happy Optimistic Americana” by BDKSonic

What is the College Bound Mentor podcast?

Lisa, Abby, and Stefanie know college. They also know students. With over 30 years combined experience mentoring young people, they’ll show you why understanding yourself is the key to finding the right college. Each episode, hear trends, case studies, and interviews with students who have gone through it all – giving you valuable insight to survive the college application process and beyond. Hosted by Lisa Bleich, Abby Power, and Stefanie Forman, Partners of College Bound Mentor.

Transcript

Please note: this transcript is not 100% accurate.

Dr. Lyra Stein 0:05
At this point, I assume they’re all using it for everything.

Lisa Bleich 0:19
Hey, CBMers, welcome back to College Bound Mentor, where we help you survive the college application process and beyond. We're your co-hosts, Lisa and Abby, and we're missing Stefanie, who is at the French Open right now, so we'll get her next week. On today's episode, we're going to be discussing AI and its use in college. The impetus for this topic was that one of our essay specialists, who is also a former English teacher and now a parent of teenagers and tweens, wanted to get a better understanding of what is actually allowed when using AI. It seems like it's all very confusing, both to students and to parents. And so I thought that's a great topic for an episode. I decided to reach out to some of my clients that I knew worked at Rutgers, and that's where I found Dr. Lyra Stein. Then I reached out to my son-in-law, and I found Dr. Nathalie Moon at the University of Toronto. And Stephanie is actually someone that I know as a client. So it all worked out, and I'm really happy to have all of these wonderful people here. Just to give you some background: Dr. Lyra Stein is an associate teaching professor in the Department of Psychology at Rutgers University in New Brunswick. She has received numerous accolades, including the 2024 Hypothesis Social Learning Innovator Award in Social Sciences and the 2025 Co-Intelligence Exploration Teaching Fellowship. Dr. Stein has played a leading role in faculty development through service on curriculum committees, mentoring programs, and national teaching organizations, and she's an advocate for inclusive, research-informed pedagogy. Nathalie Moon is an assistant professor, teaching stream, in the Department of Statistical Sciences at the University of Toronto. She completed her undergraduate studies in mathematics and statistics at Queen's University and earned her master's and PhD in biostatistics from the University of Waterloo. Her recent work focuses on pedagogy, with an emphasis on cultivating a supportive learning environment, empowering students to take an active role in their education, and designing opportunities for experiential learning and authentic assessment. And last but not least, Stephanie Silberstein has been an adjunct professor of theater acting at Kean University, Fairleigh Dickinson University, and New Jersey City University. She is a theater director for Seton Hall University and has taught musical theater at the Paper Mill Playhouse for over a decade. She holds degrees from Cornell (a BS in Communications), UT Austin (an MFA in Acting), and NYU (an MA in Theater and English Education). She lives in New Jersey with her husband, four children, and one-year-old puppy. So welcome, everybody. We have an amazing lineup here. So thank you guys for joining us.

Dr. Lyra Stein 2:46
Thank you. Yeah.

Lisa Bleich 2:48
So just to get started: there's been so much talk in the press, and just talking to people, about how AI is taking over. It's rampant in the classroom, students are using it to write their papers, kids are using it for pretty much all of their assignments. So I just wanted to get a pulse: how much of that do you feel is true? How much are you seeing it, and how has that shifted in the last couple of years? And I imagine for you it's probably been on your radar for longer, because it connects to what you're teaching and how you're using it. So who wants to kick us off on that?

Dr. Lyra Stein 3:17
I frequently survey my students, and I know they're using it very frequently. They are somewhat hesitant to admit it, because they think that it's considered cheating, and I make it known from the first day of my courses that they can use it, because they'll be using it after they graduate. They're going to have to know how to use it, and in my classes they have to work on critically evaluating the output, and I think that's what they're going to need after they graduate. They see it now, at least in my experience, as something they can use to cheat, and I tell them: to be competitive after you graduate, you're going to have to show your employer what you can do over and above AI. You're going to have to think about how accurate the output is and let your employer know that they need you to evaluate it, not just copy and paste, as they may want to do in their classes.

Lisa Bleich 4:37
Interesting. How about you, Nathalie? How are you using it? Or, excuse me, how are students using it?

Dr. Nathalie Moon 4:45
using it. There’s no question. I came back to the classroom a year and four months ago. So January 2024, and that was my first term teaching after it was released. I pulled my students as well. And yes, they are using it. They were. Using it interestingly, I asked two questions. One, I asked them what they what kind of tasks they used it for themselves. And I gave them a bunch of choices, and I asked them what they thought their classmates were using it for, and for some tasks, no difference. Other tasks, definite difference. So I agree with what you were saying, Lyra, that I suspect there’s a social desirability bias in how you report those things. That was a year ago. I think there’s less of that now, less taboo, less stigma about using it. My students are using it more than ever, including for things that they really don’t need to use it, things that you know Google, like regular old school, Google or Wikipedia would be a better source. I teach statistics, so you know, if you’re looking for, you know, a formula or something, you really don’t need to ask chatgpt for the formula. You could just look it up in a static source, and it’s going to be quicker and equally or more accurate. But it’s where they go first for so many questions, it’s hard to convince them otherwise.

Lisa Bleich 6:02
That’s so interesting that you say that because, well, first of all, it’s, I mean, AI is now embedded into Google, so when you put that’s the thing that comes up, right? That’s the first thing. But I actually had my first client who asked me a question about how her son should focus on his summer. And, you know, there were three choices, write essays, do the essay T, or do an EMT. And then she said, Well, chat GPT said, should focus on his, on his essay T. And I was like, oh. And I had given her a different response. And I said, Well, I’m glad to see that I have a different take than ChatGPT. So I just thought that was the first time a parent has said to me, Oh, well, ChatGPT said, Now, Stephanie, you teach in the humanities, so how do you see it as as different, and how do you see it as impacting your students?

Stephanie Silberstein 6:49
Yeah, I teach an acting class, which is so focused on class participation and that hands-on learning, and less on writing or finding the right answer. It's more about discussion and just getting up and executing the work, performing different monologues and scenes and things like that. There is a writing component to the class that I teach, but it's based on seeing an actual production and giving your own personal reflections. So it was interesting, because I did get a couple of papers where the first paragraphs or concluding paragraphs, certain phrases, felt very familiar and repetitive. But at the end of the day, I question: what are the goals of the class? What am I trying to get out of this assignment? And where is AI helping them or hurting them in that learning process? Not to sound cliché, but it's like, you're just hurting yourself when you use that, as opposed to relying on yourself and your own ideas. At the universities where I teach, the class I teach is an elective, and I don't often teach majors, so I get a wide variety of backgrounds that contribute to the class. It's interesting to see the different levels of writing ability. Sometimes I'm reading the papers and I'm craving AI, like, oh, please run this through some sort of program to help you flesh out those ideas, or, if you're not going to go to the writing center on campus that's available to you, use those resources. So sometimes they're not using AI when there's part of me that wishes they would. So it's interesting; it's for good or for evil, that kind of thing, trying to figure out where the line is.

Lisa Bleich 8:56
So it’s

interesting that you guys have all just, obviously accepted that’s what it is, and that’s what’s going forward. But what do you, what do you think’s being lost for students, you know, what do you think’s being gained? Like, what do you? How do you think it’s impacting critical thinking, the ability to actually do something and wrestle with words or ideas that you don’t just have something spit out at you.

Stephanie Silberstein 9:24
Well, I think somebody said something earlier along the lines of, they're using it when they don't need to. So there's that fear that it will become a crutch, where they don't trust their own instincts and their own thoughts, and their go-to is AI. Yeah, that's the danger.

Dr. Nathalie Moon 9:39
I think one thing that I wrestle with, and I know a lot of my colleagues in statistics do as well: we're figuring this out at the same time as the students are. Maybe they're a little ahead of us. This is a new technology, and we're certainly not more expert in these tools than the students are. But what we have that's different from the students is life experience and learning experience before AI; we learned these things in the more traditional way, in the before times, until, what, two years ago. I'm sure things will change even more in the future, as we'll have kids growing up with this. But even for our students, when I'm teaching them statistics, how do we get them to internalize these foundational skills when right now they can go to AI for anything they're doing at home? I have less of a problem with people using AI once they have some core foundational knowledge, but our students don't have that in some areas. At least in university, you're learning loads of new areas you've never studied before. And I don't know how to do that, and that will become more and more of a problem, I think, across more disciplines as students, even from elementary school, grow up with this. It's hard to convince the students that it's important to do it on their own; I don't know that it's resonating with them. Are you finding the same, Lyra?

Dr. Lyra Stein 11:08
I have really thought about this a lot in the last, what, two and a half years. So I created assignments where first they have to analyze what we discussed in class, and then they have to use ChatGPT and compare it. What they are realizing is that ChatGPT isn't always correct, and if they take the output and just copy it into an assignment, that doesn't mean it's correct; they automatically assume it is. I'm hoping these assignments will show them that they really have to think about the output, that they have to grapple with it. They have to say, okay, my professor said one thing and ChatGPT is saying another. How do I critically think about this to determine the correct answer?

Abby Power 12:17
That makes sense. Actually, interestingly, the University of Michigan this past year, for admissions to their honors program, has an essay, and it's usually a pretty challenging essay. This year, their prompt was: here's an essay written by ChatGPT, critique it.

Lisa Bleich 12:34
Well, they had to write the essay; they were told to write their essay using ChatGPT, and then they had to critique it. So it was an interesting exercise for them to see what they would have done differently. So yeah, that's a good point, Abby.

Abby Power 12:48
So along the lines of what Lyra was just talking about, I'd love to hear from Nathalie and Stephanie how you are using AI in the classroom. How are you integrating it into your lessons, or your teaching, or your preparation for your lessons?

Stephanie Silberstein 13:04
Well, as I’m sitting here, I’m I’m reflecting on the class discussion component of my class, and realize, and I’ve always assumed it was just maybe maturity, or, you know, just just different obstacles. But I’m, I’m wondering now if maybe it’s because they are so relying on, on AI to formulate those thoughts. It, it can be like pulling teeth sometimes, to have enriching discussions about, you know, theater, you know. And I try, I try to integrate things from pop culture and, you know, things that would appeal to them and get them talking. But yes, there’s, there’s a certain lack of confidence when it’s just them out there speaking, you know, on their own, for themselves and their own thoughts. Yeah, I’m in real time figuring out different ways to maybe have chat GBT lead the discussion, maybe, or give those prompts, you know, back to them, as opposed to the other way around. And and in terms of myself, like, I yeah, I don’t use it. I don’t necessarily know the different ways you know it can be helpful. But in terms of gathering new resources, you know, like, I tend to utilize the same scenes and monologs and clips from recordings of live theater or film or television that demonstrates certain concepts that I’m teaching. And it might be, it might be helpful to have chatgpt Help me, or AI in general, help me source out new material, you know, more updated material. I want to test that out a little bit

Abby Power 14:48
Fun, I like that. We're problem-solving on the fly.

Stephanie Silberstein 14:51
Yeah, thank you. Thank you.

Dr. Nathalie Moon 14:53
Natalie. I mean, I’ve experimented with it to generate ideas or examples or sometimes on the side. Assignments or exams, even I have to come up with a description of a study and then ask them questions about that, and it’s hard to come up with those on your own. So chatgpt is very good at that. Now I typically almost never can. I actually just use it directly. I have to modify it to make it match with the rest and fit my course and fit my voice and fit the learning objectives, but it’s still helpful. I haven’t used it a ton in the classroom, but the one place I have used it that I thought was very successful, very productive and fun as well, is in a fourth year Statistical Consulting class in preparing the students for their first consulting meeting. So the students were going to be working in groups with a real client, not no money, but but a person with a real problem that they would work with over a course of months, and you know, in the past, we’ve we’ve had conversations about how those meetings go, you should ask them about their project and ask clarifying questions. And the client may not know a lot about statistics, so you need to, sort of sometimes work around, like ask questions around the topic, to try and figure out yourself what the objectives are and how the data was collected. And that’s fine, and we had those conversations, but this year, I wrote a prompt a paragraph for them to copy into their LLM of choice. Basically, the LLM was playing the role of a client who didn’t know much about statistics, and my students were the statistical consultants, and I so the prompt sort of set that up for the LLM, and then they started the conversation. They’re like, I can tell me about your project. And I was walking around as they were having these conversations, they got super into it. They loved it. They really cared about these fake projects. It was funny, but some of the questions they were asking were were terrible, and it was but it was all fake, so it was a lot easier to address that and take it up. I don’t know if any of you who are have background in statistics, but some of one group in particular asked the LLM, how many degrees of freedom do you have, which, if you’re talking to a non statistician, they’re not going to be able to answer the question, and even if they answer the question, it’s your job as a statistician to figure that out, not just believe them when they tell you a number. I mean, it was funny to you know, raise that and have a conversation about it, and immediately they realized the problem. But having that with that, that interaction with a with a fake person made it a lot more lighthearted and much easier to have those those conversations before a real person became involved. So for things like that, I think it can be incredibly useful as a safe space to develop those skills that they well clearly need to develop and practice.

Abby Power 17:38
I really agree with that. It's funny: my youngest son just graduated from college, and he spent the last year interviewing for jobs, including some consulting jobs. When I was in graduate school, I would practice with my friends for the case interviews for consulting. He was doing it with ChatGPT, and he was recording himself and sending it to me so I could see how he was doing. In his head he was talking to a real person, I mean, normal and nervous, as if it was a real person, but there was a lot less risk. I think it was really useful, and it learned from him; it clearly was asking better questions as it went on. It was super interesting. So have any of you used AI for grading assignments? I'm just curious.

Lisa Bleich 18:25
Or how do you grade an assignment when you're incorporating it? Have you had to change your rubrics for grading things?

Dr. Lyra Stein 18:33
As I said before, I added a portion where they have to enter the prompts into ChatGPT, so if I were just using ChatGPT to grade, it would be grading the output that they actually pasted in there. What I found, and I teach a course in psychopathology, is that it tends to recommend cognitive behavioral therapy for every disorder, and students copy that right in. The assignment was to compare what we discussed in class to the output, and they just said, oh yeah, it was great, not realizing that we didn't say cognitive behavioral therapy was the therapy of choice for all of these disorders. When they had points deducted, I'm really hoping they learned that they can't rely on it as much as they used to. So I find it counterproductive to use it for grading when they are using it and they have to distinguish the differences between course content and the output.

Lisa Bleich 19:49
it’s interesting from a psychological perspective. I mean, I know it’s so so new, right? But do you do you anticipate, as a psychologist, that there will be Psych. Psychological impact. If you feel like you’re having this relationship with with AI, which many people feel like they’re having these conversations with real people,

Dr. Lyra Stein 20:08
it’s really interesting. Sometimes I host extra credit movies where we watch a movie and we apply psychological concepts, and the movie I chose was the matrix, which is about AI taking over. They were so candid. They would say things like, ChatGPT should get my degree, because that’s how I earned all of my grades. And what they did say they thought it would be a good adjunct if they couldn’t afford to see a real therapist. But then other students were saying it’s not ethical because they haven’t been trained as licensed clinicians have. So we had an interesting discussion about that, and they mentioned that their friends, some of their friends, would rather talk to ChatGPT, then go out, meet people in person. Some people said they use it. They upload all of their text messages to psychoanalyze people and tell, you know, ask them how they should respond to others. So really, they’re using it for all of their big life decisions, and they are counting on it to be correct and to give them the correct advice. That’s scary.

Stephanie Silberstein 21:33
I was telling Lisa that I’m dabbling in in stand up comedy, and one of my lines that I used was that I’m basically co parenting, you know, along with my husband, with Alexa and Siri, and that on Sunday for Mother’s Day, my my teenage years are basically going to bring them breakfast in bed. You know, it’s true, you know, I use, you know, it’s, it’s a form of AI, you know, all of these, you know, Alexis, Siri, that kind of thing, as my whole house lights up. No, I’m kidding. They’re not in this room with me. Their input or, you know, how do you address an issue or something? And I haven’t used them for that. It’s more of just like typing out a text. You know, voice to text. That’s how I use those, those features. But yeah, if they, if they have answers to problem solving or issues, you know, how do you what should you say in terms of empathetic listening and not lecturing? But, you know, being there as a listener, you know that that kind of stuff I can I can imagine. Yeah, the this generation is definitely going to use that as a resource more than actual human beings. It

Lisa Bleich 22:46
It sounds like all of you, well, Stephanie, it's a little bit different in what you're doing, have incorporated AI just like any other tool. What sorts of conversations are you having with your colleagues? I don't know if you're talking across the institution or just within your departments. And is there any blanket regulation, anything that the colleges are saying: this is what we're going to use, this is what we're not going to do, this is okay, this is not okay? I would imagine in the writing centers it's probably less acceptable, because you're being taught how to write. But what are you guys seeing?

Dr. Nathalie Moon 23:25
The University of Toronto is very large and a lot of things are decentralized. One thing that, as I understand it, is a policy across the board for AI: we're not allowed to use it for grading, we're not allowed to submit student work to third-party software, and another thing we're not allowed to do is run student work through third-party software to try and detect whether they used AI to generate it or not.

Lisa Bleich 23:53
not. You’re not allowed to, Oh, interesting, okay,

Dr. Nathalie Moon 23:57
For two reasons, as I understand it. One is the concern about intellectual property and copyright: if we're submitting student work to anything, even something like Turnitin, they need to know about it, and there needs to be some understanding of what is happening with that material. And also, those checkers, as I understand it, are not very good. The accuracy is not great, on both sides. If you use a tool like that and say, well, it passed, therefore they didn't use AI, that's not true. And on the flip side, if it says this is 100% AI-generated, that doesn't mean it is. I don't know exactly how the tools work; I just know that they're not at a level that the University of Toronto, as an institution, is comfortable using. Now, I don't know what people are doing on the ground, but we're certainly not able to prosecute cases, if you will, on the basis of that, whereas for something like Turnitin, if there's evidence that someone plagiarized, then that is evidence that can be used. But ChatGPT detectors are not allowed. In terms of what we can do in our classes, the policy of the university is that each instructor can decide what is and isn't allowed within their course, and even within the course there could be differences across assignments, which I imagine can be very confusing for students.

Lisa Bleich 25:27
Right. You brought up a good point, and I'd like to hear from Lyra and Stephanie about plagiarism, because it feels like it's become quite fuzzy. It used to be pretty clear: you plagiarized, you lifted this off of something and you didn't cite it properly. But now, if you're introducing what ChatGPT said, and everyone's doing something similar, how do you know, and how do you enforce it if somebody's plagiarized? Have those rules changed?

Dr. Lyra Stein 25:52
I tell them that they can use it for everything. I find that restricting it is counterproductive. I have some colleagues who are going back to the old days where they just do everything in class, and I'm like, this is not going to help them once they graduate. I would rather simulate what they may encounter when they graduate. So I give them free rein to use it. I teach them how to use it, but I do say you need to cite it, and I show them how to cite it, and I say the university takes plagiarism very seriously. If you use it to change the flow of your writing, I say you don't need a citation, but if you use it to generate ideas, that is when you need to cite it. And I say, I'm not going to deduct points if you use it; just make sure that you cite it.

Stephanie Silberstein 26:53
it. I’m looking at the King University. There’s a faculty hub where they give you resources and how to address or how to learn more about AI and yes, as Natalie, I think you were saying before, it’s just it is very hard to prove definitively. I think there have been lawsuits involved, you know, just so and a lot of times, you know, Professor will just kind of not want to get involved kind of thing. I think with plagiarism, yeah, it is. It is a matter of giving credit to a human author, and so that’s interesting about, you know, citing that AI was used in the, you know, in the making of this paragraph, and that kind of thing. But as opposed to plagiarism, there start to be questions of, you know, almost the ethical lines and equity lines, you know how, how different is, is AI to a tutor, or, you know, you know how you get support and assistance in writing papers, or, you know, doing any, any assignment, you know, where, why is it okay for someone that can afford a tutor, or has that time, or, you know, those have the given circumstances to allow for, for that, as opposed to, well, my tutor is AI, and so it’s very hard to Start deciphering between, you know why that’s okay, but this isn’t okay. And so I think at the end of the day, professors are just kind of back off and, you know, not get too involved in it.

Dr. Lyra Stein 28:30
So yeah, that is why I teach all of my students how to use it. They can all use it as a tutor, and they all use the free version. So there’s no inequity in terms of being able to hire a tutor or not.

Abby Power 28:46
This is a bit anecdotal, but my son was at USC, the University of Southern California, and last year there was a very strict AI policy. They were using anti-ChatGPT software, and it was, I think, like Nathalie said, wrong a lot of the time. There was a lot of friction, a lot of back and forth; it was very inefficient. This year, I don't know what the official policy is, but my son's professors said they're not using it anymore. I will say he had one philosophy professor who was very well loved, and he begged the kids not to use it. He was just like, this is about your thinking; you took this class, this is literally about how you think, please, please, please don't use it. And he was very well respected; he was kind of like a pal. What was super interesting was that my son, in an attempt to please this guy, really tried to be original in his writing by making connections to other things they'd covered in class that would have been very difficult to prompt ChatGPT to do. So he almost developed a new writing style that he was really psyched about. So yeah, I think this is all evolving, and there will definitely be bad things that come out of it. But I think, echoing that, the kids trying to figure out how to make their material their own is kind of an interesting outcome. I think it's very fluid right now.

Dr. Nathalie Moon 30:09
That’s fantastic, but to me, that’s because he really was motivated to do it. He really cared. And the relationship with that instructor, that professor, is what got him there. But totally that’s what I’m trying to think about. Like when I think about these things, I I’m trying to think about how I can align my students motivations in the direction that I think is productive for them. And it sounds like that that professor was able to do that for your son. It’d be interesting to see if his classmates No. Not all of them, perhaps a chunk, a bigger chunk, than others. But there probably are still people who didn’t have that kind of relationship and maybe didn’t care to the same extent, yeah, but it sounds like something clicked for your son in that context, and that’s fantastic. And he was happy

Abby Power 30:59
And he was happy in his economics classes, you know. There were plenty, like his statistics class, his econometrics class, where he used it, and then in his mind he had to understand it, because the exams were in class.

Stephanie Silberstein 31:11
A lot of content. I mean, obviously time is a factor, but doing as much in class as possible, writing and discussion in class as opposed to homework, could change what homework looks like, and working together in groups, that kind of thing. So it may change curricula, what the assignments look like, what the syllabus looks like, in order to keep as much in class as possible. Which could be good or bad; it depends. It's unfortunate if you have to use class time to let kids write independently so you can monitor it, rather than just relying on the honor code or just trusting the students, but it's possible.

Dr. Lyra Stein 32:04
Yeah, I will say I redesigned my entire course. I tell them at the beginning, you won't have to memorize anything, because out in the real world, when you're at your job, you won't have to memorize the way I did when I was in college, when I didn't have access to as much information. They take all of their exams online, but it's all application of what I said in class, and they learn that ChatGPT may tell you something different. Now, I teach very large classes, anywhere from 250 to 400, and I always have an aspect of group discussion, and then I ask them to rate everyone in their group. What's very interesting is that students will tell me, hey, this student just used ChatGPT to do all of their work, and it was wrong, and we had to correct it, so I don't think this student should get credit. I was amazed that they were willing to say that. And there is this attitude of, Professor Stein tells you everything you need to know; if you need to use ChatGPT, then something is wrong. And sometimes they do communicate that to their classmates.

Lisa Bleich 33:25
Interesting. And did you change the grading for that particular student?

Dr. Lyra Stein 33:29
I did. I tell them, you know, if you're not participating, your classmates will tell me, and I look at the anonymous feedback. You can try to argue against what they say, but I want to see that you are thinking originally, and that if you do use ChatGPT, you are evaluating it in terms of what we discussed in class.

Lisa Bleich 33:56
I think that’s really good. Well, it sounds like you have really it sounds like Lyra and Natalie and Natalie and Stephanie are getting there, but have really, like had to rethink, it’s like a completely new way of approaching teaching. But I would imagine that not all professors are in the same, same process, you know, along the same journey, in the same way that you are. And so are you finding that some of your colleagues are kicking and screaming they don’t want to change. Are you finding it based on how long they’ve been there? Is it or like? And how do you move everybody along?

Stephanie Silberstein 34:28
Well, I know, I’m I’m here as the, the humanities representative, I guess. And when you, when you look at it, the the word human is in humanities, and it’s one of those things where you know, you know it, when you when you see it, or when you hear it, you know it’s just something you feel, you know instinctually in your bones, that kind of thing and so and that goes for when you have discussions. And my classes are so hands on and based in participation. You know, they, I have so many students that are, you know, calculating their their attendance. And I’m just like, or this was excused, this was an excused absence versus unexcused. And I’m just like, I don’t necessarily not that. I don’t care about what’s going on and if it’s an excused absence, but you know, I need you physically in the room participating, because that’s, that’s where the learning is going to take place. You know, this isn’t something you really do have to be here for. It’s not a notes and things like that. And then in performances, yeah, you can, you can run your lines, you know, they’re all consumed about memorizing their lines. And Sure you can. You can work with an AI generated scene partner, but it is very different, like when you’re connected to the scene partner, that real human being, and you make that eye contact, all of those things you start appreciating by taking that kind of class. And I was watching the studio. I don’t know if anybody’s watching that show on Apple TV? I saw a couple episodes. Yeah, there was, did you see the episode where they had to do a rewrite of a movie, and they were going to use AI to do it, and they were at a conference, or, you know, sort of a theater situation, you know, where they were, there was an audience, and people started, the screenwriters started standing up and saying, we hear you’re using AI to rewrite the movie and let you know they’re, you know, basically going on strike. And you know that in terms of job replacement for Humanities based fields, you know, where creativity, yeah, the source, you know, really should come from a human, you know, and those thoughts. But at the same time, I my son, was showing me how, you know, things work, and he’s like, you know, ChatGPT, write an episode of Seinfeld, you know, in or write a Shakespearean play in the style of Seinfeld, and they, you know, in three seconds, it was done, and it was, it’s really quite scary, but I hope that it’s one of those visceral reactions you have where humanity, you know, can show through.

Lisa Bleich 37:16
No, I think that’s a really good point. And I think for Lyra and Natalie, like, how do you get a feel for the student’s authentic voice? I know you’re teaching really big classes. Lyra, I don’t know if you’re teaching as big a classes, but like, are there when you’re reading everything? If, do you feel like it’s all the same, it melts into each other, or are you there’s some things that really stand out and you get you feel like you hear the authentic voice of the student? A

Dr. Nathalie Moon 37:40
A really good question. I teach big classes as well. My seminar class is small; that one had 28 students this past year. But my other classes are much, much larger, so I don't know my students' individual voices, which makes some things more challenging. I think my approach to some of my courses this past term was a bit naive, but everything's changing so quickly, so I'm not beating myself up over it; I will do something differently next year. For example, in my seminar course, a smaller course where students had to do presentations, the last presentation of the term was a reflective presentation, and almost all of them came in with scripts. As they were reading their scripts, I felt like many of them had used AI. I don't know, but I wondered, which makes me sad, because even if some of them didn't use it, I was still sitting there wondering. I think in the future, when I have presentations, I'm not going to allow presenter notes, reading from a script. Some of them had a phone or an iPad and they were just reading. So maybe I'll go back to, you can have a cue card or something, but not full text. Similarly, in my big first-year course they do projects, and there were a number of groups where I wondered if they used AI, because they were mentioning a lot of things that we didn't cover in the course. It's like, well, okay, maybe you learned it in another course, maybe you looked it up yourself, or maybe you asked ChatGPT to propose a statistical analysis to answer a particular question, and it was really difficult to know which it was, and obviously the outcome would be very different in terms of what grade would be justified in the two contexts. So next year, I think I will be a lot more restrictive: you can only use things we've talked about in class, possibly with a disclaimer of, if you want to do something we didn't talk about in class, talk to me first. I don't want to restrict the students who really want to go beyond the course; we're only doing basic things, so if they want to push beyond that, I don't want to stop them. But there were so many submissions where I had that suspicion, and that doesn't feel good. I think I need to be more explicit in my instructions next year. And it sounds like you've done that, Lyra, and that has helped.

Dr. Lyra Stein 40:10
Yeah. At this point, I just assume they're all using it for everything.

Dr. Nathalie Moon 40:15
I thought I was assuming that, but now I'm assuming it even more.

Dr. Lyra Stein 40:20
What I always say is, all of my assignments first ask, what did we talk about in class, and then you can use ChatGPT and compare it. And in person, I use an in-class response system called Top Hat. I'll ask questions in real time, and I say, okay, I want you to answer this question, and ChatGPT will not give you the correct answer. Still, you have about a quarter of the class using it, and they learn early on that they don't get credit. They have to listen to what I say. So I don't go in saying, well, I want to detect if they're using it. I go in saying they're all using it. I know they're all using it.

Lisa Bleich 41:07
From a psychological standpoint, you just know human nature: they're all using it, right?

Dr. Lyra Stein 41:12
And they know that I know. So they'll talk to me about things, and they say, yeah, I'm really afraid that I'm not going to have the occupational opportunities because of AI. That's when I say, you have to work on it. You have to show your employer that you can critically analyze the output, because that is what they're going to look for. I still have some of them who just copy and paste, and they don't get credit. But others are like, yeah, this is something I'm going to have to bring up in an interview: this is why you should hire me.

Lisa Bleich 41:52
That’s so interesting. So are you almost reframing what’s necessary for employment and for careers. And are you talking to employers to get a sense of well, yeah, because I just read an article that said, you know, if AI can’t do it, then you know, we’re going to, we’re going to fire everybody who if AI can do the job. And there’s also this expectation that people can do more, because now they’ve got AI, so it’s more productive. So how are you bringing that together to train the because are you teaching students, not? You’re not in the PhD program, or you’re teaching in the undergraduate level, right? So you’re not teaching at this idea undergraduate, the undergraduate level. So how are you helping them think through that for careers and how they can use it? Because that seems to be, as you’re saying, the most important thing is, how do you use AI and add value.

Dr. Lyra Stein 42:41
They are very anxious about that, and they want to know what they need to do to be competitive, to land the job of their dreams. And I tell them: yes, you can. I'm going to teach you how to use AI. I'm going to teach you how to enter the correct prompts, but you have to show me that you're thinking critically about the output, and I really think that's the future of what we need to teach students. Obviously it's personal skills and group work, but it's also: you have this technology, and I don't want to prohibit you from using it, because it's very useful, but you have to show your ability above what the AI can do.

Lisa Bleich 43:29
Let’s end with some Myths and Truths About AI. This is what we always like to have a little segment. So are there any myths or truths about AI that you want to share?

Dr. Lyra Stein 43:40
I will say AI is very biased. As a psychologist, I have used it extensively just to see what it can do. I have the paid version of ChatGPT, so it knows me very well, and all of its output is biased to make me feel good about myself. Occasionally I have to say, okay, be totally objective, and when it is, it will give me a completely different output. I think students have to realize this; they have to understand what to enter to get an objective output.

Lisa Bleich 44:23
And were you always an early adopter, Lyra? Were you always someone who...

Dr. Lyra Stein 44:27
I am fascinated by technology, and I was a neuroscientist beforehand, so in my mind I'm thinking, can this replicate the neural networks in humans? That's always going through my mind. It's not to the point where it can, but we've had these types of discussions in classes, and some of the students are really afraid. They say, this is terrifying, I'm not sure what to do with this once I graduate, but for now it's something I can use to get through college.

Stephanie Silberstein 44:59
I don’t know if. This is a truth or a myth, but it’s interesting that the three different majors or subject areas that each of us are talking about here because they’re, they’re exactly the the areas that I that I always say are most important for, you know, getting that job, because at the end of the day, people want to work with people, they want to be around and understand each other and can communicate and and listen. And then in terms of statistics, I’m always saying I don’t think I’ve ever taken a statistics class. And I said that is one class I feel like we all need. Because, you know, the word narrative gets thrown around all the time, and that narrative is, can totally be manipulated based on, you know, statistics, you know, I, I joke about, oh, you know, my, my, my daughter won first place, and it was like, well, there was only one person in the category, and so understanding all those contexts, and, you know, The narrative of, you know, how you present information, combined with your personality and your interpersonal communication skills like these are the things that are going to set you apart from the AI aspect, you know, all the technology, because, yeah, I don’t, I don’t want to collaborate and work with, you know, with a computer, 24/7,

Lisa Bleich 46:22
Even if it tells you everything you want to hear and flatters you?

Stephanie Silberstein 46:26
Yeah, maybe. Let me get back to you on that, actually. But no, I mean, I think about babysitters I've had. I don't know how they did in school, necessarily, but I know that, oh, you know what, if I were running a company, I would hire you, because you get things done, you're efficient, you're pleasant to be around, your personality, all those skills. So I think all those things combined play a significant role in all those career aspects.

Dr. Nathalie Moon 47:00
I guess one thing that I think a lot of people forget, including myself sometimes, when you're using these tools: they're so good a lot of the time. Even when they're wrong, they still give the illusion of being really good at conversation. But they're not people. We talk to them, we interact with them as if they're people, we say please and thank you, it's like a real conversation, but these are probabilistic models. They're not deterministic. If you put the same prompt in on different days or on different accounts, you'll get different answers. I certainly don't understand how those models fully work in the background, and I'm sure that's true for most or all users. But this is not a tutor. I know we can use it like a tutor in some ways, but it's not a person. It's not sentient. It doesn't know if things are right or wrong. It doesn't care if things are right or wrong. It's a probabilistic model.

Stephanie Silberstein 47:58
Wait, they're listening, so be careful what you're saying.

Dr. Nathalie Moon 48:02
Right. It's looking for patterns of things that occur often, looking in its database for things that occur together, words that occur together, and that's what it's going to spit back at you. And if you don't understand what is going into the training set, which we don't know, then that's going to have a massive influence on what you get out of it. As things evolve, I've heard people on both sides: are these tools going to get better, or plateau? We don't know, because I think there have been instances of it feeding on itself, being trained on itself, so maybe the quality could go down. But I think some of these companies are trying to filter out the bad inputs, if you will, from the training set. How do they do that? I don't know. But they're not people, and they're never going to be people, even though it's so easy to forget that as you're using them.

Abby Power 48:52
Yeah, the technologists I know don't like when I say this, because they say it's not the right language, but ChatGPT will lie to you. ChatGPT will give you a link, and you click on it, and it's a dead link. It's a hallucination. And it will say, no, no, that link works. And you say, no, it doesn't. It gaslights you.

Dr. Nathalie Moon 49:15
but it doesn’t know anything. It doesn’t know anything. Knowing is not the right verb to use about what it’s doing. It’s giving us output that is shockingly often accurate. And then, yes, of course, it’s not always accurate, but it’s remarkable how often it is accurate. I agree that it’s not always accurate. Of course, I agree, but I’m not sure that’s necessarily always the best strategy to convince our students not to use it, because when it keeps getting better, like, I don’t know if that falls flat sometimes, if we like the strawberry, how many R’s in strawberry thing? Like, I don’t think it makes that mistake as much anymore. So if you just that as your example, they’re like, cool. That was a problem a few months ago, but it’s better now. So we’re good, yeah, so and then you have to find a new example. Find a new example. But it’ll fix those.

Stephanie Silberstein 50:02
It’s one step ahead. Yeah, it is, it is. So

Dr. Nathalie Moon 50:05
I don’t think that’s the best strategy. I don’t know. I don’t have the answers, but I don’t think that’s, yeah, it to, like, find those gotchas, because they’re

Lisa Bleich 50:15
they’re not there. That’s so fascinating. It seems like it’s very hard to be a professor right now, in the sense that there’s so much that you have to keep being on top of, and it changes.

Stephanie Silberstein 50:23
And it’s chicken or the egg. It’s like, is it because of, you know, like, what is the cause of how these discussions are going, or how these papers are written? You know, it’s and

Dr. Nathalie Moon 50:34
And you've got COVID in the mix as well, and everything that came with that.

Dr. Lyra Stein 50:40
That's a whole different thing. I have found life is much easier when I don't try to detect whether they're using it, tell them not to use it, and hold in-class exams. I tell them that I want to simulate what you need in the workplace, which is communication and critical thinking. So I say, I'm not going to monitor whether you use generative AI, because that's not what I was trained to do. I'm trained to help you think critically, and we're going to use it to help you.

Stephanie Silberstein 51:12
I think there’s a Broadway show now where the robots fall in love. Oh, yeah. Great reviews. No, I yeah, yes, happy maybe happy endings, yeah, I heard it was very good.

Dr. Lyra Stein 51:27
I have to tell you, some of my students have admitted to wanting an AI significant other instead of a real person. What is going to happen in the future?

Lisa Bleich 51:43
Well, it’s like that movie, wasn’t

Stephanie Silberstein 51:46
Yeah, yeah. I was thinking about that.

Dr. Nathalie Moon 51:51
Just wait till they have kids and they have to get up in the middle of the night.

Stephanie Silberstein 51:59
I know!

Dr. Nathalie Moon 52:02
If they get there, that would be cool. No, we're not there yet. But you said it must be challenging to be an educator. Yes, and I think it's also incredibly challenging for the students, navigating this and looking ahead, and it changes so fast. Whatever is true today might not be true in a few months. I've also heard people talk about this time as a huge opportunity for us. Who knows where the future of education, of higher education, is going to be? I don't know, but we're at a transition point where, if we engage with it and play a part, then maybe we can have an influence on the direction it goes in. But once it starts rolling, then maybe that'll be harder.

Lisa Bleich 52:46
Harder to right, harder to maneuver or reverse. Yeah, no, I think that's very true.

Dr. Nathalie Moon 52:50
Not that we have the answers, but that just sort of ramps up the pressure, right? This is a critical time, we'd better get it right, and we don't know what we're doing.

Lisa Bleich 52:57
we’re doing. Yeah? No, I think that’s so true. Yeah,

Dr. Nathalie Moon 53:01
But talking about it, I think, is a big part of dealing with it, both to manage it individually and to figure out the best strategies. The people who are putting their heads in the sand, as you say, I think they're working harder and making themselves miserable, and it's not productive. So yeah, let's not do that.

Dr. Lyra Stein 53:19
They ask, how can I stop them from using it? And I'm like, you can't.

Stephanie Silberstein 53:27
Talk about the elephant in the room. It's there; it's here to stay.

Lisa Bleich 53:30
So we just have to get better at using it. All right, well, thank you guys so much. Really interesting. I learned a ton, and I feel like I need to up my game on AI. I did use it, actually, to rewrite a college review, and it did a pretty good job. But, you know, I had already written it, so it just kind of re-edited it, made it snappier. So that was good.

Dr. Nathalie Moon 53:53
Well, we have to use it to an extent, because the students are using it. If we don't use it, then we have no idea what the capabilities are. Whether you choose to use the version you got or not, I think there's still some value in playing around with it and seeing what it did.

Dr. Lyra Stein 54:11
Yeah, I run all of my assignments through it, and I ask, do you think this encourages critical thinking?

Dr. Nathalie Moon 54:20
Ooh, oh, that’s a good question. Yeah, it

Dr. Lyra Stein 54:24
It gives me some very valuable feedback.

Lisa Bleich 54:27
Yeah, and do you use the paid version of ChatGPT?

Dr. Lyra Stein 54:31
Yeah, I do for myself. But I had an extra credit assignment for personality psychology where I said, let your ChatGPT get to know you, and at the end of the semester see if it can assess your personality accurately. The students seemed to like it, and for the most part ChatGPT was correct and accurately assessed their scores on different traits.

Lisa Bleich 54:58
Well, thank you, CBMers, for tuning in. Thank you, Nathalie, Lyra, and Stephanie for such a fascinating conversation. It's always fun to talk to people from different areas and get different insights. We really learned a lot. So to catch more episodes of College Bound Mentor, make sure to follow or subscribe on your favorite podcast platform and tell a fellow parent or student about the podcast. To learn more, visit CollegeBoundMentor.com Until next time, you got this!
