• MountingSuspicion@reddthat.com · 7 hours ago

    That was an interesting read, but I am not convinced that they understand the “problem” they are trying to address, which would also explain the vagueness of the title. Clearly they think something needs to change because of AI, but they have not explained why, defined what, or laid out what a positive change would even look like. It makes the whole thing feel arbitrary.

    At one point he suggests that telling people who are taking the exam after you exactly what is on it is not cheating, though his students seemed to think it is. If telling people is encouraged, then those taking the test first just have a harder task, and their results are more likely to reflect their actual knowledge of the subject. At that point, just give everyone the exam questions early. I had a professor who would hand out a study guide and pull exam questions exclusively from it, with only the numbers changed. It was basically homework, but you were guaranteed to have seen everything on the exam already, and that was a great way to 1) ensure people fully understood the scope of the test and 2) relieve stress about testing. If they don’t see a problem with only certain people knowing the exact questions and answers ahead of time, then I’m not sure they understand what cheating is.

    Unrelated, but they also blame Outlook for why young people hate email. I had to use Outlook for a while and it does suck, but my hatred for email is unrelated.

    I’m glad they are experimenting with different testing methods, but without knowing more about the class itself, this comes off as though it is just a filler class in a degree program and the test doesn’t really matter because the students’ understanding of the subject doesn’t really matter. In another blog post he refers to the article, which was making the rounds a while ago, about how AI failed at running a vending machine. In that post he laments that we’re going to have to “prepare for that stupid world” where AI is everywhere. If you think we can still fight that, I don’t think accepting AI as a suitable exam tool is the way to do it, even if you make students acknowledge hallucinations. At that point you’re normalizing it. 2/60 is actually not bad for using AI, and as he said, there will always be those students, but the blog makes me question the content of the class more than anything else.

  • earthworm@sh.itjust.works · 8 hours ago

    My main takeaway is that I will keep this method next year. I believe it confronts students with their own use of chatbots, and I also learn how they use them. I’m delighted to read their thought processes through the stream of consciousness.

    Like every generation of students, there are good students, bad students and very brilliant students. It will always be that way; people evolve (I was, myself, not a very good student). Chatbots don’t change anything in that regard. As with every new technology, smart young people are very critical and, by definition, smart about how they use it.

    The problem is not the young generation. The problem is the older generation destroying critical infrastructure out of fear of missing out on the new shiny thing from big corp’s marketing department.

  • Riskable@programming.dev · 7 hours ago

    This is super interesting. I think academia is going to need to clearly divide “learning” into two categories:

    • What you need to memorize.
    • What you need to understand.

    If you’re being tested on how well you memorized something, using AI to answer questions is cheating.

    If you’re being tested on how well you understand something, using AI during an exam isn’t going to help you much unless it’s something that can be understood very quickly, in which case, why are you bothering to test for that knowledge at all?

    If a student has an hour to answer ten questions about a complex topic, and they can somehow understand it well enough by asking AI about it, then either it wasn’t worth teaching or that student is wasting their time in school; they clearly learn better on their own.