10.19.2006

Reflective Tutoring for Immersive Simulation by Chad Lane, Mark Core, Dave Gomboc, Steve Solomon, Michael van Lent and Milton Rosenberg

Simplicity is the motto for my blog entry this week. I figured that if I was asking students to read three-page articles and write 100-word annotations, I was allowed to keep my reflection to the bare minimum.

I thought an article with six authors would have a lot to say about reflective tutoring and immersive simulation, but instead they kept things simple. The premise of the article (or is this a white paper?) is to introduce readers to an "intelligent tutoring system (ITS) that scaffolds reflection activities with the student, such as reviewing salient events from an exercise, discussing ways to improve, and asking questions of entities involved in the simulation." I assume that the tutoring system being discussed is part of a disaster response simulation used by the U.S. Army for training purposes.

The system collects information from the simulation on the user's performance to "implement various reflective activities" based on those interactions. The tutoring system not only identifies problems and suggests areas for improvement but, with explainable artificial intelligence (XAI), also enables students to interview the simulation's virtual actors and reflect on the interactions they had with them. If you want a better description of XAI, read "Building Explainable Artificial Intelligence Systems," which is written by the same authors and is the better article.

The topic of intelligent tutoring systems is a standard discussion in the discourse concerning education and AI, but it is also one of many areas for research, as this topics list shows. If you want a basic introduction to AI applications for education, you should read this article, and if you desire more research afterwards, go to this site.

I am not sure, but I may be following a disturbing pathway with my article selections. Last week I downgraded the role of the teacher to student, and this week I am outright replacing them with intelligent technologies. But, all kidding aside, are intelligent tutoring systems and pedagogical agents capable of enhancing teaching and learning? Is it possible for AI to go beyond drill-and-skill activities or reflective tutoring?

In his annotation, Adam compares the tutoring system described by the authors to ELIZA, Joseph Weizenbaum's simulated psychotherapist of the 1960s. I recommend reading this article for further information on ELIZA and other early intelligent systems. OK, so before this entry gets much longer, I am going to stop and simply ask people to interact with the following conversational and pedagogical agents as a start to explore and reflect on the possibilities afforded by AI.

ALICE | CATTY | ELLAZ | START | BILL AND DEBBIE | JABBERWACKY | CYBER IVAR
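Before you go poke at the bots above, it may help to see how little machinery an ELIZA-style agent actually needs. Here is a minimal, purely illustrative Python sketch (the rules and replies are my invention, not Weizenbaum's original script): each regex rule turns a fragment of the user's utterance into a canned reply template, and anything unmatched gets a stock prompt.

```python
import re

# Illustrative ELIZA-style rules: a regex paired with a reply template.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the reply for the first matching rule, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT
```

That really is the whole trick: no understanding, just surface pattern matching plus a pronoun-flipped echo, which is why these agents "tilt" so quickly once you leave their scripted territory.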

8 Comments:

At 11:06, Anonymous Anonymous said...

I am trying to put this in a context that I can understand. I don’t see it replacing the teacher anytime soon. From what I can see AI is still largely constrained to a predefined environment and the ability of the program to respond to questions/answers by parsing the syntax. I can see how it would be a fun addition to a language class or for reviewing work in constrained domains (geometry proofs?), but can’t see how it would help with higher order thinking and complex categories. At least, not as a direct component of the class. I recall that in the article Adam suggested, steps 1 & 2, analyzing the student’s work and creating the agenda, needed human intervention. I wonder how much work it would take to create a decent AI tutor for an online class, and when one would recover the costs if you were just using real people to interact with students.

So what are the authors trying to do? It looks like they have a simulation, and they are trying to replace what teachers and students might do after the simulation (reflection) with a computer. I keep asking why. Why not just post the transcript and have the class discuss it? Is this just pushing the boundaries to push the boundaries?

 
At 09:26, Blogger sgoss said...

FYI: The ITS from the article runs all three steps of the after-action review (AAR). There is no human intervention at any point, at least from the description of the authors.

 
At 13:37, Blogger Adam said...

Just to clarify on what I wrote in my annotation about human input:

1. analyze student's exercise: highlight important events from the exercise that are candidates for discussion.
2. create agenda: organize and prioritize the highlighted events.
3. prepare XAI: load exercise log, action representations, and natural language generation knowledge (details in [3]).

The first two steps roughly model what human instructors need to do to perform an AAR: judge the student's performance, make decisions about what merits discussion, and finally, decide how they might go about addressing these issues. Currently, steps 1 and 2 require human support, but we are working on automating these tasks as part of an in-game tutor that assesses turn-by-turn choices of the student. The resulting agenda is then passed to a planner and executor that conduct the dialogue...
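Read as a pipeline, those three steps could be sketched like this (a hypothetical Python sketch; the function names, event fields, and importance scoring are my invention, not the authors' implementation):

```python
def analyze_exercise(log):
    """Step 1: highlight events that are candidates for discussion."""
    return [event for event in log if event.get("salient")]

def create_agenda(candidates):
    """Step 2: organize and prioritize the highlighted events
    (here by an invented 'importance' score, highest first)."""
    return sorted(candidates, key=lambda e: e.get("importance", 0), reverse=True)

def prepare_xai(log):
    """Step 3: bundle the exercise log and action representations
    the explainable-AI module needs to answer the student's questions."""
    return {"log": log, "actions": [e["action"] for e in log if "action" in e]}

def run_aar(log):
    """Run all three steps; the agenda would then go to the dialogue planner."""
    agenda = create_agenda(analyze_exercise(log))
    xai = prepare_xai(log)
    return agenda, xai
```

The point of the sketch is just that steps 1 and 2 are judgment calls (what counts as salient? how important is it?), which is exactly why those are the steps that still need human support.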


My take is that the current form of the product does not choose important elements from the student's performance and prioritize them in the subsequent discussion. Conceivably, by setting key decision points in the scenario, the student's actions or choices at these key points could be extracted by the AI and, again conceivably, rated by the AI based on their effect on the outcome of the scenario.

I would not suggest that such a tool could replace classroom discussion, but in the context of the example -- military training away from human instructors, perhaps a soldier who needs to negotiate when he has had little experience with negotiation -- I can see it being useful.

In other venues...well, imagine a sales essentials quiz you take on your PDA. You could take a multiple-choice quiz and get feedback on whether your choices were the ones likely to lead to a successful sale, or you could go through a simulation. The content would be the same: given the customer's statement, do you say a, b, or c? At the end, the sum of your choices results in getting the sale or in getting the boot, and the AAR in this case could go point by point through your choices, leading you to reflect on why you made the ones you did. The user's intelligence is doing the real work; the AI (a 'soft' UI, arguably) is facilitating. The simulation might be more effective than the quiz.
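That point-by-point AAR for a branching quiz is simple enough to sketch. In this toy Python version (the scenario content, choice keys, and scoring are invented for illustration), each step records the user's pick against the "best" option, and the review replays every choice afterwards:

```python
# A toy branching sales quiz with an after-action review.
STEPS = [
    {"prompt": "Customer: 'Your price seems high.'",
     "choices": {"a": "Discount immediately",
                 "b": "Ask what they compare it to",
                 "c": "End the call"},
     "best": "b"},
    {"prompt": "Customer: 'We tried a rival product last year.'",
     "choices": {"a": "Criticize the rival",
                 "b": "Ask what worked and what didn't",
                 "c": "Change the subject"},
     "best": "b"},
]

def after_action_review(picks):
    """Return (sale_won, feedback), where feedback lists each prompt,
    the user's pick, and whether it matched the best choice."""
    feedback = []
    correct = 0
    for step, pick in zip(STEPS, picks):
        ok = pick == step["best"]
        correct += ok
        feedback.append((step["prompt"], pick, ok))
    return correct == len(STEPS), feedback
```

A real tutor would attach a "why" to each feedback entry, but even this skeleton shows how the reflection phase is just a replay of the decision log.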

I don't think anyone ever seriously contended that ELIZA was a true AI (she ain't no HAL), but I do recall the application leading to some heavy exchanges at parties (a high school friend had it on his home computer -- this was in 1982. Monitor in 4 shades of green, thermal printer -- Duane had a sweet setup). I would view immersive simulation as another media channel to get the content to the user. TV didn't replace instructors, neither did the web, and neither will this.

 
At 13:53, Blogger Adam said...

Oh - and Jabberwacky was just disturbing -- I don't like the looks of that guy.

 
At 23:35, Blogger Sandi said...

Like most educational technologies we have come across in this program, I think intelligent tutoring systems can be an effective means of learning when utilized for these reflection exercises.

I do not think it should be a total replacement for any branch of teaching and learning, but I do believe it could be a helpful tool in distance learning, since it can be distributed over the Internet. Also, because it is a computer program, it can facilitate recording, tracking, and analyzing data collected during the exercises through automation.

The ITS should have a well-defined, heuristically designed architecture (from what I vaguely remember of the undergrad AI class I took 10 years ago) that will allow the program to “increase its intelligence” as the student progresses through the program. To consider the individual differences of students of varying disciplines, backgrounds, cultures, etc., a prerequisite assessment might be a good way to give the program a means to collect, integrate, and learn the students’ perspectives, experiences, baseline knowledge, et al. prior to determining which approaches to take to tutoring and how the discourse should be analyzed.

Of course, the more advanced the application, the higher the costs to the program. And, like the consensus here, I doubt the program would be so intelligent as to be able to provide critical feedback on complex human thought. So, I would limit these reflective exercises to Level 1 and 2 evaluations.

 
At 08:48, Blogger Tushar said...

Talking of intelligent tutoring: a couple of weeks ago I posted an article, "Microsoft designs a school system." In that article, "Their laptops carry software that assesses how quickly they're learning the lesson. If they get it, they'll dive deeper into the subject. If not, they get remedial help."

This is very similar to what we are discussing here: a combination of reflective and formative evaluation techniques to make sure that learners are learning. Implementing this kind of technology in the real world is definitely a tall order (very expensive). Nowadays more schools/universities are switching to laptops and wireless technologies to enhance the learning environment, which was unheard of some years ago. I definitely believe that as years go by and technology becomes cheaper, intelligent (AI) tutoring will be a part of the learning community.

 
At 13:18, Blogger Splindarella said...

Is it just me or do other people also try to push the AI bots to see what it takes to make them tilt? In the cases of most of the links above, it didn't seem to take much. Even the bots that claimed to learn (like Jabberwacky) seemed to do nothing more than match the closest known phrase to whatever was said; the result was a strange series of non-sequiturs and conversational dead-ends.

This made me think a couple of things. First, would the very presence of an AI bot be so distracting (i.e., to someone like me who wants to see what it takes to fry its little brain more than I may want to reflect upon a learning experience) that learning may actually be decreased? Or would the novelty of frying AI brains simply wear off if they were commonly used, thus taking away that potential problem?

Second, and more importantly, I tend to feel along with many of you that AI just isn't where it needs to be today in order to be really useful. It seems like most of the systems either provide canned answers to pre-determined questions or do the random-phrase-match thing. The second method seems pretty useless from an educational perspective, so the first would probably be the way to go for reflective learning. However, students always surprise (me, at least) with curveball questions that I never would have predicted. How does AI cope with that? From what I've seen of Cyber Ivar, Start and the others, not too well (that's where the AI brain fry comes into play). How useful is reflective learning if "reflection" is limited to what the AI programmers think should be reflected upon? It can be useful within limits, but that's all. Until AI really does become "intelligent" -- capable of response on a truly human level -- I won't worry about job security any time soon.

Oh, and Adam, I didn't like Jabberwacky, either. We were done as soon as it referred to me as "snookums."

 
At 17:40, Blogger Urban Pisces said...

Everyone seems to have interesting and insightful reflective contributions to make to this article. In the wonderful world of cognitive science, where the goal of knowledge is to move information from short-term memory to long-term memory, I pose the question: can AI scaffold learning in much the same way a human teacher can? I'm with Marc; at least for now, I doubt AI will replace human interaction when it comes to reflective teaching anytime soon.

Rebecca

P.S. If you're all wondering why I use a pseudonym for my blog name, it's because I've been "netstalked" twice already.

 
