10.19.2006

Reflective Tutoring for Immersive Simulation by Chad Lane, Mark Core, Dave Gomboc, Steve Solomon, Michael van Lent and Milton Rosenberg

Simplicity is the motto for my blog entry this week. I figured if I was asking students to read three-page articles and write 100-word annotations, I was also allowed to keep my reflection to the bare minimum.

I thought an article with six authors would have a lot to say about reflective tutoring and immersive simulation, but instead they kept things simple. The premise of the article (or is this a white paper?) is to introduce readers to an "intelligent tutoring system (ITS) that scaffolds reflection activities with the student, such as reviewing salient events from an exercise, discussing ways to improve, and asking questions of entities involved in the simulation." I assume that the tutoring system being discussed is part of a disaster response simulation used by the U.S. Army for training purposes.

The system collects information from the simulation on the user's performance to "implement various reflective activities" based on their interactions. The tutoring system not only helps to identify problems and suggest areas for improvement, but with explainable artificial intelligence (XAI) it also enables students to interview the virtual actors of the simulation and reflect on the interactions they had with them. If you want a better description of XAI, read "Building Explainable Artificial Intelligence Systems," which is written by the same authors and is the better article.

The topic of intelligent tutoring systems is a standard part of the discourse concerning education and AI, but it is also one of many areas for research, as this topics list shows. If you want a basic introduction to AI applications for education, you should read this article, and if you desire more research afterwards, go to this site.

I am not sure, but I may be following a disturbing pathway with my article selections. Last week I downgraded the role of the teacher to student, and this week I am outright replacing them with intelligent technologies. But all kidding aside, are intelligent tutoring systems and pedagogical agents capable of enhancing teaching and learning? Is it possible for AI to go beyond drill-and-skill activities or reflective tutoring?

In his annotation, Adam compares the tutoring system described by the authors to ELIZA, Joseph Weizenbaum's simulated psychotherapist of the 1960s. I recommend reading this article for further information on ELIZA and other early intelligent systems. OK, so before this entry gets much longer, I am going to stop and simply ask people to interact with the following conversational and pedagogical agents as a start to explore and reflect on the possibilities afforded by AI.

ALICE | CATTY | ELLAZ | START | BILL AND DEBBIE | JABBERWACKY | CYBER IVAR

Establishing a Quality Review for Online Courses by Tracy Chao, Tami Saj, and Felicity Tessier

WHO IS RESPONSIBLE FOR ASSESSING QUALITY IN ONLINE COURSES?

Measuring quality in online courses is a contemporary topic of discussion in our field. This is a concern of faculty, students, administrators, and instructional designers.

This article presents a practical approach to implementing a quality review of online courses. The article reveals that measuring the quality of online courses is a complex task; there are many variables in producing a quality course that go beyond the teacher and the curriculum (but of course we all know that…).

Quality in online courses is typically measured in terms of course evaluations, the perceptions of teachers and administrators, and peer teaching observations. Typically, for program accreditation, the state requires academic programs to conduct faculty observations to demonstrate quality teaching. These measurements are typical of what is used in a traditional face-to-face classroom, but they may not be best for the online classroom.

The authors suggest additional quality measurements for online courses: instructional design, course development, and use of technology. However, other measures need to be in place for assessing the quality of teaching, curriculum design, and the experience of the learner. What issues are important in assessing quality in these areas?

What makes a quality course?

According to the authors, quality can be reviewed and measured in six different areas, which together form their quality framework for web-based courses. The authors present corresponding measurements for three of the six quality areas (instructional design, web design, and course presentation). See below for a summary of the division of labor for assessing quality presented in the article.

Area of quality and who assesses it:

1. Curriculum design: Academic units ensure the curriculum meets quality standards for content and learning.
2. Teaching and facilitation: Academic programs use interim formative surveys and final course evaluations to help assess the quality of teaching and facilitation.
3. Learning experience: Academic programs use interim formative surveys and final course evaluations to help assess the quality of the learning experience.
4. Instructional design: A collaborative relationship between instructional designers and academic units ensures shared responsibility for sound instructional design for a course.
5. Web design: The producers of the online courses (in this case, CTET) are responsible for ensuring quality standards in web design.
6. Course presentation: The “course writer” or editor proofreads the materials at predefined stages of development.

The parties that assess quality in areas four and five raise some important issues:

In assessing the quality of instructional design, the collaborative partnership is essential. How is this relationship constructed between faculty and the instructional designer? How responsive is the faculty to feedback on instructional design from the instructional designer? And if the faculty does not have skills in instructional design, how can it be ensured that quality is properly assessed?

The producers are responsible for assessing the quality of the web design of online courses. However, what happens when the faculty are the producers of the content? Who should be responsible for assessing it?

Some strategies for assessing quality:

1. Team approach to quality review within the online teaching and learning support group for academic programs.

This approach was presented by the authors for the review of instructional design, web design, and course presentation quality. This peer-review-style process could be very helpful to new instructional design groups and for mentoring new instructional designers. Some questions I have are:

o Does each team member have the skills to conduct a quality review?
o Who are the team members conducting the review?
o Who reviews the team’s review?
o Should there be a dedicated quality assurance staff to review all courses?

2. Partnerships with Faculty and Academic Programs

There must be a balanced partnership between the instructional design group, the faculty, and the academic program. In many cases quality is assessed from the instructional design perspective but not from the academic program perspective when the program is new to offering online courses, or vice versa. Concerns that I have experienced in my professional work are on both ends. Some programs are too understaffed to monitor their online faculty, and at times programs are too inexperienced to provide sufficient feedback on instructional design, web design, or course presentation.

A tension arises in the relationship between the two constituencies, the instructional designers and the faculty. Who has the authority to assess quality? Both parties may be sensitive to stringent reviews; both parties have expertise in their respective areas.

I raise this issue because it is very real. Areas four, five, and six are areas where a group that works with faculty to produce and design online courses fits in very well. However, the lines are blurred when faculty are responsible for creating their own content and do not have instructional designers to work with (which is the case for many programs).

Teaching, curriculum design, and the learning experience are typically left to the academic department to monitor. When new departments embark on online learning, do they have structures and processes in place for doing this type of quality assessment? What structures, processes, measurements and support do academic programs need to truly assess the quality of their online courses?