Students' Perceptions of Distance Learning, Online Learning and the Traditional Classroom, by John O'Malley and Harrison McCraw
I am repeating the format of last week's blog entry and only briefly discussing the article I selected, "Students' Perceptions of Distance Learning, Online Learning and the Traditional Classroom." There are two reasons I am keeping my discussion short.
1. Like Nilda, who made the same point in her annotation, I found the purpose and usefulness of this research somewhat ambiguous.
2. I am using this article to support a larger discussion on the research and literature being gathered and reviewed by the quality and assessment group.
To begin, let's look at the article's purpose, which was to study student perceptions of online and distance education. The authors claim the research was done to examine the impact of new technologies on education, yet it does little to explain the effects instructional technology has on learners. Instead, they analyze their subjects' responses to questions that merely establish which the learners like better, online learning or onsite learning. The article reads more like the Pepsi Challenge than an academic study. In fact, the authors themselves make it clear that the study is not useful as supporting research:
"This study surveyed students in business courses only. Results therefore cannot be generalized to non-business students. In addition, students surveyed were at one university and these results cannot be generalized to students at other universities. In regards to the DL findings, it may be that the university where the students are surveyed is not effectively using DL methodologies although instructors do receive extensive DL training. It may also be that the technology used is not enabling effective DL."
In other words, the research is only helpful to those who are making decisions about online learning at the Richards College of Business, State University of West Georgia. I would question its usefulness even for that institution, as the researchers did not assess either the teaching methodologies or the delivery technologies used at the school before conducting their survey.
Still, the article is a great example of a larger question currently affecting distance and web-based education: how does one assess the quality of interactions between student and teacher, between user and technology, or among a combination of all four? I see this question as the underlying influence on the research being done by the quality and assessment group.
Nilda and Linda are researching the impact of technologies on traditional academic interactions, defined by the article's authors as one-to-one and one-to-few arrangements. Nilda's research focuses on the quality of technological interaction as a mediation between student and teacher, and Linda's focuses on the quality of technological interaction as a mediation between student and tutor. Both are also concerned with the distance created through technological interactions: Nilda is researching student perceptions of online interaction as compared with onsite interaction, and Linda is researching the ability of offshore faculty to connect with onshore students.
Tracy's research takes a different path, as she is more concerned with assessing interactions between user and technology. She is focusing on Learning Object Metadata (LOM), defined by the IEEE as
the attributes required to fully/adequately describe a Learning Object. Learning Objects are defined here as any entity, digital or non-digital, which can be used, re-used or referenced during technology supported learning. […] Examples of Learning Objects include multimedia content, instructional content, learning objectives, instructional software and software tools, and persons, organizations, or events referenced during technology supported learning.
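To make that definition a bit more concrete, here is a minimal sketch of what a LOM-style record might look like, written in Python purely for illustration. To be clear about my assumptions: the real LOM data model is considerably richer and its standard binding is XML; the field names below are simplified stand-ins rather than the standard's actual element names.

```python
# Illustrative only: a simplified, hypothetical LOM-style record.
# The real IEEE LOM data model groups many more attributes into
# categories (General, Technical, Educational, and so on); the
# keys below are stand-ins, not the standard's element names.
learning_object_record = {
    "general": {
        "title": "Intro to Supply and Demand",
        "description": "A multimedia module for a first-year business course.",
        "language": "en",
    },
    "technical": {
        "format": "text/html",
        "requirements": ["web browser"],
    },
    "educational": {
        "interactivity_type": "active",       # active, expositive, or mixed
        "intended_end_user_role": "learner",
    },
}

def describe(record: dict) -> str:
    """Return a one-line summary, the kind of thing a repository
    search result might display for a learning object."""
    general = record["general"]
    return f'{general["title"]} ({general["language"]}): {general["description"]}'

print(describe(learning_object_record))
```

Even this toy version captures Tracy's question in miniature: the metadata describes the object and its intended use, but says nothing about the quality of any interaction a learner actually has with it.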
Her research is a good example of the difficulty of representing interaction as the use of a technology, and of identifying who or what the user is being connected to through that technology. Is interaction something between a user and a technology, a facilitator, a network, or a mixture of all three?
I will stop there and leave you with the question we will use to start our discussion on Tuesday: how does one assess or standardize the quality of interactions through or with technology? I am hoping the class will be able to explore this question during the first half of the session and then ask group two to expand on the discussion as it may or may not relate to their research.
2 Comments:
I have to follow Michele's lead: the key to assessing anything is to have clear objectives, technology-based or not.
An issue I see in DL is a trend toward "get it out there and make it something more than a page-turner", with the goals of the instruction taking a back seat.
If we haven't worked out clear objectives, assessing the interaction of anyone with anything is going to be difficult.
The question was asked... how do you assess the quality of interactions between student, teacher, user and technology...
I particularly liked the mention of Rogers' model for the diffusion of innovation (I can't remember having read about it before), so I was interested in that.
Michele brought up a very good point: it all starts with figuring out what it is you are doing. Starting out with a clearly defined objective makes it much easier to figure out how to assess it.