Computer Mediated Communication

This is a short paper I wrote on computer mediated communication (CMC) while I was a doctoral student at Penn State:

In order to understand the role of communication in an online classroom, it is essential to first explore the underlying theory that guides learning in this environment: constructivism.  Constructivism, the process of knowledge construction (Duffy & Cunningham, 1996), guides the learning that takes place in these communication environments.  The constructivist learning environment has played an important role in shaping the strategies used in online learning, particularly the modes of discussion.  Most online learning environments that use student-student discussion form learning communities, whether a class, group, or team, that function together and learn from one another to build knowledge or to solve a problem.  These learning communities are based on the constructivist paradigm.  The goal in most of these environments is to construct knowledge and solve problems, which in constructivism is termed problem-based learning.  In problem-based learning, learners are given a problem and discuss it within their community to arrive at a solution (Hmelo et al., 2000).

In the constructivist environment, the teacher plays the role of a facilitator who guides student learning.  In the CMC environment, where both student-student and student-facilitator communication occur, the facilitator’s goal is to guide students toward their goals using strategies such as scaffolding.  To use these strategies successfully, we must understand students’ perceptions of them.  This will help steer students toward greater achievement, satisfaction, and motivation while providing the most effective strategies to enhance their learning.

The ability to use synchronous and asynchronous CMC has become a fundamental means of communicating in the classroom, the workplace, and the wider community.  In an online classroom setting, computer-mediated communication is used by classes, groups, and teams to discuss projects, learn from one another, and develop online communities.

However, a significant portion of the literature on synchronous and asynchronous CMC concerns text-based communication.  The literature is geared toward the text-based environment because other forms of CMC technology are only beginning to become feasible methods of communication in online learning.  Only in the last several years has it become possible to use methods other than text in the online CMC environment, owing to factors such as smaller file sizes, faster Internet connection speeds, and more user-friendly software.  What does this literature on text-based CMC reveal?  Most of it promotes asynchronous over synchronous communication as the more effective learning tool, finds that synchronous communication supports better social communication, and shows that students enjoy both formats.

This is reiterated by Mabrito (2006), who examined a class of 16 undergraduate students using both asynchronous and synchronous text chat in WebCT, a course management system and communication tool.  The students were divided into four groups and given a project to complete using each form of communication.  The messages in each form of CMC were recorded, examined, and coded, producing an inter-rater reliability of 87%.  The mean number of messages per group was 720 in the synchronous environment compared to 523 in the asynchronous, suggesting that students communicated more readily in the synchronous environment.  Although there were more messages in the synchronous environment, the important finding in this study was that students did not go into as much depth there as they did in the asynchronous environment.  The study also examined student perceptions of each communication method and found that although students considered the asynchronous method more productive, they preferred synchronous communication because they could communicate more easily.  Levin & Robbins (2006) found similar results when they examined synchronous and asynchronous communication in an undergraduate class of 36 students.  They observed the students communicating in six text-based online discussions throughout the semester using Blackboard’s forum and chat software.  A survey was used to gather data and was given at the beginning and end of the course.  The pre-class survey showed that 31 of the 36 participants had used some form of online communication and that only 3 of the 36 thought they would prefer synchronous communication for learning. By the end of the course, 17 of the students preferred synchronous communication.
Two significant findings in this study were that 1) a good portion of students came to class with some online communication experience and 2) both forms of communication were enjoyed by the undergraduate population.  Since students seem to like both forms, can we assume they are interchangeable, or should they be used for different purposes?  Im & Lee (2004) explored this question in a study comparing synchronous and asynchronous text-based communication in an online class of 40 undergraduate students.  The discussions were recorded, analyzed, and coded, producing an inter-rater reliability of 84%.  Computer-mediated communication in the course was optional, yet students still used it: there were 2,820 asynchronous postings and 336 synchronous postings.  The analysis revealed that synchronous communication was used for social interaction and asynchronous communication for formal, class-related discussion. The authors recommended asynchronous communication as the more appropriate choice for an online classroom when communication is not required.  Thus, it appears that both forms of communication are valuable for different purposes.  This distinction between synchronous and asynchronous communication is reiterated further in the literature.
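Several of the studies above report inter-rater reliability as a simple percentage of agreement between coders. As a rough, hypothetical sketch of that calculation (the message categories and codes below are invented for illustration, not data from any of the studies):

```python
# Hypothetical sketch: simple percent agreement between two coders,
# the kind of inter-rater reliability figure reported in these studies.

def percent_agreement(coder_a, coder_b):
    """Return the share of messages both coders placed in the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of messages")
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Invented codes for eight discussion messages:
coder_a = ["social", "task", "task", "social", "task", "task", "social", "task"]
coder_b = ["social", "task", "social", "social", "task", "task", "social", "task"]
print(f"{percent_agreement(coder_a, coder_b):.0%}")  # 7 of 8 codes match
```

Note that simple percent agreement does not correct for chance agreement; measures such as Cohen's kappa are often preferred for that reason.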

Spencer (2002) conducted a study focused on student perceptions of asynchronous, synchronous, and face-to-face communication environments.  Transcripts were taken from 29 undergraduate classes and 113 student surveys were collected to explore students’ perceptions of communication.  Spencer divided the transcripts into four groups: 1) asynchronous communication only, 2) face-to-face and asynchronous, 3) asynchronous and one synchronous session, and 4) asynchronous and multiple synchronous sessions.  Spencer found that students thought they learned more in the face-to-face class with asynchronous learning and that they seemed to enjoy the synchronous chat format the most, even though they used it mainly for social discussion. Therefore, it appears that Spencer (2002) and Im & Lee (2004) agree that synchronous communication is better suited to social interaction.  Spencer’s (2002) study also revealed that students preferred a combination of asynchronous and face-to-face communication methods.  However, this finding is debated in the literature, as demonstrated by Jonassen & Kwan (2001) and Olaniran et al. (1996).

Jonassen & Kwan (2001) conducted a study of 18 undergraduate engineering students to examine the effects of asynchronous and face-to-face communication on ill-structured and well-structured problems.  Students were divided into six groups of three and given four problems to solve throughout the course, two ill-structured and two well-structured. Each group used asynchronous text-based CMC or face-to-face communication to solve each type of problem.  A questionnaire was given to each student to evaluate the results of the study, and transcripts of group communication were analyzed and coded, producing an inter-rater reliability of 89%.  The authors discovered that students preferred the asynchronous method of communication because they found it more beneficial, attributing this to the asynchronous environment being more flexible and a better place for reflection.  Olaniran et al. (1996) disagreed with Jonassen and Kwan (2001) in their study comparing asynchronous and face-to-face communication. One hundred and fourteen undergraduate communication majors were assigned to one of three groups to participate in two experiments, one using text-based asynchronous methods and the other using face-to-face methods, and were then given a survey.  The study revealed that students liked the face-to-face version better and found it more effective, satisfying, and easier.  The authors suggested that future research focus on students’ perceptions of CMC and face-to-face learning. This is important because it demonstrates that researchers have identified a need for research on students’ perceptions of the CMC environment, which is the focus of the qualitative question in this paper.

Hiltz, Johnson, and Turoff (1986) uncovered similar results when they compared face-to-face and text-based synchronous communication. Their study involved 40 undergraduate students divided into eight groups.  Each group was given two problems, a complex ranking task and a qualitative human relations task, and asked to solve one using synchronous and one using face-to-face communication.  Communication was recorded, transcribed, and analyzed, producing an inter-rater reliability of 90%.  The authors discovered that students found face-to-face communication less formal and more relaxing, similar to the findings of Olaniran et al. (1996).  They also noted that almost 30% more communication took place in the face-to-face version.  An important finding in this study was that both groups produced answers of the same quality, which shows that achievement can be equivalent across multiple forms of communication.  Most of the literature exhibits similar findings. For example, Shaw & Chen (2006) carried out an empirical study measuring student achievement across face-to-face, online synchronous, and online asynchronous CMC environments and found no significant difference among 96 college students.

This review of online communication has emphasized that there are many conflicting studies on students’ preferences.  These inconsistencies in the literature can mostly be explained by when the research was conducted.  Studies of online communication conducted 10 years ago may differ significantly from studies conducted today because both the technology and the students using it have drastically changed.  Today’s undergraduate students belong to a generation that has grown up using the Internet and different forms of online communication, whereas learners 10 years ago did not.  This could have significant implications for older research and warrants further inquiry.

However, what the literature has revealed is that successful outcomes can be achieved using face-to-face, online text-based asynchronous, or online text-based synchronous forms of communication.  As long as new strategies, tools, and media become available in CMC, future research will be needed to ensure that these expectations are met or exceeded (Dennen, 2005).

 

References:

Duffy, T. M., & Cunningham, D. J. (1996). Constructivism: Implications for the Design and Delivery of Instruction. In D. H. Jonassen (Ed.), Handbook of Research for Educational Communications and Technology (pp. 170-198). New York: Simon & Schuster McMillan.

Hmelo, C. E., Holton, D. L., & Kolodner, J. L. (2000). Designing to Learn About Complex Systems.  The Journal of the Learning Sciences, 9(3), 247-298.

Mabrito, M. (2006). A Study of Synchronous Versus Asynchronous Collaboration in an Online Business Writing Class. The American Journal of Distance Education, 20(2), 93-107.

Levin, B. B., & Robbins, H. H. (2006). Comparative Analysis of Preservice Teachers’ Reflective Thinking in Synchronous Versus Asynchronous Online Case Discussions. Journal of Technology and Teacher Education, 14(3).

Im, Y., & Lee, O. (2004). Pedagogical Implications of Online Discussion for Preservice Teacher Training. Journal of Research on Technology in Education, 36(2).

Spencer, D. H. (2002). A Field Study of Use of Synchronous Computer-Mediated Communication in Asynchronous Learning Networks. Unpublished Dissertation, Rutgers, The State University of New Jersey, NJ.

Jonassen, D. H., & Kwan, H. (2001). Communication patterns in computer mediated versus face-to-face group problem solving. Educational Technology, Research and Development, 49(1).

Olaniran, B. A., Savage, G. T., & Sorenson, R. L. (1996). Experimental and Experiential Approaches to Teaching Face-to-Face and Computer-Mediated Group Discussion. Communication Education, 45.

Hiltz, R. S., Johnson, K., & Turoff, M. (1986). Experiments in Group Decision Making: Communication Process and Outcome in Face-to-Face Versus Computerized Conferences. Human Communication Research, 13(2).

Why are there so many names for our field?

This is a good question brought up time and time again in class or the workplace, conferences, and manuscripts. I have heard instructional design programs called:

Instructional Technology
Instructional Design
Educational Technology
Educational Design
Learning Sciences

and there are many more but these are the common ones.

The reason, in my opinion, that there are so many different names is that we borrow from so many different fields, so each program and/or person has a slightly different focus on one or more of them. Thus you have instructional design programs that focus more heavily on design, programming, K-12, corporate, assessment, analysis, etc. Overall, however, we are all linked by one thing: ADDIE (except for some of the learning sciences programs, which have more of an ed psych base and do not use any form of ADDIE, although I believe this is changing).

I believe all of these different names are not helping our field. In fact, they are hurting us. Our field needs a common name with a solid operational understanding of what it is. One of the first things that happens during the re-engineering of a company is ensuring that everyone understands the basic terminology, because it's a problem when the executives think they are saying one thing but others understand it differently.  We can't even agree on what to call ourselves, let alone define our field. We have groups who define our competencies, and those definitions are all very similar, yet we still choose different names. This confuses potential employers who might be looking for instructional designers but are puzzled that an applicant has a learning sciences degree. While these programs have slight differences, we need to put them aside for the reasons I have described. We need to make it clear what our students do and what all students in our field do. I am not saying we need to train only designers or anything like that, but we need to make clear all the roles we can and do play.

What are multiple representations (i.e., multimedia)?

The use of multimedia in education has become increasingly important as the technology to design, develop, and use it becomes more accessible. Multimedia refers to external representations of words (text and narration) and visuals (images, video, animations, and graphs) (Mayer, 2005). The use of multiple external representations (MERs) in multimedia learning environments has been shown to be an effective way to support learning and increase comprehension (Schnotz & Lowe, 2003), which relates directly to the cognitive theory of multimedia learning (CTML), in which we can process more than one representation at the same time (Mayer, 2001). Processing of MERs in multimedia learning environments (MLEs) begins in working memory, where learners, with the aid of prior knowledge, create internal representations and store them in long-term memory (Seufert, 2003).

A representation is “something that stands for something else” and has been described by Palmer (1978) in terms of (1) the represented world, (2) the representing world, (3) the aspects of the represented world being modeled, (4) the aspects of the representing world doing the modeling, and (5) the correspondences between the two worlds (p. 262).  Markman (1999) describes the represented world as “the domain that the representations are about,” i.e., knowledge/information, and the representing world as “the domain that contains the representations,” i.e., visual/textual (p. 5).  Representations can be formed internally as mental representations and externally as external representations, both of which are understood as verbal (text/narration) and non-verbal (images/animations). These verbal and non-verbal representations are described as descriptions or depictions (Schnotz, 2005). Descriptive representations are symbols such as text, numbers, and narration; depictive representations are images, icons, or models. Each of these forms has its advantages, and thus they can be used to complement and/or hinder one another (Schnotz & Bannert, 2003).

Ainsworth (1999a) proposed a taxonomy that describes the functions of MERs. The taxonomy is based on three functions that MERs serve in the learning process: to complement, to constrain, and to construct (Ainsworth, 1999b). In the complementary role, representations contribute to, support, and complement one another, allowing learners to make the most of each within working memory. Constraining helps the learner interpret information from other representations. For example, text and pictures are symbolically different: the text “the man sits at the computer” does not describe what he is wearing, whether he is typing, or what color his hair is, while a picture of a man sitting at a computer provides this information. Thus, when these two representations are used together, the image constrains the interpretation of the text. Constructing supports learners’ deep understanding and helps them form relationships among other mental representations. Thus, if multimedia learning environments present learners with complementary external representations that support one another, such as static visuals and narration, working memory can be used more effectively (Ainsworth, 2006).

The use of visual representations in multimedia learning environments has been shown to assist learning. For example, Ainsworth & Loizou (2003) examined differences among college students presented with material on the human circulatory system in either text or diagram format. Results of post-tests, which included drawing, multiple-choice, and short-answer questions, revealed that students performed better when presented with the diagrams. Furthermore, according to the dual coding hypothesis, using two representations should be better than one. Mayer (2001) presents the multimedia principle, which holds that using visuals with text is better than using text alone, supporting the dual coding hypothesis. For instance, Butcher (2006) conducted an experiment with 74 undergraduate students comparing text, text with simple images, and text with complex images. Students in the simple image with text treatment outperformed the complex image with text and text-only treatments on drawing, memory, and inference tasks. The author concluded that images with text are better than text alone, but that the image should contain only information relevant to the learning task; otherwise, as was seen in the complex image group, cognitive load increases to the point where the image no longer helps learning but inhibits it. Similar studies have shown that visual representations, when used properly in instruction, can increase comprehension and achievement (Dwyer, 1978; Mayer & Anderson, 1991). This suggests that using visuals appropriately in instruction can increase the amount of content that can be processed in working memory by reducing cognitive load, thereby increasing comprehension and retention.

The use of visual representations in time-compressed instruction has received limited attention, although a few studies have examined its effects. Olsen (1985) examined the role of visuals and text in time-compressed speech by studying 40 graduate students. Students were divided into groups of 10 and given technical instruction on the heart developed by Dwyer (1965). Individuals were excluded if they had prior knowledge of physiology or a hearing or visual impairment, and were then placed into one of four treatments: 1) no compression (control), 2) a compressed version, 3) a compressed version with text, and 4) a compressed version with visuals. The control group was set at 150 words per minute and the compressed groups at 250 words per minute. Visual materials consisted of 39 simple line drawings identified by Dwyer (1972), placed in the instruction where students had the most difficulty. Four tests, consisting of drawing, identification, terminology, and comprehension, were given to each group to measure facts, concepts, rules/principles, and problem-solving objectives. The test reliabilities for this study were .87, .75, .77, and .59, with a total of .90. Results showed that using visuals was more effective than not using visuals for the drawing task. On the terminology test, neither the text nor the visual treatment was as effective as the control group (no compression). The other tests revealed non-significant differences among treatments, which could be due to the limited number of participants in each treatment. However, the means for the visual treatment, although not significantly different, indicated that using visuals with compressed speech is better than not doing so for all tests except comprehension, which was almost the same, whereas the non-compressed version was better than the visual treatment on all tests except drawing.
Overall, this study demonstrates that, given technical material, only the drawing task favored visuals over the control. Since there were only 10 students in each treatment group, it is difficult to generalize the results of this experiment. The author notes that future research should explore different rates of compression using visuals with different populations and examine the effects of color on visual representations within time-compressed instruction.
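Olsen's 150 and 250 words-per-minute figures translate into a speed-up factor and, for a fixed-length script, a time saving. A small sketch of that arithmetic (the 1,500-word script length is a hypothetical example, not from the study):

```python
# Sketch converting Olsen's words-per-minute rates into a speed-up
# factor and the listening time for a fixed-length script.

def speedup(compressed_wpm, normal_wpm):
    """How many times faster the compressed narration plays."""
    return compressed_wpm / normal_wpm

def minutes_needed(word_count, wpm):
    """Listening time in minutes for a script at a given rate."""
    return word_count / wpm

normal, compressed = 150, 250   # wpm rates reported by Olsen (1985)
script_words = 1500             # hypothetical script length

print(f"{speedup(compressed, normal):.2f}x faster")
print(minutes_needed(script_words, normal))      # 10.0 minutes at 150 wpm
print(minutes_needed(script_words, compressed))  # 6.0 minutes at 250 wpm
```

In other words, the compressed treatments delivered the same narration in roughly 60% of the original listening time.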

Loper (1974) conducted a study testing how well 121 undergraduate students would retain facts when placed into treatments consisting of auditory instruction, auditory instruction with visual (televised) material, and auditory instruction with visual (televised) material compressed at rates of 33%, 50%, and 100%. The study found that visual augmentation did not improve instruction, which does not support the use of MERs. Although the differences were non-significant, the study only tested facts and did not test higher-level learning objectives, which matter because most training focuses on more than recall. It also used television as the presentation method, whereas the current study will examine auditory instruction.

The literature on MERs supports the use of visuals in instruction, as demonstrated by Ainsworth and Loizou (2003) and Butcher (2006). The literature on visuals within time-compressed instruction does not yet appear to follow this trend, mainly because of the limited number of studies on the topic and the limited technology available at the time they were conducted. It nevertheless appears that visuals may aid comprehension of time-compressed instruction better than no visuals, suggesting that visuals in time-compressed instruction help reduce cognitive load. Thus, the question remains whether the combination of visuals and time-compressed instruction is a viable means of delivering and presenting instruction effectively. Discussing the notion that images aid comprehension of time-compressed speech, Olsen (1985) states, “If such a difference does occur, it is necessary to identify which type of visual augmentation, if any, is more effective in providing for this difference. No research to date has investigated this aspect of comprehension of rate-modified speech” (p. 193). Further investigation is warranted to see whether visuals presented with time-compressed instruction will increase comprehension.

Concept map of learning theories

Here is a concept map of the ‘main’ learning theories that define instructional technology. While I would argue there are tons and tons of learning theories that define us, these are the three (Behaviorism, Cognitivism, and Constructivism) most ISDers give credit to:

Is a test an assessment? Is an assessment an evaluation? Just semantics?

What is a Test?
What is an Assessment?
What is an evaluation?

Believe it or not, these terms all have different meanings. Tonight in my 515 class we will briefly discuss these differences during our discussion on online assessment, and we cover them thoroughly in my 531 assessment course. Most of us use these terms interchangeably, and we do it every day. I find this especially true of teachers during the high-stakes testing periods that students have to go through. I do not think it’s ‘wrong’ to use them interchangeably, but be aware that there are differences, and if you are speaking to a person with a strong assessment or educational psychology background, you may want to use them correctly. So what do they mean?

Test – an instrument or procedure for observing or describing one or more characteristics of a student using either a numerical scale or classification scheme

Assessment – process for obtaining information that is used for making decisions about students, curricula, policy, etc.

Evaluation – process of making a value judgment about the worth of a student’s product or performance

Source: Nitko and Brookhart (2010) Educational Assessment of Students (6th edition)

Impact factor: Best Instructional Design Journals

What are the best journals in the instructional design field? While I believe it’s a matter of preference, some have created something called the impact factor. Thomson Reuters, under the name Science Watch, has created this impact factor for all fields. While they do not have instructional design as a field, they do have educational research, and our journals are ranked there. So I will simply list our journals, show where they rank in the overall list, and give their impact factors.

First, here is how the impact factor is calculated:

“The 2009 impact factor is calculated by taking the number of all current citations to source items published in a journal over the previous two years and dividing by the number of articles published in the journal during the same period–in other words, a ratio between citations and recent citable items published. The rankings in the next two columns show impact over longer time spans, based on figures from Journal Performance Indicators.

In these columns, total citations to a journal’s published papers are divided by the total number of papers that the journal published, producing a citations-per-paper impact score over a five-year period (middle column) and a 29-year period (right-hand column). SOURCE: Journal Citations Report and Journal Performance Indicators.”
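As a rough illustration of the two-year calculation quoted above (all journal figures below are invented, not real data from the rankings):

```python
# Hypothetical sketch of the two-year impact factor described above.
# The citation and article counts are invented for illustration.

def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Current-year citations to items from the previous two years,
    divided by the number of articles published in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

# Suppose a journal published 80 + 90 articles in 2007-2008, and those
# articles were cited 440 times during 2009:
print(round(impact_factor(440, 80 + 90), 3))  # 440 / 170 = 2.588
```

The five-year and 29-year columns mentioned in the quote work the same way, just with citations and papers summed over the longer window.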

Journals Impact factor and Rank 2010:

2. Review of Educational Research (3.127)
3. Learning and Instruction (2.768)
5. Computers & Education (2.617)
11. BJET (2.139)
12. Metacognition and Learning (2.038)
22. Journal of the Learning Sciences (1.7)
24. Australasian Journal of Educational Technology (1.6)
32. Instructional Science (1.473)
51. ETRD (1.081)
52. Educational Technology & Society (1.066)
57. Turkish Online Journal of Educational Technology (1.016)
58. Distance Education (1.0)
70. Harvard Educational Review (.841)
75. Journal of Science Education and Technology (.804)
113. Journal of Educational Computing Research (.561)

The full list can be found here. Just a note about this link: I can only access it from campus or when I am signed on to my school’s VPN, so you will need access through your own institution before viewing it.

Action verbs in learning objectives

Yesterday during my educational technology for teachers class, my students wanted to use the word ‘understand’ in their learning objectives. I just cringed. I asked them what ‘understand’ meant. How would you teach and assess a learning objective such as: Students will understand the state of North Carolina? Each student had a very different idea of what ‘understand’ meant, and so did each taxonomy they were using. How reliable would that be? I suppose it would be fine if everyone shared the same operational definition, but then the term would really just stand for other action verbs, so why not use those in the first place?

When writing objectives, make the objective clear and concise. Use action verbs that describe what is to be done, for instance:

identify
distinguish
compare and contrast
describe
evaluate
list

DO NOT USE VAGUE TERMS. I have a list of good and bad verbs that I will post soon.

See my previous post on writing learning objectives

Here is a link from MIT that re-emphasizes what I just said

How to develop units for online courses

Since my students’ online units are due today, I figured I would make this post. If you want to make your online course successful, a best practice is to include the following things in each unit. I say this because I often see an online course where things are not spelled out for the learner: I get into a lesson and I do not see objectives, or there are no directions, and I have to assume I should just click through the materials. This usually works fine for me because I have a lot of online course experience, but for new learners it is a recipe for disaster. So I have compiled the following list.

Just to clarify, when I say unit in online learning, I generally refer to one section/topic/week/lesson in a course that is online and 80%+ asynchronous. In higher ed we break this into weeks, although each environment may be different. Here is a list of things that should be included in an online unit; remember, however, that each case/course is different and may require more or less depending on the needs of the client and students.

Things that should be included in an online unit:

* Lesson Title
* Concept / Topic To Teach
* Standards Addressed if appropriate (usually for K-12 only)
* General Goal(s)
* Course Objectives
* Required Materials – do the students need anything to complete this? Maybe they need headphones to listen to a video
* Step-By-Step Procedures – This is where you give them very detailed step by step directions. Course content can be in this section.
* Course content (instructional strategies)
* Closure (wrap up the unit)
* Assessment Based On Objectives (assignment, test, quiz, etc.)
* Adaptations (For Students With Learning Disabilities) – usually only applied to K-12
* Extensions – for gifted students or any students who want to learn more about the topic; this should be included for all learners (K-12, higher ed, corporate, government)
* Possible Connections To Other Subjects
* Appendix – materials, such as PPT, Videos, Podcasts, etc.
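For review purposes, the checklist above could be captured as a simple template to check a draft unit against. This is a hypothetical sketch; the field names are my own shorthand for the checklist items, not a standard:

```python
# Hypothetical sketch: flag which checklist items a draft online unit
# is missing. Field names are invented shorthand for the list above.

REQUIRED_SECTIONS = [
    "lesson_title", "topic", "goals", "objectives", "required_materials",
    "procedures", "content", "closure", "assessment",
]

def missing_sections(unit):
    """Return checklist items the draft unit has left empty or absent."""
    return [s for s in REQUIRED_SECTIONS if not unit.get(s)]

# A draft unit that has only been partially filled in:
draft = {
    "lesson_title": "Week 3: Writing Learning Objectives",
    "topic": "Action verbs in objectives",
    "objectives": ["List five measurable action verbs"],
    "procedures": "1. Read the handout. 2. Draft three objectives.",
}
print(missing_sections(draft))  # the sections still to be written
```

A designer (or an LMS template) could run a check like this before a unit goes live, so nothing on the list is silently skipped.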

Computer Interface Design: MIT 595

Learn Adobe Photoshop techniques (Beginner to advanced) while simultaneously learning the principles of human-computer interaction. This course will focus on designing visually pleasing interfaces for PC, tablet, and mobile devices. You will learn the theories behind interface design and learn how to apply them in various settings through Adobe Photoshop.

Summer Session 1

Tues/Thurs 5-8pm

Please see flyer for details