File sizes and download speeds of 56k, 3G, 4G, and High speed wireless internet

Here is a chart I made comparing the speeds of various internet connections, showing how fast (or slow) one could expect files to download. Remember that these numbers are hypothetical and depend heavily on where you are, what kind of device you have, and so on. Theoretically, 4G and high-speed internet should be nearly the same, but realistically this is not the case. So please do not take these as fact; rather, use this chart as a guide when developing for both web and mobile devices.

                         100 kB       250 kB       500 kB       1 MB          1 GB
56k Modem                15 seconds   36 seconds   1 minute     2.5 minutes   1 day 18 hours
3G                       <1 second    6 seconds    12 seconds   25 seconds    7 hours
High Speed/Wireless/4G   <1 second    1 second     3 seconds    5 seconds     1.5 hours
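As a rough sanity check on the chart, a best-case download time is just the file size divided by the raw link rate. Here is a minimal sketch in Python, assuming file sizes are in kilobytes and connection speeds are the nominal bit rates (e.g. 56 kbit/s for a dial-up modem):

```python
def download_time(file_bytes: int, bits_per_second: int) -> float:
    """Best-case download time in seconds: size in bits over the raw link rate.

    Real downloads will be slower due to protocol overhead, latency,
    and shared bandwidth -- this is only the theoretical floor.
    """
    return file_bytes * 8 / bits_per_second

# 100 kB over a 56 kbit/s modem: roughly 14-15 seconds, in line with the chart.
modem_time = download_time(100 * 1000, 56_000)
print(round(modem_time, 1))
```

This is why the note below matters: these figures are a floor, not a promise, and real-world connections rarely sustain their nominal rates.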

*note – these speeds assume a best-case scenario. More than likely, all speeds will be slower. Both 3G and 4G speeds will probably be significantly slower, especially as 4G max speeds are not even available yet.

Flash to HTML5: Buttons

In this clip I test Wallaby, Adobe's Flash-to-HTML5 conversion tool, on motion tweens, masks, and buttons. I test the HTML output file in Safari, Chrome, and Firefox. The motion tweens and masks work in both Safari and Chrome; nothing works in Firefox, and the buttons do not work in any browser. While Wallaby has potential, this is a big limitation: it is currently only useful for animations, not interactivity.

Check out the demonstration:


Flash CS6 to HTML5

The following video demonstrates Adobe's Flash CS6 to HTML5 conversion tool, codenamed Wallaby. The tool converted my file in one second. It worked great in Safari but did not work in Firefox. I am not sure whether Flash's tool is at fault or HTML5 itself; I say this because of the compatibility issues I have had with HTML5, especially in Firefox. Also, Adobe's Wallaby tool was last updated on March 8th, which means a new version is probably just around the corner. Overall, I am impressed it worked, but I need to test this with more advanced Flash files. I was not really surprised there was a compatibility issue, as that is the current state of HTML5. Here is the video with the demonstration:

What are multiple representations (i.e., multimedia) ?

The use of multimedia in education has become increasingly important as the technology to design, develop, and use it becomes more widespread. Multimedia refers to external representations of words (text, narration, audio) and visuals (images, video, animations, and graphs) (Mayer, 2005). The use of multiple external representations (MERs) in multimedia learning environments has been shown to be an effective way to support learning and increase comprehension (Schnotz & Lowe, 2003), which directly relates to the CTML, where we can process more than one representation at the same time (Mayer, 2001). Processing of MERs in multimedia learning environments (MLEs) begins in working memory, where learners, with the aid of prior knowledge, create internal representations and store them in long-term memory (Seufert, 2003).

A representation is “something that stands for something else” and has been described by Palmer (1978) in terms of (1) the represented world, (2) the representing world, (3) the aspects of the represented world being modeled, (4) the aspects of the representing world doing the modeling, and (5) the correspondences between the two worlds (p. 262). Markman (1999) describes the represented world as “the domain that the representations are about”, i.e., knowledge/information, and the representing world as “the domain that contains the representations”, i.e., visual/textual (p. 5). Representations can be formed internally as mental representations and externally as external representations, both of which are understood as verbal (text/narration) or non-verbal (images/animations). These verbal and non-verbal representations are described as descriptions or depictions (Schnotz, 2005). Descriptive representations are symbols such as text, numbers, and narration; depictive representations are images, icons, or models. Each of these forms has its advantages, and thus they can be used to complement and/or hinder one another (Schnotz & Bannert, 2003).

Ainsworth (1999a) proposed a taxonomy that describes functions for using MERs. This taxonomy is based on three functions that MERs serve in the learning process: to complement, constrain, and construct (Ainsworth, 1999b). In the complementary role, representations contribute to, support, and complement one another, thereby allowing learners to maximize use of each within their working memory. Constraining helps the learner interpret information from other representations. For example, text and pictures are symbolically different, so the text “the man sits at the computer” does not describe what he is wearing, whether he is typing, what color his hair is, etc. A picture of a man sitting at a computer will provide this information. Thus, when these two representations are used together, the image helps interpret the text. Constructing aids learners' deep understanding and helps them form relationships among mental representations. Thus, if multimedia learning environments composed of complementary external representations that support one another, such as static visuals and narration, are presented to learners, working memory can be utilized more effectively (Ainsworth, 2006).

The use of visual representations in multimedia learning environments has been shown to assist in learning. For example, Ainsworth and Loizou (2003) examined differences among college students presented with material on the human circulatory system in either text or diagram format. Results of post-tests, which included drawing, multiple-choice, and short-answer questions, revealed that students performed better when presented with the diagrams. Further, according to the dual coding hypothesis, using two representations should be better than one. Accordingly, Mayer (2001) presents the multimedia principle, which holds that using visuals with text is better than using text alone, supporting the dual coding hypothesis. For instance, Butcher (2006) conducted an experiment with 74 undergraduate students comparing text, text with simple images, and text with complex images. Students in the simple image with text treatment outperformed the complex image with text and text-only treatments on drawing, memory, and inference tasks. The author concluded that images with text are better than text alone, but that the image should contain only information relevant to the learning tasks; otherwise, as was seen in the complex image group, cognitive load will increase to the point that the image no longer helps learning but inhibits it. Similar studies have shown that visual representations, when used properly in instruction, can increase comprehension and achievement (Dwyer, 1978; Mayer & Anderson, 1991). This suggests that using visuals appropriately in instruction can increase the amount of content that can be processed in working memory by reducing cognitive load, thereby increasing comprehension and retention.

The use of visual representations in time-compressed instruction has been studied only to a limited extent, although a few studies have attempted to examine its effects. Olsen (1985) examined the role that visuals and text play in time-compressed speech by studying 40 graduate students. Students were divided into groups of 10 and given technical instruction on the heart developed by Dwyer (1965). Individuals were excluded if they had any prior knowledge of physiology and/or hearing or visual impairments, and were then placed into one of four treatments: 1) no compression (control), 2) a compressed version, 3) a compressed version with text, and 4) a compressed version with visuals. The control group was set at 150 words per minute and the compressed groups at 250 words per minute. Visual materials consisted of 39 simple line drawings identified by Dwyer (1972), which were placed in the instruction where students had the most difficulty. Four tests, consisting of drawing, identification, terminology, and comprehension, were given to each group to measure facts, concepts, rules/principles, and problem-solving objectives. The tests' reliabilities for this study were .87, .75, .77, and .59, and the total was .90. Results showed that the use of visuals was more effective than not using visuals for the drawing task. On the terminology test, neither the text nor the visual treatment was as effective as the control group (no compression). The other tests revealed insignificant differences among treatments, which could be caused by the limited number of people in each treatment. However, the means for the visual treatment, although not significantly different, indicated that using visuals with compressed speech is better than not using them for all tests except comprehension, where results were almost the same, whereas the non-compressed version was better than the visual treatment for all tests except drawing.
Overall, this study demonstrates that, given technical material, only the drawing task was more effective than the control. Since there were only 10 students in each treatment group, it is difficult to generalize the results of this experiment. The author notes that future research should explore different rates of compression using visuals with different populations and explore the effects of color on visual representations within time-compressed instruction.
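As a side note, the compression level implied by Olsen's word rates can be computed directly: speeding narration from 150 to 250 words per minute plays the same words in 60% of the time, a 40% reduction in duration. A minimal sketch (the function name is my own, for illustration):

```python
def time_compression(baseline_wpm: float, compressed_wpm: float) -> float:
    """Fraction of playback time saved when narration is sped up from
    baseline_wpm to compressed_wpm, with the word count unchanged."""
    return 1 - baseline_wpm / compressed_wpm

# Olsen's conditions: 150 wpm control vs. 250 wpm compressed.
saving = time_compression(150, 250)
print(f"{saving:.0%} of playback time saved")
```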

Loper (1974) conducted a study to test how 121 undergraduate students would retain facts when placed into treatments consisting of auditory instruction, auditory instruction with visual (televised) material, and auditory instruction with visual (televised) material compressed at rates of 33%, 50%, and 100%. The study found that visual augmentation did not improve instruction, which does not support the use of MERs. Although the differences were insignificant, the study only looked at facts and did not test for higher-level learning objectives, which are important since most training focuses on more than just recall. It also used televised material as the presentation method, whereas the current study will look at auditory instruction.

The literature on MERs supports the use of visuals in instruction, as demonstrated by Ainsworth and Loizou (2003) and Butcher (2006). The literature on visuals within time-compressed instruction does not appear to follow this trend at this point in time, mainly due to the limited number of studies on the topic and the limited technology available when those studies were conducted. However, it appears that using visuals with time-compressed instruction may aid comprehension better than no visuals, suggesting that visuals in time-compressed instruction help reduce cognitive load. Thus, the question remains whether the combination of visuals and time-compressed instruction is a viable means to effectively deliver and present instruction. When discussing the notion that images aid in the comprehension of time-compressed speech, Olsen (1985) states, “If such a difference does occur, it is necessary to identify which type of visual augmentation, if any, is more effective in providing for this difference. No research to date has investigated this aspect of comprehension of rate-modified speech” (p. 193). Further investigation is warranted to see whether visuals presented with time-compressed instruction will increase comprehension.

The effects of time-compressed instruction and redundancy on learning and learners’ perceptions of cognitive load

My recent article published in Computers and Education:

Abstract: Can increasing the speed of audio narration in multimedia instruction decrease training time while still maintaining learning? The purpose of this study was to examine the effects of time-compressed instruction and redundancy on learning and learners’ perceptions of cognitive load. One hundred fifty-four university students were placed into conditions that consisted of time-compression (0%, 25%, or 50%) and redundancy (redundant text and narration, or narration only). Participants were presented with multimedia instruction on the human heart and its parts, then given factual and problem-solving knowledge tests, a cognitive load measure, and a review behavior (back and replay buttons) measure. Results of the study indicated that participants presented with 0% and 25% compression obtained similar scores on both the factual and problem-solving measures. Additionally, they indicated similar levels of cognitive load. Participants presented with redundant instruction were not able to perform as well as participants presented with non-redundant instruction.
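To make the compression levels concrete, here is a hypothetical back-of-the-envelope calculation (the 10-minute duration is illustrative, not from the study, and it assumes the compression percentage is the fraction of playback time removed):

```python
def compressed_duration(minutes: float, compression: float) -> float:
    """Playback time after removing `compression` fraction of the duration
    (e.g. 0.25 for the study's 25% condition)."""
    return minutes * (1 - compression)

# A 10-minute narration under each of the study's compression conditions.
for level in (0.0, 0.25, 0.50):
    print(f"{level:.0%} compression: {compressed_duration(10, level):g} minutes")
```

Since the 25% condition performed on par with the uncompressed condition, this kind of arithmetic is what makes the training-time savings attractive.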

What is Dual Coding?

The following video describes dual coding theory as well as cognitive load. The point of the video is to briefly explain the main concepts behind this theory. For a more in-depth analysis, I would suggest the literature, and in fact I will post some that I have written. Also, here is the image I used in the video if anyone would like to use it (for educational purposes, citing me of course).


dual coding



What is multimedia?

Such a common word in our vocabulary, yet so many people do not know the definition, so here it goes….

Multimedia refers to a combination of both verbal (text, narration, audio) and non-verbal (pictures, images, graphs, icons) representations used in a media environment. They can be used for communication and/or learning.