Mobile Learning has become a hot topic, and as a result it is heavily laden with marketing and hype. The excitement is understandable, but it also makes finding information about how to actually create mobile eLearning (mLearning) solutions all the more difficult. Today’s post is intended to help you cut through the hype and get down to the business of creating actual training content that can be delivered on cell phones, tablets and other mobile devices.
In this series of blogs, I’ll explain what makes mLearning different from conventional eLearning, and how to adapt or customize your content for delivery on mobile devices. Then I’ll give you an overview of the dominant types of mLearning content, along with some examples of how to create them. Finally, I’ll talk about some of the tools inherent in mobile devices that might be leveraged to provide new opportunities for your learning solutions. Today I’ll tackle the first part: what makes mLearning different.
mLearning Interface Differences
One of the most important differences between content on mobile devices and content on PCs is that input on mobile devices has largely moved to direct contact between the finger and the screen. Whereas a PC uses a mouse to interact with the graphical user interface, the mobile device uses the finger. Matt Gemmell describes his perceptions of the key differences here, and for the more research-minded among you, here’s an early study comparing touch and traditional inputs from Allen Bevans (an interesting look at whether divergent thinking is influenced by input method).
The practical reality here is that touching things makes us feel differently about those things. This is the core idea behind interface differences in mobile. We paint associative pictures in our minds of the things we touch and handle directly. Some of the interactions we know from PCs, like clicking and double-clicking, have parallels in the world of mobile devices. You click an icon or button on a PC, whereas you tap it on a phone or tablet. The obvious difference is that your eye and finger are focused on the same place with the mobile device – and that you can physically feel the mobile device when you touch it.
What you may not realize is that mousing on a PC actually has many variations of the click. Clicks, for example, are generally divided into a down phase and an up phase. This is why you can click down on a button, roll your mouse off that button, and then release – thereby cancelling an unintended mouse interaction. You don’t have that luxury with a direct finger interaction. That said, there are many ways that finger inputs can be used to create interesting and logical user interactions. For the most part you’ll rely on your content creation tools to handle the mechanics of clickable objects and the like, but it’s important to know a little about the differences in order to make your content behave logically on a mobile device. Here’s a quick list of mobile finger interactions and how they are generally used.
All about the fingers
Not only does the experience of touch make interaction with the device more intimate, it also creates both new opportunities and new challenges for eLearning solutions. Many of these challenges should be offloaded to your eLearning creation software; look to your authoring tool to solve the problems of creating inputs that respect the conventions of mobile platforms. But you’ll also want to be aware of them, as some are unique to mobile, and others mimic or emulate functionality that was available on pre-touch interfaces.
The absence of Hover
The lack of a hover function is one of the biggest differences between mobile devices and PCs. In mouse-based input, hover provides feedback that the pointer is inside the area of a button or interactive object, but that the object is not yet selected. Most of the time we see a mouse hover as a highlighted area. Here’s a nice discussion of the absence of hover from Trent Walton. The basic point is that the absence of hover means your designs will need to adapt significantly for mobile. You won’t be able to rely on the learner rolling over faded images, hidden highlights, subtle hyperlinks, etc. in order to expose popups, tooltips and flyouts, or to alter the state of images, text and other elements without tapping them. In general, you’ll have to be much more explicit about creating visual cues for what is and is not ‘tappable’ on a given screen.
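As a rough illustration of adapting for the missing hover, here is a minimal TypeScript sketch that detects a touch device and swaps hover-dependent hints for always-visible affordances. The `hover-hint` and `tappable` class names are assumptions invented for this example, and the touch check is a common heuristic rather than a guarantee.

```ts
// Minimal sketch: make hover-dependent elements visibly tappable
// on touch devices. Class names here are assumptions, not a standard.
const isTouchDevice: boolean =
  'ontouchstart' in window || navigator.maxTouchPoints > 0;

if (isTouchDevice) {
  // Assumed convention: elements that reveal content on hover carry
  // a 'hover-hint' class; on touch we style them as obvious buttons.
  document.querySelectorAll<HTMLElement>('.hover-hint').forEach((el) => {
    el.classList.add('tappable');      // e.g. border, shadow, label
    el.setAttribute('role', 'button'); // expose as interactive
  });
}
```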
Tap
The finger tap on a mobile device is very similar to the click of a mouse on a desktop or laptop computer. As you create mLearning, it’s good to keep several things in mind regarding the tap/click interface. First, the physical finger is a much bigger visual factor than a mouse pointer. Fingers come in all shapes and sizes, and virtually all of them are a great deal larger than a pointer. In most cases the hand, finger and arm block the learner’s view of large portions of the screen during a tap event.
You’ll want to consider this limitation as you design for mobile devices. Precise tap targets may be much more difficult for learners to hit, so provide large areas to tap – and for question interactions, highlight the entire answer row when it is tapped, so learners can verify they selected the row they intended. You’ll also want to leave room for accidental selection without penalty, by allowing learners to choose an answer and then change that choice before submitting a final response.
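Here’s a minimal sketch of that row-selection pattern, assuming answer rows marked with an `answer-row` class and a `selected` style; both names are invented for this example.

```ts
// Sketch: the whole answer row is the tap target. Tapping highlights
// the row and checks its radio button; nothing is final until the
// learner presses Submit, so accidental taps can be corrected.
const rows = document.querySelectorAll<HTMLElement>('.answer-row');

rows.forEach((row) => {
  row.addEventListener('click', () => {
    rows.forEach((r) => r.classList.remove('selected'));
    row.classList.add('selected');
    const radio = row.querySelector<HTMLInputElement>('input[type="radio"]');
    if (radio) radio.checked = true;
  });
});
```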
Double Tap
The iOS Human Interface Guidelines define the double tap as the gesture for zooming in and out. iOS users are accustomed to zooming in and out with a double tap centered on the tap point.
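If your authoring tool doesn’t supply one, a double tap can be approximated by timing successive touches. This sketch uses an assumed 300 ms threshold and an assumed `diagram` element with a `zoomed` style:

```ts
// Sketch: rudimentary double-tap detection via timestamps.
const DOUBLE_TAP_MS = 300; // assumed threshold
let lastTap = 0;

const zoomTarget = document.getElementById('diagram');
if (zoomTarget) {
  zoomTarget.addEventListener('touchend', (e: TouchEvent) => {
    const now = Date.now();
    if (now - lastTap < DOUBLE_TAP_MS) {
      e.preventDefault();                    // try to suppress native zoom
      zoomTarget.classList.toggle('zoomed'); // assumed CSS transform
    }
    lastTap = now;
  });
}
```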
Hold
The hold interaction is unlike the kinds of interactions you may be familiar with from mouse-based operating systems. It is essentially a time-sensitive touch: the device notes that your finger is touching, and has been held in the same location for a period of time. The most recognizable use of this in iOS is to make the app icons (and therefore the apps) modifiable. It’s interesting to note that, to communicate this special state, Apple marks the icons with big red x’s and a constant back-and-forth wiggle. Hold is also often used as a sort of right click, bringing up a popover window with multiple choices. It’s worth noting that a hold on iOS places everything into the editable state, whereas on my Android device a hold only works on single icons – and only lasts while the icon / app is being actively ‘held’.
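A hold (long press) can be approximated with a timer that starts on touchstart and is cancelled if the finger lifts or moves. The 600 ms threshold, the `app-icon` element and the `editable` class are assumptions for this sketch:

```ts
// Sketch: press-and-hold detection with a cancellable timer.
const HOLD_MS = 600; // assumed threshold
let holdTimer: number | undefined;

const icon = document.getElementById('app-icon');
if (icon) {
  icon.addEventListener('touchstart', () => {
    holdTimer = window.setTimeout(() => {
      icon.classList.add('editable'); // e.g. start a wiggle animation
    }, HOLD_MS);
  });

  const cancelHold = () => window.clearTimeout(holdTimer);
  icon.addEventListener('touchend', cancelHold);
  icon.addEventListener('touchmove', cancelHold);
}
```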
Swipe
The swipe gesture is commonly associated with page turning. Learners brush a finger across the surface of the tablet or phone, generally in a horizontal or vertical motion. This is similar to a drag action with a mouse, though the experience of a swipe is one of the clearest examples of the visceral nature of direct surface contact. The page appears to follow the motion of the finger, and the gesture is extremely intuitive on a touch surface. By contrast, the same gesture is generally not intuitive when using a mouse on a desktop or laptop.
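A horizontal swipe can be detected by comparing where a touch starts and ends. The 50 px threshold and the `turnPage` helper are assumptions for this sketch:

```ts
// Sketch: minimal horizontal swipe detection for page turning.
const SWIPE_PX = 50; // assumed minimum travel
let startX = 0;

function turnPage(direction: 'next' | 'prev'): void {
  console.log(`turning to the ${direction} page`); // placeholder
}

document.addEventListener('touchstart', (e: TouchEvent) => {
  startX = e.touches[0].clientX;
});

document.addEventListener('touchend', (e: TouchEvent) => {
  const deltaX = e.changedTouches[0].clientX - startX;
  if (deltaX <= -SWIPE_PX) turnPage('next');     // swipe left
  else if (deltaX >= SWIPE_PX) turnPage('prev'); // swipe right
});
```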
Pinch & Spread
One of the coolest ways to get refined control over zoomed areas of the screen on a mobile device is the pinch or spread gesture. This two-finger gesture requires the user to pinch the fingers together or spread them apart to control the zoom level of an image or object on the device. This introduces a potential benefit: you can design eLearning elements that are viewable at much higher resolutions, enabling things like interactive detailed maps or charts that contain too much information to see on a single screen, but that become quite useful when navigated one small area at a time. Because this gestural manipulation feels much more like manipulating a real-world counterpart, the knowledge transfer may also be deeper and more lasting.
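Pinch and spread reduce to the changing distance between two touch points. This sketch scales an assumed `detail-map` element; the clamp values are arbitrary:

```ts
// Sketch: pinch/spread zoom by comparing two-finger distance.
const map = document.getElementById('detail-map');
let startDist = 0;

function distance(t: TouchList): number {
  const dx = t[0].clientX - t[1].clientX;
  const dy = t[0].clientY - t[1].clientY;
  return Math.hypot(dx, dy);
}

if (map) {
  map.addEventListener('touchstart', (e: TouchEvent) => {
    if (e.touches.length === 2) startDist = distance(e.touches);
  });

  map.addEventListener('touchmove', (e: TouchEvent) => {
    if (e.touches.length === 2 && startDist > 0) {
      e.preventDefault(); // keep the browser from zooming the whole page
      const scale = Math.min(4, Math.max(1, distance(e.touches) / startDist));
      map.style.transform = `scale(${scale})`;
    }
  });
}
```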
Pull / Drag
Another common convention in tablet and mobile touch interfaces is to reveal hidden content by pulling or dragging on a tab until the full tab is exposed. This gestural navigation may begin to replace eLearning mainstays like the rollover: pulling out a window with additional information is intuitive on a mobile device, and it has good parity with the rollover on conventional computers. That parity gives eLearning developers a way to design similar interfaces for both platforms – users simply roll over the area on a PC, and drag the tab open on a mobile device.
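Here’s a sketch of that dual-input idea: hover opens the panel on a PC, while a drag past an assumed 40 px threshold opens it on a touch device. The `info-tab` and `info-panel` ids and the `open` class are invented for this example:

```ts
// Sketch: one panel, two inputs — rollover on PCs, pull on touch.
const tab = document.getElementById('info-tab');
const panel = document.getElementById('info-panel');

if (tab && panel) {
  // PC: plain rollover.
  tab.addEventListener('mouseenter', () => panel.classList.add('open'));
  tab.addEventListener('mouseleave', () => panel.classList.remove('open'));

  // Touch: open once the finger has pulled far enough.
  let dragStartX = 0;
  tab.addEventListener('touchstart', (e: TouchEvent) => {
    dragStartX = e.touches[0].clientX;
  });
  tab.addEventListener('touchmove', (e: TouchEvent) => {
    const pulled = dragStartX - e.touches[0].clientX; // pulling leftward
    if (pulled > 40) panel.classList.add('open');     // assumed threshold
  });
}
```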
Tilt
Many of today’s mobile devices can also detect and react to the rotation and orientation of the device, so an application may be re-oriented to appear upright on the screen regardless of how the device is being held. This adds a dynamic: your mobile eLearning projects could simply display smaller in vertical orientations, or could be dynamically re-laid-out to display differently when the device is held vertically.
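At its simplest, reacting to orientation means re-checking the viewport’s aspect on rotation and swapping layouts. The layout class names in this sketch are assumptions:

```ts
// Sketch: swap layout classes when the device rotates.
function applyOrientation(): void {
  const portrait = window.innerHeight > window.innerWidth;
  document.body.classList.toggle('portrait-layout', portrait);
  document.body.classList.toggle('landscape-layout', !portrait);
}

window.addEventListener('orientationchange', applyOrientation);
window.addEventListener('resize', applyOrientation); // broader fallback
applyOrientation(); // set the initial state
```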
Real World Experience Differences
The most obvious difference between mobile devices and their desktop counterparts is that cell phones and tablets are not tethered to a fixed location. This provides a great opportunity to reach learners in convenient settings, and it also means you can get information, job aids and other content to people in the context of their regular work.
In fact, Just-in-Time eLearning (information the learner needs on site, in context and on demand) is one of the most popular forms of mobile learning developed today. It can include searchable indexes and databases that help people find the answer to a problem on site or in the field. It includes job aids, for example a brief video demonstrating a process that isn’t performed often but must always be done correctly. Checklists are another solid Just-in-Time solution. Why not provide a tick list to guide the steps of a procedure? You could even offer optional information about how to perform each step from within a mobile eLearning app.
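To make the checklist idea concrete, here’s one possible data shape for a Just-in-Time checklist in which each step can carry optional how-to detail; the field names and the lockout/tagout example are assumptions, not a standard:

```ts
// Sketch: a minimal data shape for a JIT checklist with optional
// per-step detail (text or a short demo video).
interface ChecklistStep {
  label: string;     // what to do
  detail?: string;   // optional how-to text
  videoUrl?: string; // optional short demo clip
  done: boolean;
}

interface Checklist {
  procedure: string;
  steps: ChecklistStep[];
}

const lockoutTagout: Checklist = {
  procedure: 'Lockout/Tagout',
  steps: [
    { label: 'Notify affected employees', done: false },
    {
      label: 'Shut down equipment',
      detail: 'Use the normal stop procedure for this machine.',
      done: false,
    },
    {
      label: 'Isolate energy sources',
      videoUrl: 'https://example.com/videos/isolate.mp4', // placeholder
      done: false,
    },
  ],
};
```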
Podcasts and vodcasts are another good Just-in-Time solution. You can record short episodes on given tasks and make them all available through search or links, so the learner can get the information they need when and where they need it. You’ll notice that all of these JIT solutions tend to be ways of bringing information to the trainee when and where it is most useful. You can frame them in differing contexts, but the crux is that you’re able to make the information available in the most desirable way – on the job, while the trainee is actually encountering the task you are training them for.
One additional way to make these job aids really sizzle is to map them to real-world objects with QR codes. You can create custom QR codes that link to the appropriate Just-in-Time eLearning, and affix stickers of those QR codes to the items in the workplace associated with demand for that lesson. The obvious application is to map a given machine on the factory floor to JIT eLearning, but you could use these in a variety of ways. You might, for example, put one near the water cooler for a module on Information Awareness. On my trip to the zoo this weekend I noticed that QR codes had been added to all of the animal signs, enabling Just-in-Time eLearning content, marketing and more related to the animal currently being viewed.
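The QR workflow itself is just a mapping from workplace items to lesson URLs; whatever generator you use then encodes those URLs onto printable stickers. The ids and URLs below are placeholders invented for this sketch:

```ts
// Sketch: map workplace items to their JIT lessons; each QR sticker
// encodes the URL returned for its item.
const jitLessons: Record<string, string> = {
  'press-04': 'https://example.com/jit/press-04-changeover',
  'water-cooler': 'https://example.com/jit/information-awareness',
};

function qrPayload(itemId: string): string | undefined {
  return jitLessons[itemId];
}

console.log(qrPayload('press-04'));
// => https://example.com/jit/press-04-changeover
```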
The eLearning company Kineo has been doing some very nice work creating resources for companies investigating the mLearning space. Recently they published a couple of guides to mLearning which are available online via the Kineo Website.
Over the next six weeks or so the Adobe eLearning Team is hosting a mind-blowing array of eSeminars all about HTML5 and mobile eLearning. I’ll be hosting a couple of those sessions; please feel welcome to learn more about the series and sign up for any of the sessions via the links below:
Best Practices for Creating HTML5 Courses using Adobe Captivate 5.5 and HTML5 Converter
April 17, 2012
Join Michael Hinze, Adobe Captivate Expert, as he shares best practices for creating HTML5 courses using Adobe Captivate 5.5 and the HTML5 Converter. He will show you how to create a course in Adobe Captivate 5.5 and publish it to HTML5 format using the HTML5 Converter, and he will also share some tips and tricks for creating HTML5 courses effectively.
TIME: 8AM US Pacific
URL: http://www.adobe.com/cfusion/event/index.cfm?event=detail&id=2010007&loc=en_us
Practical Mobile eLearning Today: Real Solutions for Creating mLearning for Your Organization Right Now
April 18, 2012
Join Chandranath Bhattacharyya, the engineering manager for HTML5 eLearning projects at Adobe, and Dr. Allen Partridge, Adobe eLearning Evangelist, for an introduction to the practical side of mobile eLearning development. We’ll discuss the UI differences that make eLearning content for mobile delivery different from PC deployments, and we’ll show you how to get from Adobe Captivate to iOS and other mobile platforms. Get down to the business of creating actual training content that can be delivered on cell phones, tablets and other mobile devices. We’ll give you an overview of the dominant types of mLearning content, along with some examples of how to create them. Finally, we’ll talk about some of the tools inherent in mobile devices that might be leveraged to provide new opportunities for your learning solutions.
TIME: 10 AM US Eastern Time (NOTE SPECIAL TIME)
URL: http://adobe.ly/IlLqN9
10 Key Requirements for HTML5 eLearning Authoring Tools
April 25, 2012
Join eLearning professional Dustin Tauer and Dr. Allen Partridge for an amazing eSeminar focused on HTML5 eLearning authoring. As the mobile revolution marches on, more and more Instructional Designers, Chief Learning Officers, Trainers and Developers are scratching their heads at the rapidly changing landscape. What should we look for in our eLearning tools? Do we need to migrate tens of thousands of courses? How will we know what the standards are? Dustin and Allen will tackle these issues and more in this live interactive session designed to clarify the most pressing and critical requirements for HTML5 authoring. Join them as they challenge the industry to provide the best tools for the job and begin to set the standard for eLearning tools that publish HTML5 content.
TIME: 1 PM US Eastern Time
URL: http://adobe.ly/HC1YMZ
Transitioning from eLearning to mLearning
May 9, 2012
Join Josh Cavalier, popularly known as Captain Captivate, along with Pooja Jaisingh and Vish, as they talk about how mLearning fits into the overall instructional strategy and how you can effectively use mobile devices for learning consumption and creation. They will also discuss how to build the mLearning content delivery ecosystem and the various delivery options available for mLearning.
TIME: 8 AM US Pacific Time
URL: http://adobe.ly/IfPvkF
Hi Allen, your analysis of the common touch events seems a little broader than what is covered in the current W3C HTML5 touch draft. I just posted on a similar topic, where I see GBI (Gesture Based Interface) as one tool for making functional mLearning that uses interface actions familiar to mobile users to save valuable screen real estate for content. It would also be cool to let simple voice commands trigger the appropriate events: next/back/repeat/replay. Cheers
Hi Allen, do you by any chance know when the HTML5 Converter will be able to handle widgets and animations? I know it doesn’t at the moment, which is why we haven’t used it yet. We have widgets in our e-Learnings and also use text animations, and we would love to use the converter for mLearning. Are there upgrades in the works?