Implementation

Developing Part 3 of this project was one of the most rewarding stages of the process, but it also proved to be the most challenging.

On the one hand, I learned the importance of exploring all facets of the Implementation stage, an area I’d never studied or worked in before. As I worked on the course rollout plan, I developed an appreciation for the numerous moving parts involved in launching an online course, from staff allocation to marketing strategies and technical setup.

On the other hand, I found it extremely difficult to select an appropriate learning management system. My first problem was determining the requirements for a learning platform. Although the DLI checklists for pedagogical and technical considerations were an excellent guide, I wasn’t sure which criteria to prioritize. I ultimately decided that two factors mattered most: support for mobile learning and the capacity to host virtual classroom sessions, or, at the very least, integrations with applications that offer these capabilities (e.g., Teams, Zoom).

In a sea of commercial LMS options that all seemed to promote largely similar features, what made the difference was imagining myself in a context where I had only a small team and limited resources at my disposal. Once I looked into 360Learning, its relatively robust feature set and affordability sealed the deal.

Evaluation

While developing the evaluation strategy was more straightforward, it still came with its own set of challenges.

As a novice learning designer, I felt that the Kirkpatrick Model of evaluation was the most appropriate framework, since each level is clearly delineated and the process of putting it into practice is relatively straightforward. By focusing on just the first three levels of the model (Reaction, Learning, and Behavior), I could also ensure that the evaluation remained exclusively learner-centric.

One of the most significant issues I faced was figuring out how to measure the course’s impact on the broader issue of disinformation. I realized that attempting to evaluate the course’s effect on a global scale would be impractical. This led to the decision to focus primarily on individual learner outcomes, a choice that I believe will provide more actionable insights for course improvement.

Exploring the User Experience Design for Learning (UXDL) framework (also known as the honeycomb model) and incorporating it into the evaluation stage was also a key learning point for me. I found that it helped ensure that the course’s visual design and accessibility features were thoroughly assessed.

Looking back, though, I see areas for improvement. For instance, given more time and resources, I might incorporate additional qualitative data collection methods into the Level 1 (Reaction) analysis, such as in-depth interviews with a sample of learners. These interviews could provide deeper insights into the learners’ experiences and the course’s impact on their digital literacy skills.

Overall, the implementation and evaluation planning process was a valuable learning experience. It challenged me to think critically about every aspect of course delivery and evaluation, helping me to develop a more holistic understanding of digital learning design. While challenges remain, I feel I’d now be better equipped to tackle them should I eventually move forward with designing and developing the full version of this course.