Mobile learning for development: Ready to randomise?

Laurenz Langer, Niall Winters, Ruth Stewart

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

4 Citations (Scopus)


Driven by the demand for evidence of development effectiveness, the field of mobile learning for development (ML4D) has recently begun to adopt rigorous evaluation methods. Using the findings of an ongoing systematic review of ML4D interventions, this paper critically assesses the value proposition of rigorous impact evaluations in ML4D. While a drive towards more reliable evidence of mobile learning’s effectiveness as a development intervention is welcome, the maturity of the field, which continues to be characterised by pilot programmes rather than well-established and self-sustaining interventions, calls the utility of rigorous evaluation designs into question. The experiences of conducting rigorous evaluations of ML4D interventions have been mixed, and the paper concludes that in many cases the absence of an explicit programme theory undermines the effectiveness of carefully designed impact evaluations. Mixed-methods evaluations are presented as a more relevant evaluation approach in the context of ML4D.

Original language: English
Title of host publication: Mobile as Mainstream - Towards Future Challenges in Mobile Learning - 13th World Conference on Mobile and Contextual Learning, mLearn 2014, Proceedings
Editors: Marco Kalz, Marcus Specht, Yasemin Bayyurt
Publisher: Springer Verlag
Number of pages: 12
ISBN (Electronic): 9783319134154
Publication status: Published - 2014

Publication series

Name: Communications in Computer and Information Science
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937


Keywords

  • Developing-country education
  • Development effectiveness
  • Impact evaluation
  • ML4D
  • Mobile learning

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics


