I have no idea what the protocol is for naming versions of things. I imagine that, like me, everyone has a sense of what the stages are going to look like, and of when something truly fresh and new is going to happen. For me, version 4.0 of the SOLE Toolkit will incorporate what I am currently learning about assessment and ‘badges’, self-certification and team marking. But I’m not there yet; for now I am building on what I have learnt about student digital literacies, so I will settle for Version 3.5.
This version of the SOLE Toolkit, 3.5, remains a completely free, unprotected and macro-free Excel workbook with rich functionality to serve the learning designer. In version 3.0 I added more opportunities for the student to use the toolkit as an advance organiser, offering ways to record their engagement with their learning. It also added some ability to sequence learning so that students could better plan their learning, although I maintained that this was guidance only and should allow students to determine their own pathways for learning.
Version 3.5 has two significant enhancements. Firstly, it introduces a new dimension, providing a rich visualisation of the learning spaces and tools that students are to engage with in their learning. This provides an alternative, fine-grained view of students’ modes of engagement in their learning. It permits the designer to plan not only for a balance of learning engagement but also for a balance of environments and tools. This should allow designers to identify where ‘tool-boredom’ or ‘tool-weariness’ is a possible danger to learner motivation, and to ensure that a range of tools and environments allows students to develop based on their own learning preferences.
Secondly, it allows for a greater degree of estimation of staff workload, part of the original purpose of the SOLE Model and Toolkit project back in 2009. These faculty-time calculations for design and facilitation are based on the learning spaces and tools to be used. This function allows programme managers and administrators, as well as designers themselves, to calculate the amount of time they are likely to need to design materials and facilitate learning around those materials.
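By way of illustration only (the Toolkit itself does this inside the Excel workbook, and the per-tool figures below are invented for the sketch), a faculty-time estimate of this kind amounts to summing design time and facilitation time over the tools a design uses:

```python
# Hypothetical per-tool time estimates in hours; the real Toolkit's
# figures and categories live in the Excel workbook, not here.
TIME_PER_TOOL = {
    # tool: (design_hours, facilitation_hours_per_week)
    "discussion_forum": (2.0, 1.5),
    "recorded_lecture": (6.0, 0.5),
    "wiki": (3.0, 1.0),
}

def estimate_faculty_time(tools, weeks):
    """Up-front design hours, plus facilitation hours across the course."""
    design = sum(TIME_PER_TOOL[t][0] for t in tools)
    facilitation = sum(TIME_PER_TOOL[t][1] for t in tools) * weeks
    return design, facilitation

design, facilitation = estimate_faculty_time(
    ["discussion_forum", "recorded_lecture"], weeks=12
)
print(design, facilitation)  # 8.0 design hours, 24.0 facilitation hours
```

The point of the sketch is only that design effort is a one-off cost while facilitation scales with course length, which is why the two are worth estimating separately.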
Back in the late northern-hemisphere summer of 2013 I drafted a background paper on the differences between Educational Data Mining, Academic Analytics and Learning Analytics. Entitled ‘Adaptive Learning and Learning Analytics: a new design paradigm’, it was intended to ‘get everyone on the same page’, as many people at my University, from very different roles, responsibilities and perspectives, had something to say about ‘analytics’. Unfortunately for me, I then had nearly a year’s absence through ill-health, and I came back to an equally obfuscated landscape of debate and deliberation. So I opted to finish the paper.
I don’t claim to be an expert on learning analytics, but I do know something about learning design, about teaching on-line and about adapting learning delivery and contexts to suit different individual needs. The paper outlines some of the social implications of big data collection. It looks to find useful definitions for the various fields of enquiry concerned with collecting learner data and making something useful of it to enrich the learning process. It then suggests some of the challenges that such data collection involves (decontextualisation and privacy) and the opportunities it represents (self-directed learning and the SOLE Model). Finally, it explores the impact of learning analytics on learning design and suggests why we need to re-examine the granularity of our learning designs.
“The influences on the learner that lie beyond the control of the learning provider, employer or indeed the individual themselves, are extremely diverse. Behaviours in social media may not be reflected in work contexts, and patterns of learning in one discipline or field of experience may not be effective in another. The only possible solution to the fragmentation and intricacy of our identities is to have more, and more interconnected, data, and that poses a significant problem.
Privacy issues are likely to act as a natural brake on the innovation of learning analytics. Individuals may not feel that there is sufficient value to them personally in revealing significant information about themselves to data collectors outside the immediate learning experience, and that information may simply be inadequate for making effective adaptive decisions. Indeed, the value of the personal data associated with the emerging learning analytics platforms may soon see a two-tier pricing arrangement whereby a student pays a lower fee if they engage fully in the data-gathering process, providing the learning provider with social and personal data as well as their learning activity, and a higher fee if they wish to opt out of the ‘data immersion’.
However sophisticated the learning analytics platforms, algorithms and user interfaces become in the next few years, it is the fundamentals of the learning design process that will ensure that learning providers do not need to ‘re-tool’ every 12 months as technology advances, and that the optimum benefit for the learner is achieved. Much of the current commercial effort, informed by ‘big data’ and ‘every-click-counts’ models of Internet application development, is largely devoid of any educational understanding. There are rich veins of academic tradition and practice in anthropology, sociology and psychology, in particular, that can usefully inform enquiries into discourse analysis, social network analysis, motivation, empathy and sentiment study, predictive modelling and visualisation, and engagement and adaptive uses of semantic content (Siemens, 2012). It is scholarship- and research-informed learning design itself, grounded in meaningful pedagogical and andragogical theories of learning, that will ensure that technology solutions deliver significant and sustainable benefits.
To consciously misparaphrase the American satirist Tom Lehrer, learning analytics and adaptive learning platforms are ‘like sewers: you only get out of them what you put into them’.”
Siemens, G. (2012). Learning analytics: envisioning a research discipline and a domain of practice. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (pp. 4–8). New York, NY, USA: ACM. doi:10.1145/2330601.2330605
There is no such thing as blended learning. Or rather, there has never been anything except ‘blended’ learning. Of course we all know that; we’re just lazy with our language, and as Orwell (1) said, “…if thought corrupts language, language can also corrupt thought.” Maybe it’s worth thinking about the terminology we use.
I have no problem with a conversation about the right blend; indeed, I rather like the verb ‘blend’. It’s the noun use, ‘blended’ and ‘blending’, that I find problematic. Let’s stop talking about the ‘blended approach’ and describe instead our model of learning. Let’s agree on our underpinning theoretical structures (if you like that sort of thing), identify our context and that of our learners (culture, expectations, destinations, prior experience, infrastructure), and let’s describe our model.
What we have in the contemporary ‘blended’ debate is a healthy concern with what students do, and where, how and when they do it. Rather than teaching our one-hour lecture and our two-hour seminar and despatching students into the dark dusty stacks or the ‘short-term loan’ mêlée, we now seek to engineer the ‘blend’ of approaches we want them to take. The scrap for the library carrel and the scouring of the desks of the studious for the only copy of the ‘reference-only’ gem has now been replaced by a broader concern for the ‘design’ of the students’ learning. We blended twenty years ago and we blend today; only the context has changed. This is a good thing.
So why don’t we call it that? Why don’t we call it ‘our learning model’? Since there is so much pressure on universities to differentiate themselves, why don’t we seek to develop, articulate, refine and promote the Massey Learning Model, the Athabasca Learning Model, the Wisconsin Learning Model?
‘Blended’, like many terms in education, has been in vogue and now risks being taken for granted and misused. Alternative terminology also has its supporters: ‘mixed-mode’ and ‘hybrid’ are used synonymously. The most common conception of blended learning is a combination of face-to-face, real-time, physically present teaching and computer-mediated, essentially online, activity. The term has come to imply an articulated and integrated instructional strategy. ‘Blended’ is often used to imply something more than digital materials ‘supplementing’ face-to-face instruction; rather, it implies that each ‘mode’ can serve a student’s learning in different ways. In practice this might mean that a two-hour lecture and a two-hour seminar become a web-based lecture, a face-to-face seminar and several web-based activities, allowing more time for contributions, more time for voices to be heard.
The contemporary argument is often simply maths. In a class of 40 where one would hope to have a thoughtful 10–15 minute contribution from each student, a seminar would need to be roughly 7–10 hours long. Online, that same reflective and expressive opportunity is unbounded by class time.
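A back-of-envelope check of that arithmetic (a sketch of my own, not part of any toolkit), ignoring any change-over time between speakers:

```python
def seminar_hours(class_size, minutes_per_student):
    """Class time needed if every student contributes for the given minutes."""
    return class_size * minutes_per_student / 60

# 40 students at 10 to 15 minutes each:
low = seminar_hours(40, 10)   # roughly 6.7 hours
high = seminar_hours(40, 15)  # 10.0 hours
print(f"{low:.1f} to {high:.1f} hours of seminar time")
```

The exact figures matter less than the shape of the result: giving every voice a genuine hearing scales linearly with class size, which no timetable can absorb.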
There are many reasons to reconsider the reliance on face-to-face instruction.
Participation, the opportunity to contribute, is one. But there are also opportunities for content to be paused, reviewed, annotated, questioned, spliced and shared in ways that live, synchronous face-to-face contact cannot match. Media-rich course content (video and audio, interactive resources, formative assessments) serves to allow the student to choose not just when, but also where, to study. The ‘where’ question then also gives rise to the other popular motif amongst University leaders: mobile learning.
The reason it is so difficult to establish the right ‘blend’ is simply that the context of the learning determines the nature of the blend. The students’ context establishes what can and cannot be done in a specific mode, what time parameters exist, what technology restrictions apply, and what assessment evidence is ultimately required.
Perhaps the biggest argument in favour of a blended approach (20 years ago and today) is simply that it requires engagement. Managing to access content and activities, participate appropriately, and incrementally develop a portfolio of formative assessment towards a final summative goal requires self-management, discipline, at least some digital literacy today, and some motivation. Turning up and sitting in class is not hugely onerous (although arguably it demonstrates time-keeping).
So if you’re an institution considering the ‘Blend’, I’d like to offer a suggestion. Don’t. Instead, consider the nature of your context (past, present, future) and articulate the learning model around which your exemptions and exceptions will develop; articulate a learning model to rally staff to a shared concept of learning (believe me, ‘blended’ won’t excite them); and articulate a model that will make learners say, “I recognise that; that’s my concept of myself as a learner; I’ll go there.”
Take a diagnostic model (here’s one I prepared earlier…) and define your own unique model of learning (better still, invite me to come and work with you on it), and I guarantee you will be blending (verb), but you won’t have to try and sell the stillborn ‘blend’ (noun).
(1) George Orwell, ‘Politics and the English Language’ (1946)
The 27th Annual Distance Learning and Teaching Conference at Madison, Wisconsin, this August offered a diverse and varied programme, attended by some 900 distance educators from all sectors, from K-12 to professional education. My contribution was a half-day DiAL-e Workshop with Kevin Burden (University of Hull), attended by some 24 people. The workshop went relatively well but also gave us an insight into a variety of cultural differences in such settings. I was able to learn from this, and the 45-minute Information Session on the SOLE model the following day definitely had a better ‘buzz’. In addition, I contributed to a new format this year, a 5+10 Videoshare session, where participants had (supposedly!) produced a 5-minute video and then made themselves available to discuss it for 10 minutes.
All the sessions went well, but the SOLE model and toolkit seemed to attract some serious interest, and I hope to have the opportunity to go back to the States and work with colleagues on learning design projects in the future.