November 11, 2014 § 1 Comment
There have been many, many times in human history when an upheaval of some kind changes everything, and people have to recalibrate what they know and how they do things: early agriculture, the printing press, religion, industrialisation, and the silicon chip have all posed new problems and opened up new opportunities for segments or the entirety of the human race at one time or another. During such periods, established wisdom and knowledge are of little use, and fresh maps are needed to chart the way through the unfamiliar landscapes of the new era.
The present age poses all sorts of new questions about where we are going and the best routes to take, and the Internet is not the most challenging of these when you consider climate change, nationalism, or global poverty. But it is nonetheless an unprecedented phenomenon and arguably a major element in all the other challenges: a means of communication that operates more or less without boundaries, radically more democratic than any previous form of mass communication, offering scope for human intercourse that is in equal measure thrilling and frightening. But it is still very new: the World Wide Web has been around for less than twenty-five years, and that is absolutely nothing on a human scale. We should recognise that we are still at the very outer fringes of understanding how the Internet might impact on the distribution of knowledge, power and democracy across the world.
We, adults that is, are certainly not yet in a position to pronounce with any great authority to the young on the significance or value of the online world. Young people are dynamic players on this new stage, and are creating their own means of incorporating it into their lives. Whilst no wiser than anyone else about the implications of their activities online, they are often highly adept at meeting their own needs in their own ways. Certainly as adept as many of their teachers, who are inevitably caught up in schools’ anxieties about the Internet.
The Internet’s actual use in schools is quite trivial: a kind of children’s encyclopaedia in which to find the most findable and least challenging knowledge. Educators should instead be grasping the Internet with both hands, and making it the focus of exploration, study and deep use in formal education. It is not straightforward: the particular problem of understanding what this new thing means in our lives, and learning how to manage that, is quite distinct from the familiar curriculum of formal education. There is no overarching or authorised knowledge of the Internet: it is knowable only in fragments, and in the partial understandings of different individuals and groups. And the solution should definitely not be to act as if we do know all about it by laying claim to expertise under the banner of something posing as authorised knowledge called Digital Literacy.
Rather, formal education should adopt a phenomenological orientation towards the Internet, and prioritise the collaborative exploration in classrooms of the different ways in which different people make sense of it, experience it, act in it, learn from it; and through those explorations, perhaps understand better how we might do those things better, more intelligently. Teachers and pupils, young and old, should bring their own perspectives and experiential knowledge to this exploration in equal measure, with equal respect. The goal has to be far more ambitious than learning how to be safe online: we should accord the Internet the central attention it merits as a defining aspect of our modern age, and an indispensable medium of all that we might consider educational. Above all, this should be a democratic endeavour, quite free from the deadening paternalism of anything like Digital Literacy instruction.
May 19, 2014 § 9 Comments
Please read the following very carefully. It is the opening Statement of Purpose to a new subject in the National Curriculum for England: computing. And it is positively bursting with goodness:
“A high-quality computing education equips pupils to understand and change the world through logical thinking and creativity, including by making links with mathematics, science, and design and technology. The core of computing is computer science, in which pupils are taught the principles of information and computation, and how digital systems work. Building on this knowledge and understanding, pupils are equipped to use information technology to create programs, systems and a range of content. Computing also ensures that pupils become digitally literate – able to use, and express themselves and develop their ideas through, information and communication technology – at a level suitable for the future workplace and as active participants in a digital world.”
Change the world! Or at least maybe find a job. Here are the keys to your digital future. Consider yourselves equipped.
Overblown rhetoric aside, this represents, in epistemological terms, what might be viewed as a very significant shift in traditional British thinking about education. In effect, it proposes that the rigour and discipline of computer programming are capable of providing children with all the things that were once thought to be afforded by a classical education: logical thinking, creativity, understanding, self-expression and empowerment. If Athens was once the intellectual cradle of civilisation, I guess it is now located in a cloud somewhere between Stanford and Seoul.
In the detail of thinking that follows this opening statement, these proposals seem quite attractive, and will certainly resonate strongly with at least a portion of the school population. So, it has to be correct to offer all young people the chance to learn how to “create programs, systems and a range of content” in appropriate and engaging ways. This could be creative, productive and educational fun, if done well (the prospects of actually being able to provide this high-quality computing education right across the system are somewhat less certain). For all the hyperbole of the opening sentence, the basic premise of promoting a child-friendly version of computer science is both daring and sound.
But please note: computer science is a discipline in its own right. And like any discipline, there are strict limits to its concerns; the inner logic of the thing. I cannot see why the inner logic of computer science is expected to encompass the notion of digital literacy – a claim that is offered as a final irresistible flourish in the statement of purpose, presumably to reassure us that whilst the baby that was ICT has indeed been thrown out, its bath-water remains:
“Computing also ensures that pupils become digitally literate – able to use, and express themselves and develop their ideas through, information and communication technology – at a level suitable for the future workplace and as active participants in a digital world.”
Computer science concerns the application of symbolic languages to the manufacture of mediated communications and representations. It is, quite evidently, a morally neutral activity which is concerned with making things work well. Digital literacy, on the other hand, is essentially a moral construct that concerns itself with guiding people to use digital media in socially desirable ways, so that they may contribute to the knowledge economy, and not disrupt the moral economy. There is a meeting point for the two, I acknowledge: you can’t be an effective knowledge worker if the code breaks down; but then neither can you send abusive messages on Twitter if the code breaks down. Just because digital literacy comes from the same wellspring as computing, it does not mean that they can be made to serve each other educationally.
This is a fundamental fallacy of false relations – like thinking that if you eat a lot of fish you will learn to swim.
Coming soon – part 2. What kind of digital literacy do we want, if any?
February 19, 2014 § Leave a comment
We recently hosted the first of our Breaking Boundaries seminars, titled Digital Inclusion for Digital Communities. Our speakers, Dr Jonathan Tummons, from Durham University’s School of Education, and James Richardson of Tinder Foundation, provided us with differing but complementary perspectives on notions of digital communities and digital inclusion. An audience from across departments helped to contribute to the interdisciplinary debate – just the sort of discussion we hoped would take place. Thank you to all who attended – both in person and online – and especially to James and Jonathan for such thoughtful presentations (available to view here).
What follows is a brief reflection on issues raised in the seminar, particularly in light of the overarching theme of the series: the use of technology to break down barriers to learning and participation in society. While approaching the theme from apparently different angles, there was some immediate common ground for our two speakers.
Jonathan is currently a co-investigator based in the UK on a three-year institutional ethnography: Medical Education in a Digital Age, which explores the issues that surround the implementation of a new medical education curriculum enacted across two locations in Canada, New Brunswick and Nova Scotia, three hundred miles apart. Jonathan described the simultaneous facilitation through technology of this specially designed university course, and the research activities of the team of digital ethnographers.
In his theoretically informed presentation, Jonathan touched on underlying institutional discourses of equality – the notion that technology could and would provide a parity of experience for students across the two sites. Yet Jonathan reflected on emerging research findings that students at the “satellite” site, who received streamed lectures, were less likely to ask questions and engage informally with lecturers, and were therefore engaging in a different way to those students who were experiencing face-to-face tuition. An audience member contributed some personal reflections after undertaking an online teacher training programme, noting that she felt the technology – despite idealized notions of a “global” classroom – actually represented a barrier to her sense of belonging to the student community, rather than facilitating engagement. We discussed the importance of our own discursive constructions of the communities we choose (or choose not) to join, and the learning that accompanies any new application of technology: competency with one form of technology does not necessarily equate to competency in another. An overarching conclusion, perhaps, was the important mediating role of the physical world, including technical or face-to-face support, and real social relations, in the virtual world.
Similar ideas were reflected in James’ presentation, in which he discussed the work of Tinder Foundation to support digital inclusion across the 2800 centres of the UK online network. There are currently 11 million people without basic digital literacy, and those who are digitally excluded are likely to also be socially excluded in a number of ways. While discourses might suggest that digital inclusion leads to greater social inclusion and can, therefore, break down barriers to participation in society, this picture is highly simplistic. “Online communities can be of obvious and immediate value, providing they give people the opportunity to learn, to share experiences and ideas, to build social capital and feelings of self-efficacy….but those who stand to benefit most from online communities are those who are least likely to use them,” said James. “Engagement is a huge issue,” he added, emphasising the importance once again of physical, real-world mediation and the complex skills required to interpret information online. In digital inclusion interventions, “contact must be sustained, tailored, responsive and, above all, human to have the desired effect.”
James called for a move beyond the familiar, straightforward questions about access to digital resources towards new understandings. How does digital inclusion increase social mobility, confidence or self-efficacy, for example? While these softer outcomes are considerably harder to measure, and less appealing in impact evaluations for funders and stakeholders, perhaps they represent some of the most important boundaries to address through digital inclusion practice. James argued that grassroots organizations and academia have much to learn from each other, with huge potential for collaboration to engage with such questions in the future.
Don’t forget the next seminar: OER, MOOCs and the promise of broadening access to education on Thursday 20th February, 5-6.30pm, at the Oxford Internet Institute. It promises to be a great event!
January 15, 2014 § Leave a comment
I work as Director of Learning at Epic, and part of my role is to explore the potential of new communications technologies to support people to learn at work. I’m particularly interested in the potential of mobile devices to support learning and this interest led me to study for a part-time Doctorate with the Department of Education, within the Learning and New Technologies Research Group. The focus of my research is on the potential of mobile devices to help adult learners in the workplace.
As someone interested in mobile devices for learning, I’ve obviously been following recent developments in mobile technology with interest and one of the more exciting advances has to be the advent of Google Glass and similar wearable devices. Google are now trialing their new technology enhanced spectacles, provoking plenty of speculation and debate. This is what Google have to say.
Some of the speculation centers around what people will look like wearing Google Glasses (will they feel cool enough?!) and at Epic, we’re reading Fashionable Technology: The Intersection of Design, Fashion, Science, and Technology with interest. While it’s certainly true that if people don’t feel confident and comfortable sporting wearable technology, it won’t take off, there are other more far-reaching concerns to consider.
It won’t be a huge surprise that I’m an enthusiast of all things techie, so I’m as excited about the potential of Google Glass as I am about the iWatch, gesture-based technology and the various measurement apps being developed by the quantified-self movement.
This year, Epic were fortunate enough to succeed in our application to trial a pair of Google Glasses, and I got the chance to try them out for myself. In terms of looking cool, I think I can safely say I failed: I caused some amusement walking around the workplace vigorously nodding my head to reset the view. It was exciting though, and within a few minutes I had browsed our website, looked someone up on Wikipedia and made a phone call to a colleague in New York, all without typing anything into a device. The view wasn’t too obstructive and the voice recognition worked well for simple commands, although it didn’t seem to get along too well with my accent for anything more complex.
Some of the potential activities I can envisage for future learners wearing Google Glass include on-the-job performance support for workers learning practical skills such as bricklaying, plumbing or even surgery, or a vocabulary reminder for people learning new languages, helping them cement their understanding in context. In a LinkedIn discussion in Epic’s learning technologies group, members suggested support for medical students, prompts for people giving presentations and even help for new skiers! And that’s just taking advantage of what Google Glass can add to your peripheral vision.
Google Glass technology also offers the potential to video record what you can see and to stream it to someone else. This could be hugely useful for learning. For example, apprentices could benefit from this when discussing performance challenges with mentors. This would be a unique opportunity to see through the eyes of someone who is either more or less skilled at performing a task.
But there are also causes for concern. Google Glass makes it possible for wearers to record, unobtrusively, whatever is in their field of vision. Tech analysts are thinking not just about the experiences of the people wearing the glasses, but also about the experiences of the people the Glass wearers are looking at. In a future Google Glass wearing society, it is possible that many of us won’t know when and where we are being recorded, who is storing those recordings and what they are doing with them.
It’s one thing to voluntarily decide to take part in recording your performance for the purpose of identifying areas for improvement – athletes do this all the time, as do executives honing their public speaking skills – but it’s quite different to be involuntarily recorded as you take your child to the park, or eat a quiet dinner with friends in a neighborhood restaurant.
While Google Glass is likely to be a great new support tool in the former case, it could also have serious implications for privacy in the latter case, as this article suggests.
If much more of what we do and say can be recorded and stored, it will give more weight to fleeting moments and we will find ourselves on public show at times when we would prefer to lower our guard and relax. Some people have already made the mistake of thinking that conversations on Twitter are like private conversations with friends, and have found themselves taken to court for expressing off-hand remarks in writing that might have been disregarded or forgotten had they been spoken in the pub. Because our online opinions are stored in the public domain and often remain there indefinitely for anyone to view, they are treated in a different way.
If private moments start to be routinely recorded and stored, then chatting to your friends in a cafe or bar could become more like having a written conversation on Twitter or Facebook, where anything you say can be saved, stored and tweeted around the world or even used in evidence against you.
If Google Glass takes off, it’s likely that etiquette will evolve around using it, in the same way that people are now asked to switch their phones off in the theatre or cinema. Agreements may also evolve about who owns the videos and who may video what.
Still, it’s a sobering thought. Many technologies have potential for harm as well as good, and while I still can’t help but be excited by the potential of Google Glass and keen to keep playing with our pair, I’m also concerned by the potential erosion of privacy.
When it comes to using this technology for training, designers of technology-based training and performance support, and especially executives in Learning and Development departments, will need to give careful thought to formulating policies about what is and isn’t appropriate and what will happen to the data they generate.
An earlier version of this blog appeared on the Epic blog in March 2013.
October 15, 2012 § Leave a comment
I recently enjoyed watching Sherry Turkle’s TED video on being ‘Connected But Alone’. This explores the idea, which also features in her most recent book (Alone Together: why we expect more from technology and less from each other), that technology can act as a tool of isolation and can actually reduce meaningful social interaction.
In this TED talk, Turkle issues a warning that people’s constant connection via digital technology (focusing particularly on the example of smart phones) is not a replacement for real conversation, arguing that bite-sized communiqués, however many one shares, do not add up to a deep, meaningful interaction. She goes further, suggesting that such constant connectivity can actually block real human connection, and offers the example of a meal table where parents ignore their children because they’re too busy checking their smart phones. The implication here is that technology is the cause of isolation and poor familial communication.
Turkle offers more examples and deepens her argument in a fascinating way by discussing the longer-term implications of this kind of technology use for our internal dialogues and self-analysis and our ability to deal with solitude. However, here, I want to discuss very briefly the main point of the video: that technology blocks and reduces real world interaction.
Taking up Turkle’s example of smart phones, I absolutely agree that they can be used as tools that inhibit face-to-face dialogue. I have failed numerous times to get a proper response from someone because they have been too engrossed in their phone. More often, I’m sorry to say, I have been guilty of being engrossed in my phone myself. However, Turkle expands on this and places her arguments within a discourse of moral panic about technology and, in doing so, assumes a causal relationship between technology and poor social interaction.
The image of parents ignoring their children at the dinner table is an excellent example of the morality she introduces. Here the smart phone is immediately portrayed as the cause of the breakdown of traditional family values: a child’s desire to reach out and communicate with the older generation is ignored because traditional modes of meaningful conversation have been replaced by mobile phone screens and an addiction to shallow, quick hits of controllable digital data. The image is so emotive that the technology is immediately cast as a causal villain and an instant condemnation is demanded.
However, what goes unsaid is the fact that parents have been finding ways to ignore their children long before the invention of smart phones. If not newspapers, books or even 1000-yard stares, it’s been cultural rules relating to children being ‘seen but not heard’. This intergenerational conflict is wonderfully described by Waugh (in Brideshead Revisited) where the dinner table becomes Charles and his father’s battleground with the printed word being the weapon of choice.
In fact, people have been finding ways of avoiding social interaction for years in a number of contexts that extend far beyond familial interaction: on the bus, the train, the tube, the street, at work, the gym, the pub and so on. It’s true that modern technology is an excellent tool for doing this, but it is not, and historically has not been, the only one. It is part of a long list of, to modify the meaning of Foucault’s term, technologies of power, which have been used to control social interactions – reading, writing, whistling, praying, blankly staring, or simply frowning have all been used as methods of excluding external intrusion. Perhaps, then, rather than technology causing the breakdown of social interaction, its use as a tool to socially isolate the user from what’s going on around him/her should be thought of as an expression of a basic human need – a need, at times, to have a break from the people we are with.
The roles that modern digital technologies and social media play in our lives are highly complex and far more nuanced than the idea that they cause social breakdown (and far more nuanced than my light-hearted idea that they are an expression of a basic human need to occasionally socially exclude – although that may be one important role that they play). I think the place of technologies in our lives is fascinating and should be thoroughly researched and discussed. However, I’m inclined to think that moralizing about technology’s evil effects, and reducing it all to cause and effect, does more to hinder this research agenda than progress it.
July 11, 2012 § Leave a comment
The arrival of the Raspberry Pi has been heralded by some as ‘the new BBC Micro’ in terms of its potential to revolutionise computing in schools. Whereas the BBC Micro was a game-changer by being made available in every school, the Pi aims to be available for every child to take home. This is mostly due to its cost, at only around £25 for a fully-functioning computer, but also because each one is about the size of a credit card (excluding keyboard, mouse and monitor — the Pi is designed to connect to a family TV). It also comes with several child-friendly programming environments installed, such as Scratch, to address the concern that computer programming is disappearing from schools (unlike the BBC Micro, where programming in BASIC was an integral part of using it).
With this in mind, the National Archive of Educational Computing have released a rather timely report on the success and long-term impact of the original BBC Micro and the Computer Literacy Project, including lessons learned and key recommendations for the success of similar future projects. These include the importance of supporting learning outside the classroom, and in reaching the home as well as the school, so it will be interesting to see how well-placed the Pi is to achieve this. The report is well worth a read, and is freely available from Nesta.
I will now be spending the rest of my day reminiscing nostalgically about the BBC Micro and showing my age terribly… kids of today will never understand the joys of spending an hour loading a game from a cassette tape…
April 4, 2012 § 1 Comment
‘Information is much more portable in the modern world than it used to be. So are people….There are three things which have revolutionized academic life in the last twenty years…: jet travel, direct-dialing telephones and the Xerox machine…As long as you have access to a telephone, a Xerox machine, and a conference grant fund, you are ok, you are plugged into the only university that really matters – the global campus.’
The professor had a point, but he overemphasized the technologies of the 1980s. Two decades after his words were written, they have been superseded by e-mail, digital documents, Web sites, blogs, teleconferencing, Skype, and smartphones. And two centuries before they were written, the technologies of the day – the printed book and the postal service – had already made information and people portable. The result was the same: Lodge’s global campus, a public sphere, or, as it was called in the 17th and 18th centuries, the Republic of Letters.
Any 21st century reader who dips into intellectual history can’t help but be impressed by the blogosphere of the 18th. No sooner did a book appear than it would sell out, get reprinted, get translated into half a dozen languages, and spawn a flurry of commentary in additional books. Thinkers like Locke and Newton exchanged tens of thousands of letters; Voltaire alone wrote more than eighteen thousand, which now fill fifteen volumes. Of course, this colloquy unfolded on a scale that by today’s standards was glacial – weeks, sometimes even months – but it was rapid enough that ideas could be criticized, amalgamated, refined and brought to the attention of people in power.
The jet airplane is the only technology of Lodge’s small world of 1988 that has not been made obsolete by the Internet. This also reminds us that sometimes there is no substitute for face-to-face communication. Airplanes can bring people together, but people who live in a city are already together, so cities have long been crucibles of ideas. Given enough time and purveyors, a marketplace of ideas can not only disseminate ideas but change their composition. No one is smart enough to figure out anything worthwhile from scratch. As Newton conceded in a letter to a fellow scientist, “If I have seen further it is by standing on the shoulders of giants.” The human mind is adept at packaging a complicated idea into a chunk, combining it with other ideas into a more complex assembly, packaging that assembly into a still bigger contrivance, combining it with still other ideas, and so on. But to do so it needs a steady supply of plug-ins and subassemblies, which can come only from a network of other minds.
The networked mind is the mindset we all require in the 21st century. Given the proliferation of Web-based technologies such as blogs, wikis and social networking tools in our daily lives, these technologies have become a cause for praise, scorn and worry alike. What would the web become, and what would be the cultural ramifications of its pervasive use? Were we indeed headed for the vision of Lodge’s small world, or was the web merely a littered cyberspace of pornography and bad design (Levinson, 2003)? I have heard many people articulate their technological anxieties, describing how they are “behind” the curve. My response is always, “Aren’t we all?” In this century, chance favors the networked mind; so let’s take the opportunity to continually remain students ourselves, testing and sharing best practices for new forms of engagement.