Digital Humanities & Higher Education: What can DH do for our students? (Day of DH 2017)

As the international Day of Digital Humanities has come around again (all too quickly!), I wanted to mark the occasion with another blog post, to reflect upon my own engagement with DH one year on.

I’m in the USA at the moment, for the Society of French Historical Studies annual conference (so forgive typos etc. attributable to a touch of jet lag!), so I’m not doing much towards my usual Digital Humanities activities today. On Saturday, however, I’ll be showing them off in all their technicolour glory (!), with enormous thanks to Tom and the rest of my tech-team on the Europeana-funded Visualising Voice, who worked hard as we got this project presentation-ready. So instead of blogging the mundane powerpoints, the checking of my paper and a sneaky trip to the National Museum of Women in the Arts, I wanted to use this world-wide event as a long-overdue opportunity to put together some thoughts I’ve had for a while on DH, HE and the tech sector.

In my 2016 post, I spent quite a lot of time exploring the perennial question mark which hovers over the term “Digital Humanities”. Since then, I’ve read quite a lot of other posts, as well as books and articles about how to “define” DH, by highly reputed scholars working in the field (including Melissa Terras, Matthew Kirschenbaum, David Golumbia and Dan Cohen). The problematic question is also the raison d’être of the website www.whatisdigitalhumanities.com, which offers a different definition of the field with each page refresh. The only unanimous verdict I can derive from my reading is that DH is ultimately undefinable (or to be defined, as Lou Burnard puts it, “with reluctance”). Rather than seeing the polymorphous nature of DH as a concern, as I did, to a certain extent, last year, I’ve now come to see this as a huge benefit of the field.

To me, now, DH is a world where nothing is closed off to us, where any new way of approaching texts, histories and contexts can be realised. DH allows me to think in three, or even four dimensions. Rather than being concerned with where the boundaries defining DH lie, I’ve begun to explore and challenge the boundaries that distinguish DH from Technology Enhanced Learning (TEL) and from the tech industry. This has come about, mainly, through moving to Birmingham, where I’ve become quite involved in groups and community projects which use tech to answer questions and address key issues in society and in enterprise. It has also come about through working with a tech professional on my own project: Visualising Voice (discussed in this post).

While I came at the Visualising Voice project from the perspective of a scholar working in the humanities, my developer team came at it from a non-specialist perspective. That, for us, was win-win. While I think there’s work to be done in bringing our approaches closer together (e.g. getting the developers using TEI-XML to handle poetry) and getting me more comfortable with using GitHub and with pushing edits to the site myself, the collaboration has had huge benefits for both sides. I’ve also come under fire a few times, from the developers, for using DH standards which they deem outdated or superfluous. This tells me that, as Digital Humanists, we shouldn’t rest on our laurels – keeping up with what’s going on in the tech world is essential, lest we and our practices become out of date and out of step with the industry that can support us the most.

The importance of relating to and keeping up with the tech industry was really brought home to me a few weeks back, when I attended the Big Cat Breakfast, held at Birmingham City University. This was a networking event much like Tech Wednesday, and some of the other tech-related events I’ve been to in Birmingham, the only difference being that this one started at 7am and provided coffee and croissants instead of the usual pizza and beer (to me, this was an improvement… as an aside: while I won’t say no to free pizza or beer, I wonder whether the fodder on offer at these kinds of events reinforces gender stereotypes about who goes to tech events – or who drinks beer??). The main part of this pre-work networking event was a talk by Andy Street, former Chief Executive of John Lewis, who is one of the candidates in the upcoming mayoral election in the West Midlands.

Street’s talk plotted how John Lewis came to be at the forefront of online shopping innovation, discussing the company’s huge increases in investment in technology between 2000 and 2015. Drawing on his experience in management, and in leading technological change, Street claimed that Birmingham – AKA the Silicon Canal – was set to be at the forefront of the third industrial revolution – the revolution of web- and mobile-technology. From my experiences of living in Birmingham so far, I’m inclined to agree. Certainly, if we invest in young people, to ensure that they have the right skill-set to thrive in the entrepreneurial and economic climate of the future, then Birmingham could well be a world leader in this revolution. However, there were a number of questions from the audience about where existing institutions such as the NHS and the HE sector fit into this: Street was on the ball (it was 7.30am, after all), giving creative suggestions as to areas in which the NHS might shine, and discussing what Birmingham was already doing to support individuals in developing tech skills within the FE sector.

However, as the session came to a close, I was still left with some uncertainty about how the University sector fits into this model for creative technological revolution. I wasn’t the only one with questions about the relationship between higher education and the tech industry: a member of the audience, from a tech / design company herself, raised the issue of graduate retention in Birmingham, asking what we can do to retain more graduates from the four fantastic universities the city boasts. Undoubtedly, Birmingham has the potential to rival London for graduate recruitment and salaries, and it makes economic and social sense to keep home-grown graduates in the city. I suggest a possible solution to this in the form of a partnership between some of the tech startups / organisations in the city and its higher education institutions.

But what has all of this got to do with Digital Humanities? Returning to perennial debates over the amorphous nature of the field, I suggest that getting DH-based research on our students’ radar, and involving undergraduates in discussions about tech and research, is a key way to empower students of history, music, the arts, languages, literatures etc. with technology. If we, as academics, work more closely with the tech industry, we’ll forge valuable links which will serve our students well as they enter the job market. Such a partnership might not provide undergraduates in humanities-based subjects with the coding skills necessary to thrive as tech professionals, but it will open doors, help to forge links with the tech sector and help to break down the still-prevalent tensions between computer sciences and the arts. Universities need to invest in tech in order to invest in people, fostering the development of forward-thinking, tech-savvy graduates who are equipped to join the workforce as we welcome in the tech revolution. I believe that Digital Humanities, and digitally-competent humanists, have a key role to play in ensuring the social, economic and international success of that revolution within our own community.

Productivity, Connectivity and Personal Development

Although I’m a researcher, and I’ve loosely deemed this an ‘academic’ blog, you may notice that I don’t typically share much actual research here. There are a few reasons for the lack of serious, scholarly material on my blog. The first reason is time. A research-based blog post needs to be backed up by detailed reading and proper referencing; while I do quote from time-to-time, I like to keep my blog posts reflection based, so that I can devote the time for citations and detailed analysis to my research output (which also includes blog posts for the Baudelaire Song Project – those are arguably a little more scholarly than the ones on my own site). The second reason that my blog is not full of detailed research and citations is that, if you like that kind of reading, I can direct you to my more scholarly writings via my Academia.edu page (don’t hate me – whatever people say about Academia.edu, I like to have my little corner of the internet for sharing papers and networking!). The third and most important reason why I don’t produce properly scholarly output to share on my blog comes back, once again, to time: I write these posts in very short bursts, between 6am and 7am, using The Most Dangerous Writing App.

Lots of academics have their own personal site or blog; many of them are wonderful, showcasing their ideas, exciting research and relevant materials. Plenty of them are opinion-based, rather like mine, combining outlines of research and ideas with thoughts about the internet. Another sub-section of academic blogs lies completely abandoned, a testimony to good ideas and time pressures – this was more or less the state that mine was in until the start of 2017, when I read a book called The Miracle Morning by Hal Elrod.

This post is not about research, but about personal and professional development. I’ve always had mixed feelings about the self-improvement industry. I was brought up to believe that self-help books were silly, indulgent and founded on a load of rubbish. I realise, here, that I’m conflating quite a few different genres, but that reflects the way I thought about these kinds of books – whether they lurked in the psychology section, the self-improvement section, the spirituality section or the business and management section didn’t really matter, my mind was closed to learning about personal development. Then, back in 2011 or 2012, a few things changed.

First of all, I started going out with someone who had family working in the self-development industry (in a relatively high-profile way) and who loved reading those kinds of books. I gave personal development a chance, flicking through favourites such as Stephen Covey’s The Seven Habits of Highly Effective People and Dale Carnegie’s How to Win Friends and Influence People, and suddenly I saw a different side to self-improvement. I should admit, here, that I still have reservations about these kinds of books – many of them are badly written, or expressed in overly positive North-American prose which slightly puts my nose out of joint, but all in all, adding personal development books to my personal reading list has been a really positive step in 2017.

It was my partner who read The Miracle Morning first, as an audiobook. He started getting up at 6am and then at 5am, fuelled by enthusiasm for the book and for the effects of this structured, early start. So I downloaded the Kindle version myself, and had a read. I hated the prose, and I got irritated with all the excessively drawn-out personal stories, which attempt to play on your emotions, but reading between the lines, I could see the benefits. So the next day I set my alarm for 6am, and I’ve not looked back.

The first day of the “Miracle Morning” was easy – I was excited to get out of bed and get on with my day. I vaguely followed the plan of water (Berocca), exercise (a few squats), meditation (getting frustrated because the Headspace app didn’t work for me) and writing (blogging, using The Most Dangerous Writing App). While the exercise I did was minimal, and some weeks later, I still haven’t got behind the meditation, the early start still made me feel more alert and motivated than the usual 45 minutes of snoozing and the subsequent rush which typically characterised my mornings. I never took it any further than 6am as I’m convinced that I need a decent seven hours’ sleep at least. While my partner maintains that you can train yourself to emulate a Margaret Thatcher-style routine, I’m sure that normal human beings require sleep as well as positive habits to succeed!

Why am I explaining all of this on my ‘academic’ blog? Well, first of all, I want to sing the praises of The Miracle Morning – I’ll be honest, I hate the prose (have I already mentioned that?!) and find some of the ideas in it a bit cheesy – “journalling, scribing and positive affirmations” are all a bit too ‘self-help’ for me, but the basic premise of getting up early, starting the day in a healthy, positive way and getting things done – either for work, for personal growth or even for pleasure (!) – is really effective.

In addition to skim-reading The Miracle Morning, I’ve been listening to an audiobook, called Deep Work, on my way into the office. Written by Cal Newport, himself an academic with a background in business, Deep Work is premised on the idea that we need a good stretch of time to focus deeply on a piece of work in order to be productive. This means allowing ourselves space to work without being distracted by appointments, meetings and, of course, social media. I’ve long thought that the internet is damaging the concentration of adults and is potentially having an even more significant effect on the way young people focus and concentrate. Research suggests that it’s more a lack of willpower NOT to touch phones or Google things than an inherent inability to concentrate as a direct product of gadgets and social media, but either way, I know that the “always connected” lifestyle is good for me in some ways, but detrimental to my focus in others. Since listening to Deep Work, I have managed to put some healthier habits in place to enable me to focus more and get more from my work time… so far it seems to be paying off.

I’ve always enjoyed engaging with personal and professional development programmes at work, hence choosing to learn how to be a better teacher and supervisor in HE, but reading some personal development books has been an important first step in making me a better researcher. Although I’m with Cal Newport, in finding disengaging with the internet and with social media very helpful as I seek to develop a more positive, clear focus, I also have to celebrate the internet for all the amazing research, productivity and personal development information and tools it provides (many of them for free), so my task over the past few weeks has been about finding balance – between work and life, between connectivity and disconnectivity, between productivity and procrastination. I’m not there yet, but focussing on personal development has certainly helped kick-start the process. Finally, apologies for the typos – I wrote this blog post in twenty minutes and if I stop typing, The Most Dangerous Writing App will eat my words, and if that’s not motivation to sit down and write, I don’t know what is…

 

Digifest 2017: Lessons in technology-enhanced learning

Earlier this week, I took a bit of time away from research to attend Digifest, a two-day conference run by Jisc, the digital education solutions provider. I wasn’t quite sure what to expect from the event, but when I discovered that it was taking place in my hometown of Birmingham, I decided it was worth investigating.

Although I was only able to stay for the first day, on Tuesday, I had a really positive experience of the event. It was a typical conference-style set-up with stalls and stands run by start-ups and educational providers in the main hall, plenaries and large-scale talks in a lecture theatre, and smaller workshops / breakout sessions in communal spaces or meeting rooms. The variety of sessions on offer was impressive – both in terms of the content and the format in which they were delivered. There was an inspiring talk by presenters from Oxford Brookes University and the University of Ulster on what teaching excellence looks like in a technological age, a useful workshop session on getting undergraduates engaged with digital archival work, and the day was rounded off with a debate about whether technology was changing the learning process in HE (answer: yes and no, but mainly yes – I’ll discuss this more in a future post!).

The programme for the event looked impressive, but what was really pleasing was the amount that I took away from presentations that I initially wasn’t that interested in, or that I went to by accident. Perhaps one of the most inspiring sessions was led by a team of lecturers and learning practitioners from Forth Valley College, in Falkirk, Scotland. They were talking about some of the tools they had used to get students in FE engaged in learning, including smartphone apps such as Aurasma and the online toolkit Xerte, developed at The University of Nottingham. Although I work with digital methodologies in my research, and have been devoting a lot of my pedagogical reading to exploring technology-enhanced learning in Higher Education, I’d never come across either of these tools. During this session, it really hit home to me how much we, as facilitators in ‘traditional’ universities, can take away from the teaching practices of FE colleges.

 

What can HE tutors learn from creative teaching in FE colleges?

At Forth Valley College, one of the tutors on a vocational course in heating engineering had used the app Aurasma to overcome some of the health and safety issues associated with teaching in a space with lots of electrical and mechanical objects, which students might not be trained to use safely. Aurasma allows users to hover their smartphone over certain images or objects in a room; these images and objects are linked to videos or pictures, which appear when scanned in the app. In an FE setting, vocational engineering students could use their phones to scan electrical circuit boxes; a video, created by the tutor or by other students would then appear, showing them what was inside. The app allows students to explore and learn about the wires and components inside the circuit box, without having to open it up and expose themselves to dangerous electrical configurations which they are not yet trained to handle safely.

Of course, such health and safety concerns are not so pressing when you’re teaching Modern Languages in a university context. However, perhaps because it was so far removed from my own teaching experiences, the innovative ways of delivering vocational training I saw at Digifest did push me to think creatively about how we could appropriate the digital tools used in FE in my own environment. I realised that this kind of tool could revolutionise plenary sessions. What if, instead of rounding off a series of lectures and seminars with a stand-and-deliver plenary session, we used an augmented reality timeline to give an overview of the course? My examples here are drawn from poetry, but it would perhaps work even better with social / political topics. So, what if we start with a series of physical pictures displayed along the walls of a lecture theatre, then ask students to record a short video on a particular topic – a poem, a political event – making sure that each individual or group has a different theme? The students could then upload their video by a set deadline: it would then be easy to generate a QR code to attach to the physical pictures, so that each one could be scanned by a smartphone and a video would pop up. I’m not sure whether I see a need for getting to grips with Aurasma, as QR codes work so simply, but, in principle, I see this as a tool which could totally revolutionise my own teaching.
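As a rough sketch of how simple the QR-code side of this could be (all the topic names, the base URL and the file layout below are invented for illustration, and the QR generation assumes the third-party qrcode package, installable with pip install "qrcode[pil]"):

```python
# Hypothetical sketch of the QR-code plenary workflow described above:
# map each student topic to the URL of its uploaded video, then produce
# one printable QR code image per picture on the lecture-theatre wall.
import os


def slug(topic):
    """Turn a topic title into a filename-friendly slug."""
    return topic.lower().replace(" ", "-")


def video_urls(base_url, topics):
    """Map each topic to the URL where its student video was uploaded."""
    return {t: f"{base_url}/{slug(t)}.mp4" for t in topics}


def make_qr_codes(urls, outdir="qr-codes"):
    """Save one QR code image per topic (needs the third-party qrcode package)."""
    import qrcode  # imported lazily: the URL mapping above works without it
    os.makedirs(outdir, exist_ok=True)
    for topic, url in urls.items():
        qrcode.make(url).save(os.path.join(outdir, f"{slug(topic)}.png"))


topics = ["Spleen et Ideal", "The Paris Commune"]
urls = video_urls("https://example.edu/plenary-videos", topics)
```

Each printed picture would then carry the QR code for its topic, and any smartphone camera could resolve it straight to the relevant student video, with no dedicated app to install.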

The question of whether to use designated apps such as Aurasma, versus the trusty QR code raises another important question over the implementation of technology in the classroom, particularly when students are being asked to use their own personal devices for educational purposes. After all, my own smartphone doesn’t have enormous amounts of storage, and I’m loath to take up precious space with extraneous apps; I’m sure that your average student (even if they have a better phone than I do!) would rather keep their storage for YouTube, Spotify and Snapchat than clog it up with pedagogical tools. I get that, so we need to make sure that we’re using the apps and programs that they engage with already: this saves everyone time, as students don’t have to install and get to grips with yet another new pedagogical tool.

 

How to cater for everyone’s needs in HE?

One of the major strengths of Jisc’s Digifest for me was the open-minded approach that the facilitators and presenters took to the implementation of digital tools in the classroom. There was a huge awareness of the pitfalls of excessive or indiscriminate use of technology enhanced learning. It was accepted that using digital tools in the classroom does not automatically help students to learn more, or to learn better. The presenters understood that not all tutors and lecturers want to have more technology in their teaching, and the first day of the conference was so much better for that. Perhaps the most important thing for me was that the event always kept in mind what students want and what students need (as opposed to what curriculum designers, module leads and ed tech companies think they want). I’ve heard a lot of people criticise higher education (along with many other sectors) for being run by old white men. I personally have no problem with old white men in top positions in HE (of course, I want non-white people and definitely women to be running things too, with everyone as equals, but let’s not malign old white men – they can’t help falling into this category!), as long as they’re where they are because they manage HE institutions well, as long as there are other groups represented in the sector and as long as they’re listening to what students want, rather than pushing for what they think students want. Digifest was a good opportunity to remind tutors, lecturers and especially course designers to talk to their students about what they want from their courses.

Regardless of age and level of tech engagement, I think it’s very hard to know what our students do want. I may be a comparatively “young person” to be working in HE, and like the vast majority of Modern Language students, I also happen to be a woman, which might make me more in tune with students’ interests than our poor, maligned old white man. That said, the start of my RA post came exactly a decade after beginning my own undergraduate studies, and a lot has changed, especially where tech is concerned (I find it hard to imagine a student night out without carrying round a huge 5 megapixel digital camera!). In any case, university tutors and lecturers are a strange breed. We love our subject enough to spend years doing postgraduate study (mostly in our early twenties, when our contemporaries were enjoying freedom from study, and their first proper pay packets) – most of our students probably don’t want to follow our shining example, and that’s just fine – each to their own. But if we don’t know what students want, we definitely need to keep asking them – and not just by getting them to fill in evaluation forms and NSS surveys. We need to find out what they want by talking to them: I’d really like to see students more involved in curriculum design, perhaps with select representatives from the student body serving as digital ambassadors, keeping us – the old men designing and delivering courses, the young women leading and facilitating learning, and all the teachers, lecturers and facilitators in between – up-to-speed with how undergrads want to use tech to enhance their own learning. How would our practice change if we had tech ambassadors from the student body, who could discuss with their peers and enter into a meaningful dialogue with their tutors about how technology could enhance their learning?

Modern Languages: teaching, learning and inspiring in the face of a ‘crisis’ in the humanities

As part of my training in teaching and learning in Higher Education, I had to evaluate a learning theory. Questions of identity as scholars working in Modern Languages are at the forefront of my mind, as I think about my own academic practice, so I decided to look at Alison Phipps and Mike Gonzalez’s Modern Languages: Learning and Teaching in an Intercultural Field (2004). Here are my thoughts on the book and the apparent ‘crisis’ in Modern Languages.

Phipps, A. & Gonzalez, M., (2004), Modern Languages: Learning and Teaching in an Intercultural Field. London: SAGE.

Modern Languages: Learning and Teaching in an Intercultural Field addresses some of the key issues surrounding the nature of Modern Languages as a discipline, focussing in particular on the implications of these issues for teaching and learning in a Higher Education context.

The co-authored volume by Alison Phipps and Mike Gonzalez was first published in 2004 and reprinted in 2008. Having first appeared some thirteen years ago, around the beginning of my own undergraduate studies in this subject area, the study feels rather outmoded due to a range of factors, including shifts in the nature of Higher Education and changes in the discipline of Modern Languages (brought about, at least in part, in response to the perceived ‘crisis’ in Modern Languages and in the humanities, which is a central preoccupation of this study). While we could not, realistically, expect such a study to be revised to respond to current socio-political concerns in the West which have rocked Modern Languages departments – in particular Brexit, but also the conditions in Trump’s America – I feel that Phipps and Gonzalez’s consideration of learning and teaching in Modern Languages could take greater account of how social and political shifts and the rise of populist ideas impact upon the transnational and multicultural underpinnings of the subject.

In the first chapter, the authors discuss the crisis facing Modern Languages, which derives both from inherent complexities of the discipline and from the social and political context in which Modern Languages are being taught and studied. I have always argued that one of the major advantages of Modern Languages is that it is more of a mode than a discipline, allowing students and academics access to a range of fields from political and social sciences to literature, art history and music. Phipps and Gonzalez, however, present the open-ended nature of Modern Languages as one of the challenges of the discipline, stating:

It could be argued that it [Modern Languages] is not a discipline at all, or at least that it did not [originally] have cohesion or a set of shared perceptions until the creation of a strategic alliance of individual language disciplines mobilising as a united body in the face of a crisis. (Phipps and Gonzalez 2004, 4)

The first part of Phipps and Gonzalez’s argument chimes in with my own sense that Modern Languages is not a discipline in the same way as History or Mathematics. However, their reasons for reaching this conclusion diverge from my own in that, while I see the lack of clear boundaries defining Modern Languages as a strength of the field, Phipps and Gonzalez contend that the open-ended nature of the subject area is a flaw which has made it particularly susceptible to the effects of a generalised crisis in the humanities.

Throughout the study, Phipps and Gonzalez criticise the increasing sense that Modern Languages are a skill, suggesting that part of the crisis in the field comes from a misunderstanding of the nature of Modern Language degrees and how they differ in form and content from Languages for All programmes. Although they are typically housed within Modern Languages departments, LfA courses are designed to give students and researchers in other disciplines necessary language skills and add value to their own degrees. But, as Phipps and Gonzalez argue, Modern Languages as a degree programme offers far more than spoken fluency and skills in spoken and written production. I chose to examine Phipps and Gonzalez’s book, as I am particularly interested in the inter-cultural and interdisciplinary aspects of teaching and research in Modern Languages. I am convinced that the open-ended nature of Modern Languages as an academic field invites research which touches on other disciplines, and works in tandem with other wide-ranging academic fields such as the Digital Humanities. In response to Phipps and Gonzalez’s presentation of the state of Modern Languages teaching in Higher Education in the twenty-first century, I argue that the responsibility falls on us as teachers of Modern Languages to promote wider critical skills and provide methodological tools for approaching high-level research which approaches texts (in the broadest sense of the term), history and other cultural phenomena from a position of linguistic expertise and cultural understanding. The challenge is to reconcile that emphasis on research, cultural understanding and critical thinking with what applicants and students expect from a Modern Languages degree.

In the final chapter, Phipps and Gonzalez discuss the lack of clarity over what constitutes a ‘legitimate object of study’ in Modern Languages degrees (Kelly 2001, 82, cited in Phipps and Gonzalez 2004, 5). They make the important point that there are many different objects of study at play within the discipline of Modern Languages, but their analysis does not satisfactorily take account of the fact that, within the sphere of Higher Education, different parties – students, researchers, language teachers – have differing ideas as to what constitutes the most ‘legitimate’ object of study. This means that degree programmes may not be weighted in a way which suits students; the ways in which language and content modules are linked, the balance of these different skills and the interdisciplinary options available will have a profound impact on whether Modern Languages programmes can continue to recruit students and can halt the steady decline in applicants over the past ten years.

Where solutions to the ‘crisis’ are offered, I find them reactionary and based on opinion, rather than focussed on practical ways to address the sorry state of Modern Languages in Higher Education.

As languagers we are people who move in and through words as actions, who develop and change constantly as the experience of languaging evolves and changes us. A languaging student and a languaging teacher are given a unique opportunity to enter the languaging of others, to open up the ways in which the complexity and experience of others may enrich life. (Phipps and Gonzalez 2004, 167)

At the crux of the argument in Modern Languages: Learning and Teaching in an Intercultural Field is the fact that, as Phipps and Gonzalez note, ‘language teaching is a dynamic, volatile, changing, messy business’ (Phipps and Gonzalez 2004, 169). Rather than try to impose boundaries on the discipline or to rationalise the vast interdisciplinary crossover of Modern Languages, we need to explore the similarities and differences between the needs and desires of our students and our own interests and motivations as teachers. To my mind, the challenge of teaching and learning in Modern Languages lies in reconciling our own research interests and fields of expertise with our students’ desired outcomes. As academics, we are encouraged to practise research-led teaching, to be innovative in our pedagogical approach and to embrace the interdisciplinarity of our subject; however, in doing so, we risk alienating our students and potential applicants, by offering modules and teaching styles which are at odds with what they want to achieve. Although the issue of research-led teaching and the importance of aligning our own research interests with students’ learning goals is not confined to Modern Languages, it is, perhaps, felt more acutely in our discipline, where the language forms both the medium by which learning takes place and the content of the learning.

Ultimately, this study offers an important insight into the state of Modern Languages and their place both within the humanities and within the landscape of Higher Education more broadly. However, it reads more like a manifesto for Modern Languages, exploring and asserting their status in the face of the ‘crisis’ in the humanities, without really offering any viable suggestions for addressing the issues raised. The uncertain and, indeed, unfortunate situation of Modern Languages is a key issue, which informs how we approach the design, provision and delivery of teaching in HE. The challenges we face in the discipline also have important implications for the way in which we approach the task of student recruitment in a field where student numbers are diminishing year-on-year. However, I found the study repetitious, continually reiterating the problems faced by Modern Languages as an academic discipline, and I felt that far more needed to be done to link these discipline-wide challenges to the practice of teaching and learning. There were not enough specific examples of innovative pedagogical approaches or student recruitment strategies which might help to mitigate the issues surrounding Modern Languages as a discipline, nor were there any real and tangible suggestions for re-evaluating the identity of Modern Languages. I suggest that the book would have benefitted from ‘taking a step back’ to evaluate the commonalities and the points of divergence between the different perspectives on Modern Languages held within society as well as within HE institutions and by those who work in them – that is to say, by academics, researchers, teachers and students.

 

References

Kelly, M. (2001) ‘“Serrez ma haire avec ma discipline”: reconfiguring the structures and concepts’, in R. Di Napoli, L. Polezzi and A. King (eds), Fuzzy Boundaries? Reflections on Modern Languages and the Humanities. London: CILT, pp. 43-56.

Phipps, A. and Gonzalez, M. (2004) Modern Languages: Learning and Teaching in an Intercultural Field. London: SAGE.

 

Introducing Visualising Voice

At the beginning of 2017, I started a new project: Visualising Voice. It works alongside my research on The Baudelaire Song Project, but also stands alone as a research project in its own right. And I’m very excited about it.

It all came about when I applied for a Europeana Research Grant, back in September, when I’d only just started at the University of Birmingham. The call for applications sought researchers working in or with Digital Humanities tools who could make use of resources held in the Europeana Collections, a digital platform for cultural heritage funded by the European Commission.

I have to admit that, when I applied, I hadn’t really got to grips with Europeana Collections, though I’d used a number of the resources held under its umbrella. Mainly, I was familiar with Gallica – the digitised resources of the Bibliothèque nationale de France (BnF) – and had, on occasion, made use of online resources from other national libraries, notably, of course, the British Library (BL) in London. Using Europeana, however, opened up a wealth of new resources to me.

So, I was delighted – and a bit daunted, I’ll confess – to find out, just before Christmas, that I had been awarded one of three Europeana Research Awards.

I hadn’t expected to gain the funding. When I applied, I wondered whether my project was too derivative, in that it closely follows methodologies developed within the Baudelaire Song Project team. I also wondered whether it was too ambitious, as I was going to be working with software developer Tom Cowley to create a new interface allowing users to get involved in Digital Humanities. However, Europeana seemed to like both of these aspects of my project: it is collaborative and digitally ambitious, but also rooted in tried and tested methodologies which apply well to materials in the Europeana Collections.

I realise that I haven’t actually explained what the project aims to do. So, I’ll backtrack a little bit. Visualising Voice aims to look at what happens when we perform poetry aloud. The project works on the premise that poetry is an inherently oral form of writing – it is meant to be heard – or, at least, that some of the different features of poetry, such as rhythm and rhyme, are best understood aurally rather than by looking at a piece of paper. The Europeana Collections hold quite a lot of spoken-word recordings of poems by Baudelaire. Working on the Baudelaire Song Project, I was already comfortable enough with Baudelaire to know that this was something I could happily take on.

I could never get bored of Baudelaire, nor of Mallarmé, whose work formed the basis of my thesis, though I am wary of becoming a “one-trick pony”, and I was definitely ready to branch out and expand the scope of my research. Visualising Voice gave me the perfect opportunity to continue my research whilst looking at new texts and broadening the reach of my research output. Working on the website and its development also gave me a chance to develop my tech skills, most notably in HTML and JavaScript. While the development of the song analysis interface is very much down to Tom and his developer associates, I have been hands-on in building the website, and it’s really opened my eyes to using the web to share my research.

We know why Baudelaire had to be in the list of poets to study, but why Verlaine and Rimbaud? Verlaine featured in the corpus for a number of reasons. First and foremost, he is very frequently set to music (though arguably not as frequently as Baudelaire). The large number of song settings of Verlaine’s poetry hints at an inherent orality, at something in his work which invites performance. Secondly, recordings of Verlaine’s poetry feature heavily in the Europeana Collections, supporting my view that his work has an important performative element and making it an easily accessible focus for the study. Alongside Verlaine, I chose Rimbaud. Recordings of Rimbaud’s poetry often feature alongside those of Verlaine, and I was surprised to find that his work featured almost as heavily in the Europeana Collections as Verlaine’s. Historically, the two poets have been seen as a double act, on account of their close but often turbulent personal relationship, so it made sense, in my study, to consider both poets. Baudelaire, Verlaine and Rimbaud together are bastions of nineteenth-century French poetry, with unique and pioneering elements to their poetic output – this made them ideal candidates for study in Visualising Voice.

Looking at the work of three poets together offered an opportunity to test the methodologies developed within the Baudelaire Song Project on the work of other poets, as well as to explore how we examine and talk about features of speech, as opposed to song. I think that analysing speech brings new challenges. While setting words to music adds an additional dimension to analysis, in some ways I think this is easier to analyse, as there are more layers and more obvious variations in performance. In the case of speech, the differences between performances of a poem are more subtle and, as a result, bring a different angle to our research.

So far, I have firmed up the corpus for the project, which involves looking at three different performances of the same poem. There are three poems each by Baudelaire, Verlaine and Rimbaud, so initially we’ll be looking at nine poems, though we hope to expand this to eighteen over time. Because of the nature of the resources in the Europeana Collections, most of the recordings used are from the 1950s or early 1960s. This limitation brings both advantages and challenges. In the first instance, we’ll need to assess how much variation there is between the different performances and what the nature of those variations is. If there is little variation, then we might see this as evidence of a generational style of performance. The real challenge is that, within the Europeana Collections at least, we don’t have access to any more modern recordings: this is why we decided to add a crowdsourcing element to the project. In the latter phases of the project, we’ll be inviting users to record and analyse their own performances – either snippets or whole poems, as they wish, in French or in English. These can either be deleted or, if users are interested in contributing further to the research, uploaded privately to the project team. Of course, we hope that some users will feel brave enough to share their recordings publicly.

The Visualising Voice project home page should be appearing online over the next few weeks, when we’ll share a bit more about the project. For now, watch this space…!

How healthy is #ECRchat?

If you ask most Early Career Researchers how it feels to get out into the academic world, with a PhD in hand and an impressive new title on their e-mail signatures, they’ll tell you that life is tough. With tens, sometimes hundreds, of applicants for one permanent job, lots of fixed-term contracts, and the need to keep lots of balls in the air – teaching, research, finishing a PhD, having a life – it’s not easy starting out in academia.

However, I’m not here to write about the downsides of academia – you can find enough of that in The Guardian’s “Academics Anonymous” feature, in which the trials, tribulations and triumphs (but mostly the trials and tribulations) of academic life are documented in a blog-style comment piece. In AA, scholars allow their spleen to flow onto the page, under the safe guise of anonymity, in a manner similar to the equally, if not more, choleric “Secret Teacher” (of which I was once a regular reader and sympathiser).

I’m convinced, however, that despite the challenges, an academic career is still a privilege (as my sympathy with the Guardian’s “secret teacher” suggests, I feel that the grass is far greener for staff in HE than in secondary education). Naturally, just because we are lucky enough to work in an industry which allows us to pursue our intellectual passions doesn’t mean that the willingness and enthusiasm of Early Career Researchers should be abused by short-term contracts or low-paid teaching positions. However, I do feel that many of us Early Career Researchers (myself included) are too quick to become caught up in the negative side of the job and fail to see the many benefits of academic life.

For all the hard work that Early Career Researchers do – teaching, writing, publishing and living a life beholden to the job market – there’s still one thing that ECRs just aren’t doing enough of: positive thinking. To get ahead, we all need to embrace the kind of solutions-focussed thinking our academic training should have fostered within us. So often, I go to Early Career events – disciplinary and general, institutional or further afield – and the mood is the same. It’s one of “my university doesn’t offer X” or “I’m too busy to get training in this area”. This kind of approach is stopping us from getting ahead. If you think your university doesn’t offer the training you want, then ask about bringing it on board. Want more writing events? Ask your research office, your department, and so on. If what you’re looking for isn’t out there, then why not see about kick-starting it yourself? Naturally, some things are easier to get going than others – a writing group for ECRs is probably easier to launch by yourself than an accredited mentoring programme or a teacher training qualification – but that doesn’t mean you shouldn’t lobby for the things that are important to your professional and personal development.

Some months ago, I was lucky enough to spend a week curating the @WeTheHumanities Twitter account. It was fascinating to hear the testimonies of people in academic circles – not only those starting out, but those in established careers. However, I was saddened by just how dejected some of my contemporaries felt. I decided to use my curatorship to (attempt to) focus on the many great things which make us want to pursue a career in academia. I invited ECRs and more established academics to talk about the positive aspects of their work. My plan fell flat – especially as far as Early Career Researchers were concerned. In fact, I felt that I had been totally shot down, because – for many – it was impossible to think positively after the rejections began to pile up, after trying to break into a career which didn’t support mental wellbeing and exacerbated existing health issues. Interestingly, too, people assumed that, because I wanted to celebrate the good things in academia, I must be in a permanent post and convinced that I couldn’t be touched by the problems of getting a permanent job faced by most ECRs.

I have to admit that, in many ways, my week curating the @WeTheHumanities Twitter account really brought me down: firstly, knowing that my contemporaries felt bad about their situation; secondly, feeling that they thought I was some kind of Pollyanna for wanting to focus on the good parts of the job (and for me, these outweigh any negatives). I absolutely love what I do – I work on a research project I believe in, and after only 18 months in the job, I have already benefitted from some amazing opportunities which I have really enjoyed and which will be of real benefit to my career. I’ve done public engagement events, teaching and lecturing, taster sessions for sixth-formers and even given a lecture at 5 in the morning (for 24-Hour Inspire). I’m not saying it’s been easy – I’ve written hundreds of thousands of words and put in many hours in and outside the office. It’s all been worth it, because I’ve enjoyed it and because it’ll stand me in good stead for the future.

One of the things I felt I came under fire for when curating the @WeTheHumanities account was my suggestion that we might have ‘fall-back careers’. I’m not sure how I feel about this myself, but it’s an idea I have to entertain. With an awareness that I might not get my dream lectureship immediately after my current post comes to an end, I know there may come a point where I have to do something different. On the one hand, the idea that I won’t be a lecturer in five years’ time is terribly disappointing; on the other, I sometimes wonder what I might do instead. I’m not sure, because an academic career is the one I want. But I do know that I have a PhD – hell, I’ve got three degrees! – and I have experience of teaching. I’m a fast reader and a quick typist. I am pretty good at critical thinking and I’m quite creative. If I sound like I’m boasting, then I’m sorry – but I’m not in the minority. Every Early Career Researcher should be able to boast these skills, and if they don’t make you a hot contender on any job market – academic or otherwise – then I don’t know what will.

Where do I see that future? I’d like to say in academia. Much as I love what I do, I know that it’s a fixed-term project and that it won’t last forever. I’ll do everything I can to secure an academic job. And if I can’t? Part of me says I’ll cross that bridge when I come to it; the other part of me says that I’m already a writer, teacher, mentor with gradually growing tech skills. Do I feel hard-done by that I don’t have a permanent job? Absolutely not – I’m happy in what I’m doing now and excited to see where the future will take me.

What good are the digital humanities?

I have a fundamental problem with the digital humanities: I’m not really sure what they are. I probably shouldn’t admit that, given that I work with digital humanities tools every day. I also feel that I shouldn’t admit that I’m still trying to work out what DH really is, because I am working to build my career around using digital tools to study nineteenth-century French poetry.

There are a few issues I have with DH. First of all, I wonder whether there will really be a need for the digital humanities in ten, fifteen, twenty years’ time. The humanities themselves have stood the test of time very well, dating back – I suppose – to Ancient Rome and/or Ancient Greece. The “digital” part, however, is a relatively new addition to our arsenal: there’s a perception that students love it, funding councils love it, and people on interview panels love it. I love it, too, but cautiously, as I’m wary of the desire to shoe-horn DH in at every opportunity. Doing so, I think, undermines the value of DH and of the research we’re already doing.

The digital humanities have already undergone a major rebranding: they shifted away from their old identity as the computational humanities when it was decided that “computational” was a bit last century, a bit computer-speak for humanists, and that the field should be rebranded as digital. My issue is this: we live in a digital age, but very seldom in life do we flag up when things are digital as opposed to analogue. Most of us nowadays have smartphones, but mostly they’re just referred to (in the UK at least) as “phones”. Nobody ever gets in their car and thinks “should I use analogue navigation (a map) or digital navigation (Google Maps)?” – they just navigate. And this is precisely how I feel about the digital humanities: rather than trying to decide whether what we do is digital or not, and spending time situating ourselves within a digital culture, can’t we just accept our identity as humanists, from various disciplines and with various methodological approaches?

After all, digital humanities is not a methodology in itself: those of us who approach our work digitally, using digital tools still have to decide what those tools are and how to implement them. Often this means devising the tools ourselves, working with someone else to devise the tools or using tools which are not actually designed for that purpose, appropriating them in ways which fit the job. This is all very productive. But this is still just doing our research in the twenty-first century. I’m sure – and part of me kind of hopes – that the digital humanities will fizzle out as a term. I don’t feel that it needs another new look, a PR overhaul, which is what so often happens with ways of doing things in university departments (e.g. “it’s not a research cluster it’s a strand / stream / insert other appropriate term for separating researchers out into useful groups” / “we need to give this course a more appealing title”). Instead of rebranding us, lumping us all together, giving us a new identity as digital humanities, it would be quite nice sometimes just to get on with what we do best (research / teaching / all the jazz that comes with being an academic) without having to work out where we fit in. That is as much a question of modern life as it is a question of asserting an identity within the academic community – where do I fit in and who am I?

Ultimately, it’s not productive to try and pigeon-hole ourselves, and we shouldn’t have to worry that we’re “not digital humanities enough”, or that we’re too old-fashioned, when applying for jobs or developing our CVs. That sends out the message that the kind of high-level, old-school criticism done in the past in literary circles is not good enough. Yes, as critical individuals we can see the flaws in the approaches taken by our predecessors. I admit, also, that it’s important to be doing things properly, to be doing things well, and re-hashing old ideas in old ways does not constitute progress. Nor, however, does re-hashing old ideas in new ways. So, what I want to take away from these reflections is the idea that it’s OK to be a twenty-first-century citizen in academia, to use the modern tools we have available to us in creative new ways. But it’s also OK to keep doing what you’re doing and doing it well, if you don’t feel that digital tools add anything to your research.

After all, it’s questionable how much digitisation and e-books can really be said to be innovative digital humanities methodologies in a world where many have been using Kindles for a decade or more and students and academics alike rely on tablets and laptops for information on the go. Much as I hate the term, I am a millennial. Since my early teens I’ve used computers for some kind of research, for communication and for learning (albeit via an unreliable dial-up connection in my school days) – much of what is commonly thought of as digital humanities is just second nature to me. But there’s another side to what is commonly thought of as digital humanities which is out of my reach – big data handling (I’m talking MATLAB and SPSS and other high-level tools which, as a modern linguist working on poetry, I just haven’t had cause to use). I’m also talking programming. In the past few years, I’ve learnt a bit of code – I’ve progressed from creating the kind of homepage you might see in the Hampster (sic) Dance days to something which looks fairly decent even by today’s standards. But I’ve worked hard to make even these baby steps, and I’ve had a lot of help. And that’s what we need to make digital humanities different – help. It’s OK to be proud of being specialists in our field, rather than trying desperately to edge into a different field. But let’s use the gaps in our knowledge to team up with other people. I think this is the goal for humanists – whether they consider themselves digital or not – to talk to the specialists in technology. I think they could tell us a few home truths about the level of forward-thinking tech which is actually at the core of much of what we consider to be digital humanities, but – perhaps more than that – they have much to offer in helping us to think differently about our research, enabling us to approach what we do in new, exciting and creative ways.

* Title with a nod to John Carey’s 2005 book What Good Are the Arts?, which provoked a lot of thoughts in me as a first-year undergrad and which I now want to re-read.