Voices from LLT

A podcast series highlighting stories from the community
Mar 25 2024 (7 minutes)
A conversation with Christoph Hafner at the 2024 Regional Language Centre (RELC) conference

Hayo Reinders talks with Christoph Hafner at the 2024 Regional Language Centre (RELC) conference held in Singapore. More information about Dr. Hafner's work can be found at the following link.

https://scholars.cityu.edu.hk/en/persons/christoph-hafner%283112dd1d-dcb1-4ddf-b74d-3a9b6d5ea60d%29.html

Reinders, H. (Host). (2024, March 25). A conversation with Christoph Hafner at the 2024 Regional Language Centre (RELC) conference (No. 8) [Audio podcast episode]. National Foreign Language Resource Center.

2024 Interview Hafner
[00:00:00]

Hayo: Hello, everybody, and welcome to the latest installment of Voices from LLT, Language Learning and Technology's very own in-house podcast. Today, coming to you live from the RELC conference, as you can see here in between us, in Singapore. And today we have another very special guest, Dr. Christoph Hafner. Nice to have you here.

Christoph: Thanks. Thanks for having me.

Hayo: You're at the University of Hong Kong, right? The City University. City University of Hong Kong. And you've been there for a long time.

Christoph: It's unimaginable.

Hayo: So in terms of language learning and technology, you have seen it all. You've heard it all. You've been part of the development of the field in many ways.
Tell us, tell the listeners, what are you working on at the moment?

Christoph: Okay, well, at the moment there are a couple of projects that spring to mind. The first one of those, because I take an ESP approach [00:01:00] to digital technologies, is that I'm looking at digital genres, particularly related to research.

And one that I've been looking at in particular recently is the video methods article. So my interest is in looking at a new literacy practice, if you like, and I take a look at a particular journal, which is called the Journal of Visualized Experiments, and it presents procedures, new procedures, innovative new methods in various kinds of science, especially life sciences, but presents them visually. And that kind of genre is very effective in terms of bridging communicative gaps, because the scientists that are receiving the videos are able to replicate the research and replicate those methods much more easily than if they were to just read an article.

So that's one area that I'm working on at the moment, looking at the way that [00:02:00] multimodal constructions of that kind of knowledge are affected, as well as, because it's a slightly different sort of genre, the way that people interact and engage and present their stance through it, because researchers pop up and address the audience directly, which, of course, you wouldn't see in a standard research article. So the question that I'm interested in there is: how is digital technology changing the way that we read, write and communicate? And that might have an impact on the kinds of things that we choose to teach ESP learners at university, for example.

Hayo: We'll put a link to that in the description, if you have a website for that, so people can look up more about it. You mentioned learners. How about teachers and, you know, educating teachers? Maybe there's a role there for what you've just described, right?

Christoph: Well, if we expect teachers to be able to address these kinds of new [00:03:00] genres, it's going to be important that teachers' own understanding of those genres, including the multimodal elements, is actually catered to as well. And in that space, I am working on a Cambridge Element with Jennifer Ho on multimodality. This is probably going to target teachers at all levels. And the idea is that teachers do need a kind of toolkit. They need to understand what semiotic resources are available in media like video and that sort of thing.

And certain theories of multimodality, particularly social semiotic ones, I find very useful. They can give us some insight into the resources that are available, how they can be combined, and then of course how you might approach that with English language learners as well.

Hayo: Wow.
Wonderful. Obviously in your particular [00:04:00] niche there has been a lot of development in recent years, the whole multimodal development, and now of course, like everything else, it's impacted by AI, I suppose. Can you tell us a little bit about recent changes and where you see your particular field going in the months and years to come?

Christoph: Okay. So I think you've put your finger on exactly the two topics that I would want to discuss, actually. The first one is the multimodal developments. One of the things that's been taken up quite enthusiastically is the whole idea of digital multimodal composing. And that's something that is being studied very actively at the moment, asking questions about how effective it is, for example, how we can prepare teachers for this kind of thing, and how we can do it in ways that are ethical and critical, that sort of thing as well.

And so that's an area where I'm [00:05:00] really excited to see what comes. And of course, as you point out, you can't at the moment go past generative AI, because that has really fundamentally challenged the processes and practices that we have in place in university education, in the disciplines; in every single discipline we've got an issue there.

And so we first need to address things like ethical concerns related to academic honesty and assessment. Those are the things that first pop up on your radar. But what I'd like to see happen in this area, and it's already starting, is some research that looks at how professionals are using these tools and what kinds of decisions they are making about how they incorporate those tools into their reading, writing and communicating.

In other words, this is a moment when everything has changed yet again. And so we [00:06:00] need to know how it has changed. If our goal is to prepare lawyers or scientists or someone like that for practice, we need to know what that practice looks like. And currently we don't really have a good idea.

Hayo: Plenty of scope for future research, it sounds like. Do you have PhD students yourself?

Christoph: Yes, I've got two PhD students.

Hayo: And if there are people watching, listening, who are perhaps interested in pursuing research in this area, perhaps as a doctoral student, are there opportunities at your university?

Christoph: Uh, yes, there absolutely are. Yeah.

Hayo: Well, that's pretty good. I mean, you heard it here first. So come and get your multimodal future, uh, buy one, get one free. Christoph, wonderful speaking with you. Thank you so much for joining us, and thank you for listening. We'll see you in the next video.

Feb 5 2024 (11 minutes)
A conversation with Andrew Lian at Asia CALL

Hayo Reinders talks with Andrew Lian at the Asia CALL conference held in Da Nang, Vietnam. More information about the projects mentioned during the interview is provided below.

Precision Language Education, Rhizomatic systems, Epistemology

Lian, A.-P., & Sangarun, P. (2023). Rhizomatic learning systems and precision language education: A perfect match (Chapter 8). In M. S. Khine (Ed.), New Directions in Rhizomatic Learning From Poststructural Thinking to Nomadic Pedagogy (pp. 119–141). Routledge - Taylor & Francis Group. https://www.routledge.com/New-Directions-in-Rhizomatic-Learning-From-Poststructural-Thinking-to-Nomadic/Khine/p/book/9781032453088

Lian, A.-P., & Sangarun, P. (2017). Precision Language Education: A Glimpse Into a Possible Future (Feature article). GEMA Online® Journal of Language Studies, 17(4), 1-15. https://doi.org/10.17576/gema-2017-1704-01

Lian, A.-P. (2004). Technology-Enhanced Language-Learning Environments: a rhizomatic approach. In J.-B. Son (Ed.), Computer-Assisted Language Learning: Concepts, Contexts and Practices (pp. 1–20). iUniverse. http://www.andrewlian.com/andrewlian/prowww/apacall_2004/apacall_lian_ap_tell_rhizomatic.pdf

Lian, A.-P., & Sussex, R. D. (2018). Toward a critical epistemology for learning languages and cultures in 21st century Asia. In A. Curtis & R. D. Sussex (Eds.), Intercultural Communication in Asia: Education, Language and Values (Vol. 24, pp. 37–54). Springer International Publishing AG.

Verbotonalism and Load-Lightening (perception)

Cai, X., Lian, A.-P., Puakpong, N., Shi, Y., Chen, H., Zen, Y., Ou, J., Zheng, W., & Mo, Y. (2021). Optimizing Auditory Input for Foreign Language Learners through a Verbotonal-Based Dichotic Listening Approach. Asian-Pacific Journal of Second and Foreign Language Education, 6(14), https://sfleducation.springeropen.com/articles/10.1186/s40862-021-00119-0

Lian, A.-P., Cai, X., Chen, H., Ou, J., & Zheng, W. (2020). Cerebral Lateralization Induced by Dichotic Listening to Filtered and Unfiltered Stimuli: Optimizing Auditory Input for Foreign Language Learners. Journal of Critical Reviews, 7, 4608-4625. https://dx.doi.org/10.31838/jcr.07.19.541

Wen, F., Lian, A.-P., & Sangarun, P. (2020). Determination of corrective optimals for Chinese university learners of English. Govor/Speech, 37(1), 3-28 (SCOPUS). https://hrcak.srce.hr/file/362922

Self-imitation and perception

Li, Z., & Lian, A.-P. (2022). Achieving Self-Imitation for English Intonation Learning: The Role of Corrective Feedback. Chinese Journal of Applied Linguistics, 45(1), 106-125. https://www.degruyter.com/document/doi/10.1515/CJAL-2022-0108/html

Li, Z., Lian, A.-P., & Yodkamlue, B. (2020). Learning English Intonation Through Exposure to Resynthesized Self-produced Stimuli. GEMA Online® Journal of Language Studies, 20(1), 54-76 (SCOPUS). https://doi.org/10.17576/gema-2020-2001-04

Reinders, H. (Host). (2024, January 16). A conversation with Andrew Lian at Asia CALL conference (No. 7) [Audio podcast episode]. National Foreign Language Resource Center.

A conversation with Andrew Lian at Asia CALL (transcript)

HAYO REINDERS: Hello everybody and welcome to the latest installment of Voices from LLT, our very own in-house Language Learning and Technology podcast. Today coming to you live from Vietnam, from Da Nang, the Asia CALL Conference. My name is Hayo Reinders, and today I have the privilege of speaking with my friend, Professor Andrew Lian. Andrew, welcome.

ANDREW LIAN: Hello, thank you very much. It's a great pleasure to be here. Thank you. Thank you.

HAYO: Uh, there may be a few people who don't know about Asia CALL. You have had some involvement with that, one could say. Can you tell the audience just briefly what is it?

ANDREW: Sure. Well, Asia CALL is a professional and research organization that was established more than 20 years ago.

Initially in Korea, it had its first conference in Thailand, and then it's been going on ever since. We've had regular international conferences every year. We've had a small hiatus at various times, but this is our 20th international conference. It's pretty good. And we've attracted quite a lot of people over the years from, you know, all over the world.

HAYO: Right. And this year, 2023, the theme of course is...

ANDREW: Well, the theme, of course, is, as you would expect, artificial intelligence and critical digital literacies. But fundamentally everybody's just talking about ChatGPT and derivatives of that, in terms of large language models and other goodies.

Actually, we haven't talked a lot, or at all, about pictures and drawing and video. Right. It's been mainly text-based. Mainly text-based, yeah.

HAYO: So, at this conference, we have, I think, participants from about 17 different countries, mostly from Asia. There's a couple of people from the United States.

ANDREW: A couple of people from the United States, a lot of people from the Philippines, a lot of people from India, Indonesia, Japan, Cambodia. And so one of the things that we've tried very hard to do over the years is to make the conference accessible to people in the region whose level of income is not as high as, say, the West's. I've just come back from WorldCALL, where I was a keynote, and the fee there was around 500 US dollars, which nobody in the region can afford now.

So our fee here is around, I think, 150 or less, depending on what you're trying to do. And in the past, we were able to offer scholarships; now that has become more difficult because of the financial squeeze, and also because people are now used to online conferences. And so the demand for face-to-face conferences is perhaps not quite as high; people are more used to staying at home and being more comfortable and so on.

HAYO: So apart from the focus on Asia and the relatively affordable entrance fee, shall we say, is there anything else that makes Asia CALL special?

ANDREW: Well, Asia CALL is actually one of a number of Asian associations connected with computer-assisted language learning.

There's nothing particularly special about Asia CALL in the sense that it specializes in one area or another, but we have created over the years a kind of stream of, well, initially returning people, but somehow it keeps attracting new people all the time, and I think that's pretty good.

One of the things that we are interested in doing is creating MOUs with universities in the region. And I say this because initially we hadn't thought of it. We had asked people to be country representatives, for example. And so we had somebody in the Philippines, somebody here, somebody there.

And it didn't work very well. And then suddenly, in the last maybe four or five years, some universities have come to us and said, oh, we'd like an MOU with Asia CALL. And I asked them, why do you want an MOU with Asia CALL? And they said, well, because, you know, we can work together on projects, and you're in an international association.

So they got something out of it, we got something out of it. I have to explain, by the way, that we have no members. We have no members, we have no income apart from the conference. And so we are really constantly renegotiating the connections with people. So, back to the MOUs. The MOUs are very handy.

We've got four at the moment: with Dongren University, with an Islamic institute in Chiruban, and with two universities in Vietnam, including Van Lang University. And so we're thinking, oh yeah, that's a good way of strengthening the kind of intellectual infrastructure, as well as the physical and people infrastructure, of Asia CALL.

HAYO: Well, it certainly feels like a community. Uh, this was, I think my first Asia CALL conference. Yeah. And I noticed a lot of people who knew each other, who'd been to several previous conferences. And that's, of course, always very, very nice to see. Um, just to finish off, how about your own research? What are you focusing on at the moment?

ANDREW: Ah, I work in two different areas. Actually, it's not very different from you, in the sense that I think all of us who work in computer-assisted language learning are automatically involved in autonomy. True. I mean, I started all of this back in 1979, 1980, when I wrote my first computer program in BASIC on a KL-10 DEC computer, which was a huge multi-user machine sitting somewhere, and it was actually putting output on paper.

Not on cards; no, we'd gone past the cards. It was just on paper, a paper terminal, and then we moved on to CRT terminals. And the idea behind it at the time was: how can I do two things? One was to personalize the learning experience of students, and the second was to relieve the classroom environment of routine stuff that didn't need to be in the class, right?

And so that's how it kind of started. And I've been very interested in areas like listening comprehension for a while. I've been very interested in the kind of feedback that you can give students. I'm currently working, you know, going back in time, with answer markup systems, so that students can enter their answers to questions or transcriptions and then get very precise feedback as to what's going on here and what's going on there.

And it doesn't rely on AI. It actually relies on proximity measures, or on a little database of errors collected from an error analysis. So that was one side. On the autonomy side, I'm very interested in the development of rhizomatic systems. And as part of the rhizomatic systems, I'm also very interested in finding ways of personalizing the feedback, and the access to resources that help people as they have moment-to-moment problems in their learning. So, you know, you say to people, here's a project, go and do this, and they say, well, I can't do this thing and I can't do that thing, or they say it to themselves, and then they seek out the resources to help. Connected with that is a thing called precision language education, which I kind of, not quite launched, but started talking about in 2017, and all of this has been published, by the way. The notion behind precision language education is that everybody's perceptual mechanisms are different.

And therefore we need to adjust their perceptions of whatever it is that they're trying to learn to them, rather than using a statistical model of how a class or a group of people might learn. And we've started doing that using a theory of perception called verbotonal theory.

Verbotonal theory has allowed us to identify some of the best ways of making that connection between perception and learning. And so I guess my latest piece of work, or my latest thinking, revolves around the notion of the interface between perception and learning. And I'll leave it as general as that for the moment.

But we have done some very specific work in China on the load on the brain: what happens when people listen to audio materials. And we've discovered that if you feed, for example, filtered intonation through the left ear, and exactly the same, exactly synchronized sentence in the right ear, but without filtering, the brain loses some of its load and can redistribute it towards perhaps better processing, right?

And that's coming out as an interesting experiment at the moment, but perhaps an interesting result in the future, because we've got one experiment that's worked, where people increased their scores on a particular speaking task by 7%, using a double-blind rating system, so nobody knew what they were rating.

So, a 7 percent improvement, and the p value was 0.001, so it's highly significant. And so we're at a point where we're starting to ask the following: can we improve learning simply by manipulating the audio signal that's coming into people's ears? And then perhaps even saying, oh, perhaps we need to tune it differently for each person.

And that's where artificial intelligence will come in, uh, and be very handy.

HAYO: Fascinating. And I know that you have had, and have, a number of PhD students who are working with these concepts and ideas. We'll put a link in the show notes where you can learn more about Professor Lian's research and also his PhD students.

Thank you so much for watching. Thank you again, Andrew. Wonderful seeing you here, and thank you for being a keynote for us.

ANDREW: It's been great. Thank you very much.

Oct 16 2023 (35 minutes)
A conversation with the finalists of the 2023 LaunchPad language educational technology competition

In this episode of Voices from Language Learning & Technology, Hayo Reinders interviews the five language technology entrepreneurs who participated as finalists at the LaunchPad language educational technology competition held on June 8 during the 2023 CALICO conference. The yearly LaunchPad event offers first-time entrepreneurs who have created a technology product a unique platform to introduce and pitch their language technology to an audience of world language educators, technologists, and scholars.

Reinders, H. (Host). (2023, October 16). A conversation with the finalists at the 2023 LaunchPad language educational technology competition (No. 6) [Audio podcast episode]. National Foreign Language Resource Center.

Interviews with the five 2023 LaunchPad finalists (transcript)

Hayo Reinders: Hello, everybody, and welcome to the latest installment of the Language Learning and Technology podcast, Voices from LLT. Today, live from beautiful Minneapolis, here at the University of Minnesota at the 2023 CALICO conference, we have a very special podcast episode for you, because we had a wonderful event organized today by the Language Flagship Technology Innovation Center, who run what is called the LaunchPad event, which they have been organizing for the last five or six years.

I think this is the sixth installment this year where a selection is made of really inspiring, promising, innovative, new educational technology companies and projects. And well, today we had five finalists come in and do their presentations, their pitches in front of a very large audience, as well as a group of judges, and two prizes were awarded.

We're here really to have a brief conversation with each of the five finalists. And the first one today is Paige Poole.

Paige Poole: I am Paige Poole, the head of product at Pangea Chat.

Hayo Reinders: Welcome Paige, and great job on your pitch today. I really enjoyed your description of your project and your exciting plan.

So we want to hear a little bit more about that. For the benefit of those listening and viewing right now who were not there, either live or online, during the event and who don't know about your product: tell us briefly, what is it that you do?

Paige Poole: Yeah, so Pangea Chat aims to make language learning as fun, social, and interactive as students' digital lives outside the classroom and we do that by allowing students to learn a language while texting their friends. So, we have an instant messaging platform that was designed specifically for beginner and low intermediate language students at the high school and university levels. We allow them to learn a language while doing something they love, messaging, and connecting with peers.

And we also give them the agency to choose the topics that they want to talk about while learning a language. So whatever that might be.

Hayo Reinders: Okay. And that might not be something that their teacher might have chosen.

Paige Poole: Yeah, exactly. So yeah, we've heard from students that, you know, the textbooks, the materials, even some of the language learning apps are outdated or they don't really let students talk about what they want to talk about while learning a language. So we hope to allow them to do that.

Hayo Reinders: Oh, that's fantastic. And probably really motivating for the learners as well, I would imagine. Yeah, yeah. Have you done any research or observations of the impact that engaging in this type of activity has on your learners?

Paige Poole: So we just finished a small pilot study. We did an efficacy study as we did our spring pilot from January to May, and we are starting to analyze that data now so I can't really speak to what the data says yet, but hopefully soon, yeah. But lots of good comments from students. We do have comments about how they like that it resembles other messaging platforms like iMessage or WhatsApp, and it gives them that familiar context of I'm texting and messaging with my friends, but in a space where they're actively learning.

And so they've said, you know, it makes it fun. It brings some informality to their language practice. And so they do like that it connects with, you know, texting, which is what they spend a lot of time doing.

Hayo Reinders: Exactly. And so you've described it as a texting app. Is it mobile-based, or...?

Paige Poole: Yeah. So it's available on a mobile version for Apple and Android. It's on the Apple Store, the Google Play Store, and it is also available as a web version.

Hayo Reinders: And so as a teacher, what would I have to do to use the application? Do I have to create a teacher's account, for example, and then have my students sign up for individual accounts, or can I sign up an entire class? How does it work in practice?

Paige Poole: Yeah, so you would create an account as a teacher, and then once you've created your account, you would create a class, very similar to what you would do in Google Classroom. And then you can invite students with a code or with a link, then students would join your class either with that code or with that link.

Hayo Reinders: All right. And so are you envisaging that the teacher would give the students instructions during class time, either in an online or physical class, and give them ideas as to what to do, or for how long to speak? Those kinds of instructions are all done by the teacher, not by the app?

Paige Poole: Yeah, so the teacher really has a lot of freedom to decide how they want to use the app, and they can also decide to let it be totally student-led.

So when a teacher creates a class, there are certain permissions they can set depending on what their ultimate goal is. Some of those include whether students can send direct messages to each other, like private messages. Whether students can create group chats or whether that's something that only the teacher can do to have more control over the types of interactions.

Students can also be asked to use it in the class space or outside of the class. We originally thought of it as something that students would be doing outside of class. So, extending that language learning and practice outside the classroom. So, taking what I'm learning in the classroom, applying it, you know, in a real world, authentic context with my peers, but also giving me that freedom if I want to talk about something else with my classmates, I can.

Hayo Reinders: Right. Nice one. So, you've obviously really, as a company, really got a feather in your cap today because of being a finalist. And this is really an achievement. I guess the question is how is this going to help you and your team to move forward and what's your vision, what's happening in say the next year or two years for you and the team?

Paige Poole: So, lots of exciting things. We're launching our commercial version on July 15th, so it's free to use until then. And we're really going to be working on, as well as submitting, a phase two application for an NSF SBIR grant. We've got a phase one; we're submitting for phase two. That will also really help us move faster with our development and get more things onto the platform.

We're really working on adding more elements of game design, so things like a point system for students, leaderboards, and also more feedback for students. That's really important for us: that students are getting individualized feedback as they're messaging, often in ways that are not currently possible for teachers during communicative tasks in the classroom.

If you think about, you know, you've got a class of 20 students they're doing small group communicative tasks. You can't give feedback to every one of those students, but they would be getting that feedback on Pangea Chat. So working on more ways of making that possible.

Hayo Reinders: So would they receive feedback from the teacher afterwards? Would the teacher be reading a transcript, for example, or are you offering feedback that is delivered by the app? So it's delivered by the app.

Paige Poole: So every time a student sends a message, they're getting feedback. And just to give you some examples: if I'm a student and I'm writing in my base language, the app is automatically going to come up and say, whoa, hey, I see you're writing in your base language, let me help you get that in your target language. The students are then led through an interactive translation flow where they have to make critical choices about the language they choose to translate their message into their target language. And for every choice they make, they're getting feedback on the accuracy of that choice.

Also, if they need more information about the word that they're choosing, they're getting feedback in terms of definitions, and contextual definitions as well, and then at the end, they're also getting feedback on the overall, let's say, accuracy of their translation. So how similar is what they've translated in this learning activity to what they intended to say, in their base language?

And then for those learners who are writing in their target language, they're also getting feedback. So if they're getting, if they have grammar mistakes, they're getting feedback on that. And this is all happening before they send a message. So it's a really low-risk, high-support environment.

Students can feel comfortable about making these mistakes, getting that feedback, and then sending their message. And teachers can see an analysis of this to help guide learning and teaching in the classroom and curricular decisions. So they're not seeing the transcripts, but they're seeing, for example, how many messages students are sending at the class level, chat level, and individual student level.

And then in terms of language learning, they're also able to see how many of those messages at those same three levels were sent with translation assistance, with grammar assistance, or totally in the target language without any assistance, which ideally is, you know, I've sent an accurate, comprehensible message in my target language.

Hayo Reinders: Yeah, amazing. And you've come such a long way with the development in a relatively short amount of time and it's a huge amount of work. So I guess just, you know, thinking of the audience maybe graduate students, people who are thinking of developing something themselves, starting a company like you and your colleagues have done. Any advice for our listeners?

Paige Poole: It's a journey, for sure. There will be lots of learning, lots of skinned knees, and, you know, picking yourself back up. But I think some really important advice is to speak to whoever you are designing your product for as much and as often as possible. And once you have something that they can look at, get it in their hands so that you can get feedback from them. What do they like? What do they not like? Is this really addressing the need that they have? Is this helping them? Right? Because the idea with any product is, you know, you're helping someone do something they need to do in a better way than they could have done before.

And so in our case, you know, talking to teachers and talking to students has been invaluable, and the more you can do that, the better, and I think the better your product will be in the sense of, you know, meeting that need and really helping, in this case, teachers and students.

Hayo Reinders: Well said. Well said. So, you mentioned that until July 15, people can...

Paige Poole: Yes, they can download it from the Apple App Store, Google Play Store, or use the web version. We've got some special pricing before then, so if you do decide, you know, this is great, I want to use it with my students before July 15th, we've got some special offers. People can check out our website, and then after that, it will be a freemium model, where they can message on the platform, but to get access to the language learning tools, they will have to have a subscription.

Hayo Reinders: Yeah. And we'll put links to the app in the show notes. Well, it's been wonderful talking to you, and congratulations again on your success. Thank you.

And now we have our second interviewee here. It's Anthony Spadafino from Lingostar AI and, well, Anthony, huge congratulations because you not only won the prize today but a second prize as well, the audience award as well as the judges' prize. Congratulations, well done. So Lingostar.ai, tell us briefly what is it?

Anthony Spadafino: Yeah, thanks for having us, by the way. Sure. CALICO, it's amazing to be here. Yeah, so Lingostar.ai, it's a generative AI-powered chatbot that intends to bring the power of conversational immersion into the classroom.

It is effectively a platform on which either a teacher or an independent learner can create any number of basically infinite conversation scenarios in which the generative AI portion, the large language models, are powering a live unscripted chat. And what that does is it allows students to basically have daily repetition of conversation practice.

It allows teachers to integrate culture and adapt the generative AI platform to their curriculum so that it's very flexible and then it also is personalized for every student. So it can adapt and learn over time about a student's level as well as sort of integrate and evolve with the student as they mature in their language learning journey.

Hayo Reinders: Right. So part of the secret sauce of your app, what makes it different from just giving students access to ChatGPT, is that it remembers the conversations that you had.

Anthony Spadafino: Yeah. So, not only are we overlaying a lot of useful functionality for the live conversation piece like feedback on pronunciation, grammar, spelling, etc. But as you mentioned, the most important thing that we do is parse all of the data, whether that's audio data or text data, for improving the experience for every individual user. And that can be in the short term, meaning we are taking that data and we're saying, Hey, you might have been able to pronounce this, you know, better.

Why don't we practice that pronunciation in a separate flow? Or it might be more medium to long term, where we're parsing data over multiple conversations and saying, you know, we're seeing repeated mistakes, whether that's a conjugation, whether that's maybe unsophisticated vocabulary or misuse of vocabulary, and that's going to power post-conversation study games and activities. And then eventually, when we get there, the AI is going to be able to recast those conversations in the next conversation, so the student actually sees proper use of that grammatical structure, that vocabulary term, et cetera, in the course of a real conversation.

And that's really, I think, the magic of AI is just the ability to personalize the experience for every student.

Hayo Reinders: So from the teacher's perspective, and you know, if I'm in my class, how would I use this as part of a classroom activity, for example?

Anthony Spadafino: Yeah. So for us, first and foremost, a teacher can create as many classrooms as they want. They can invite students to those classrooms using a unique link. And then the teacher is in full control over what the student sees when they log into the platform. So students can choose from pre-existing topics that Lingostar created. They can use our chat-about-anything feature, which means they can put in anything. I showed in the demo today a conversation debating with our AI whether a hot dog is a sandwich.

Or, the teacher can create their own topics. And I think that the part that's most powerful right now for teachers is that we at Lingostar believe that teachers are a really integral part of the language learning journey. We're not trying to replace the curriculum. We actually want our chatbot to be able to talk about anything, quite literally anything so that the chatbot can be used by teachers to effectively wrap itself around the existing curriculum.

So say if you're learning in a Spanish class about la comida, then the teacher can create custom topics about, I don't know, tapas in Spain, and the student is using the vocabulary that they're learning in the course of the classroom in a safe space, which is a live conversation with the chatbot. But it is also, over time, starting to, you know, learn about the students. As I said, it is learning about their mistakes so that over time we're able to track that student's progress from the beginning of the semester or year towards the end.

Hayo Reinders: Well, what Anthony is not telling you here is that some students, I was told earlier, apparently want to learn the language to be able to ask somebody on a date and things like that. And what did you call the bot?

Anthony Spadafino: Well, we're creating a personality type for our chatbot, which we're calling FlirtyBot.

Hayo Reinders: FlirtyBot. I think I suggested that would be a great name for the company, but maybe not. So you've won two big prizes and a huge achievement for you and your team. What's next for you and the company?

Anthony Spadafino: Well, I think first, product-wise, there's such a long roadmap. We're never finished building what we want to build.

And for us, we really want, like, from an AI perspective, we really want to be able to nail down this concept of tracking progress. So how can we use AI to train our model such that when a student gives us a few conversations' worth of audio data as well as, you know, conversation data, we can peg them, you know, using the ACTFL standards or the CEFR standards, to a specific level. And so over time, we can track that progress and actually measure that progress over time so that we know that we're delivering better outcomes. That's more on the product side. On the business side, I think the most important thing for us is to, you know, demonstrate the utility of this to different customer groups.

That's why we love being invited to places like CALICO, because we get a chance to meet with so many university professors, language centers, and potential users. We can really get that valuable feedback. So for us, it's about demonstrating the potential for this to basically find investors and find clients and customers for the product.

Hayo Reinders: So people watching and listening, I will put a link to the website into the show notes if people want to reach out to you. Or use your product or do research on it, etc. Any possibilities for collaboration?

Anthony Spadafino: Yeah, totally. If you go on our website and you fill out a form, that email goes directly to me.

We are not a major corporation. We are five passionate people who have been building this in our spare time, and we are really, really happy to try and, you know, use this technology for good, specifically for the academic community. We are very committed to keeping the price as low as possible, if not free, which currently it is. We're committed to keeping this as low as possible so that the maximum number of students, especially in K-12 and at universities around the world, have access to this type of technology. So, please, if you have any specific use case, and you're not sure whether our technology could, you know, be adapted for it, just reach out via the website and I'll get back in touch with you right away.

Hayo Reinders: Great, thank you. Final, final question. Any advice for graduate students? Recent entrepreneurs, potential entrepreneurs. Any suggestions?

Anthony Spadafino: Yeah, I would say first off, there are so many of us that are passionate about this. And in some ways, you can view that as competition, and I really don't think that's the case.

I think there's just so much energy and passion in this space around the world, so first and foremost, network yourself with other potential founders, or other founders with products that, you know, you really enjoy using. I've gotten a chance to meet the other finalists for the Launchpad Award and have been thrilled to review their products with them and talk through them.

So, that's number one. And then number two, really think hard about how you're going to get this product in front of your core customers. The phrase, you know, "the product sells itself" is not the truth. Okay, it's really hard, and it's really hard in education. And I think making sure you're really, really focused on who your target customer is, and then trying to speak with them as early and often as possible to understand their true needs, is going to really help you develop that product effectively.

Hayo Reinders: Well said. Thank you very much, Anthony Spadafino, and congratulations again. Thank you.

Hayo Reinders: Hello again, and this time we have the third finalist of the Launchpad event, Linh Phung, who yesterday presented and pitched her wonderful application Eduling Speak. Linh, again, congratulations on your success. For those of us who were not at the pitch earlier, could you just briefly tell us what is Eduling Speak and what does it do?

Linh Phung: Thank you. Hi, everyone. Eduling Speak basically connects learners to talk in pairs based on communicative tasks. So it's not really a free conversation. It's a conversation that is supported by tasks. From my experience, I think that it is kind of the very first task-based app that does this because I believe that learners need more oral practice or communication practice. So really this is an opportunity for learners to have real communication, to develop true communicative competence.

Hayo Reinders: Would learners use the application independently outside the classroom or is it also something that can be used with a teacher inside the classroom?

Linh Phung: They can use it outside the classroom because the idea is that they can find a partner to talk to anytime, anywhere when the number of users is big enough. However, they can also use this in the classroom when the teacher assigns tasks to them. So we do have a separate web-based educator dashboard for the teacher to assign tasks and listen to the students' recordings and give students feedback.

Hayo Reinders: Interesting. So if I'm logging in as a student from, say, here in Minneapolis, then how would I be paired with another?

Linh Phung: That's a good question because that's the main feature of the app. One way for learners to be paired is to join the live lounge. And if someone else is also there, they can get connected.

Another way for them to get connected is to pin their profile in the connection lounge and make friends. They can also make friends with people that they know so that they can text them and schedule time to talk with one another. So there are a lot of ways to connect with each other.

Hayo Reinders: And I know from your pitch that Eduling Speak is targeted also at younger learners.

So how do you deal with security, safety, online safety, etc.?

Linh Phung: Actually, the app is now for learners at least 13 years old. So actually the learner population is old enough to make the decisions in terms of opening a learner account. But for those parents who are concerned, they can create accounts for their children and they can lock certain features as well. They can lock the live lounge or they can lock the connection lounge as well, just for safety reasons.

Hayo Reinders: Yes. And teachers probably would be able to do the same thing if teachers were using it in class and they could decide what features they want to be available for the learners.

Linh Phung: Right now that's not possible because, as of now, if they want to create an account, they have to be old enough, but also in certain countries, I know that learners have to be at least 16 in order to do something like that. So parents can make those decisions for them.

Hayo Reinders: Right. Okay. So you obviously had a big success, you know, being selected as one of the finalists here, so hopefully that will help you in the future development of your application. So what are your plans for the short to medium term?

Linh Phung: What's next? It's interesting that I can see that the app is always in beta, in the sense that I'm always improving and developing it. I do have many big ideas and big plans for the future. One of the ideas is to have teachers create content so that we have more content, and more relevant content, for students and teachers in different classrooms. The second idea is to open it up as a marketplace for different services, like tutoring, coaching, and language advising. I hope to incorporate more automated feedback as well.

I haven't mentioned that we also have AI correction and vocabulary analysis to give feedback to students. Students can also get feedback from the teacher, but hopefully there will also be more automated feedback for those who want to learn from their own experience and reflect on their speaking.

Hayo Reinders: Well, I know that you've worked on this application very hard in a relatively short period of time and have added so many features. In general, you have a lot of energy and you do a lot of different things. So, for those listening and/or watching who are perhaps thinking of their own future projects and have ideas, do you have any advice for our listeners?

Linh Phung: Yeah. It is just something that I really enjoy doing. I enjoy creating materials. I enjoy bringing ideas into fruition. So, as you know, I also have this kind of just-do-it attitude: if I have an idea and I'm passionate about it, I try to get it done and make it happen. So perhaps one piece of advice is to gather ideas, but if you are really passionate about something, get started and then improve from what you have.

Hayo Reinders: Find your passion and just do it. You heard it here first. Linh, thank you very much. Thank you.

Hayo Reinders: And now here we have the next finalist, from Mage Duel. Welcome. First of all, congratulations again on being one of the finalists. Can you just tell us very briefly, what is your program about?

Deanna Terzian: Well, Mage Duel is an AI-powered video game that teaches language. So it's a language learning, AI-powered video game.

Hayo Reinders: It's an AI-powered game, English only?

Deanna Terzian: It's for multiple languages. Right now we have three languages that are implemented. Arabic, Mandarin, and Spanish.

Hayo Reinders: Okay, an AI-powered game, is it going to be used by students learning independently or in class?

Deanna Terzian: It's designed to complement formal language study, and the idea is that in language acquisition, fluency is a very tricky thing to get right. It takes a long time, because in order to develop fluency, you need practice, and it's hard to get enough practice. What we're trying to do is accelerate fluency, and the way we do that is we give you an immersive video game where the learning activities are integrated into the gameplay. So as you're playing the game and going through the activities, you're doing the work. But it's also fun, and so our expectation is that students would play the game longer than they would endure just studying with the traditional methods.

Hayo Reinders: Right, right. And would they use it only in class, or can they also do it at home, for example?

Deanna Terzian: Oh, they can absolutely. It's designed to be used out of class, and it's designed to be something that you might do for homework and to build on what you've learned in class. What's a little bit unique about our approach is that we use a semantic similarity engine in our language production activity. And what that means is that it's an AI-based, large language model engine, and as you're producing the language, we're evaluating you based on meaning instead of on exactly matching a gold-standard translation, and that is how we naturally acquire language. It's about being understood and understanding, and about, you know, comprehensible inputs and outputs.

And so, with the semantic similarity engine, we're able to reward the students for partial understanding. I mean, I don't know if you've ever done other types of language games where you miss one word and then you have to start over. That's not that helpful. It's not motivational. And it's not really helpful because it's about understanding and being understood.

So that's, that's one of the innovations in our game.

Hayo Reinders: Very nice. So now you've got another feather in your cap because as a finalist, obviously, it's a great success. What's next for your company?

Deanna Terzian: Our next version of the game is coming out actually in just a few weeks and we now want to get it in the hands of students and we want feedback. We want to understand what works well, and what doesn't work well, and then a full efficacy study. That's all on the horizon in the next six months.

Hayo Reinders: Wow, excellent. All right. So there will be some people listening who maybe have some of their own ideas, graduate students. Maybe even people who are thinking of starting their own project or company.

Any advice for them?

Deanna Terzian: I mean, one of the things that has worked really well for us is that we're working with the US Air Force, and we meet with our customer weekly and we iterate, just like you do in software development. So we try things out. We don't get too far into the project without getting feedback, because what you think is going to work well for the students or, you know, what you think is going to make sense doesn't always happen when you're actually putting it into action.

And so, I would say that the most critical thing is having really regular contact. And it's a little scary, because that means that you're showing things that aren't quite polished and aren't quite ready, and so you have to build that trust, but the payoff is just enormous. And so, if you can find a customer, even if it's someone who isn't paying you, but somebody who's willing to meet with you once a week and go through that process with you, it's extremely valuable.

Hayo Reinders: Yeah, that's really great advice. Deanna Terzian from Mage Duel, thank you so much, and congratulations again.

Hi again, here we are with Lia Sauder from DLS VR. First of all, congratulations. You are one of the finalists and that's a real success. For those of us who were not at the pitch, could you just briefly tell us, what is DLS VR?

Lia Sauder: Sure. So, DLS VR is an abbreviation for Diplomatic Language Services Virtual Reality. And we are a virtual reality system that is web-based. Students and instructors connect with each other in the virtual environment online or in person. And they travel around to different countries virtually, sort of like a field trip, to simulate an experience of being in another country.

We have a back-end editor that allows you to create modules easily without code and so that's one of the nice features of DLS VR that makes it functional for even instructors and students to get involved with the design process.

Hayo Reinders: And so in an average classroom, let's say you've got 18 or 20 students, how would this work? Would you have one headset or two headsets and then rotate it around the class, or?

Lia Sauder: It's specifically designed for smaller class settings. That's what it really thrives in. We created it for DLS, where class sizes tend to be smaller, one on one, one on two, one on three, and so it thrives more in an intimate setting, because the instructor that's teaching students the language in the language classroom can then take the students into a VR experience, and it just augments their language training. It gives them a chance to practice listening and speaking skills, and really focus on the culture as well, because they're touring landmarks of the place that they're studying about.

Hayo Reinders: Right, and so would the teacher be with the students in the VR world at all times?

Lia Sauder: That's how it's designed and that's how it's most commonly used. We're experimenting with asynchronous, so self-exploration mode for students, and working on figuring out how to do that in a very interactive way, because the teacher does provide a lot of interaction, prompts, and tasks for the students to do in the VR environment, as well as the pedagogical elements that are embedded inside of the modules for students to interact with. But in an asynchronous format, that needs to be built in. So we're working on that too.

Hayo Reinders: Well, that nicely leads me to the next question because you now have this success as a finalist of the LaunchPad. So what are you, what are you working on next? What's next for the project?

Lia Sauder: We are going to continue working on asynchronous modules, trying that out with our students, and then also potentially working to get the editor in front of teachers and students, so that people in classrooms outside of DLS can also experience building VR modules.

Hayo Reinders: Very exciting. So, there might be some people listening, watching now, who have their own ideas, and projects, and maybe some who might even be thinking about starting their own company. Any words of wisdom for them?

Lia Sauder: Sure. I would say it's important to keep the excitement of the tool grounded in research. So, what do we know from research that works well, and how can you bring that into the app? So it's not just an app for its cool features and the excitement or the novelty of it, but how can you really embed it in pedagogy and a solid grounding in teaching, working towards whatever goal you have with the application.

Hayo Reinders: Yeah, well said. Yeah, so if you're a teacher listening, never forget your teaching roots, right? Excellent. Well, again, congratulations on your success. And that was DLS VR.

Feb 27 2023 (28 minutes)
A conversation with Lara Lomicka and Liudmila Klimanova

In this episode of Voices from Language Learning & Technology, Hayo Reinders interviews the guest editors of a Special Issue on Semiotics (V27N2), Lara Lomicka and Liudmila Klimanova.

Reinders, H. (Host). (2023, February 27). A conversation with Lara Lomicka and Liudmila Klimanova (No. 5) [Audio podcast episode]. National Foreign Language Resource Center. https://hdl.handle.net/10125/104688

A conversation with Lara Lomicka and Liudmila Klimanova (Transcript)

Hayo Reinders: Hello, all you lovely language and terrific technology people, and welcome to Voices from Language Learning and Technology, our very own in-house podcast. My name is Hayo Reinders, and today I have two wonderful guests, Lara Lomicka and Liudmila Klimanova. Welcome to you both. Lara and Liudmila are on the show now because very soon now, any day, we're going to have the latest special issue of Language Learning and Technology drop, which both of you are editing. Lara, maybe I'll start with you: what is the topic and what can we expect?

Lara Lomicka: Okay, well the special issue is coming out about semiotics. So signs, meanings, multimodality in digital spaces, digital places, and how that relates to language learning and teaching. So we're really excited about this, because I think that something like this hasn't been done before or published anywhere. So we were really grateful to Language Learning and Technology for giving us the opportunity to kind of bring some, some new ideas together from the field that relate to some different things. And I guess I could maybe start by just talking a little bit about how we came up with this idea and how it kind of came to fruition. Um, I would say about three or four years ago, I was at a conference with Liudmila. I was at one of her sessions and she was presenting on mapping the Borderlands, which is a project that she has been working on at the University of Arizona with some of her colleagues, and while I was listening to her project, I was thinking, wow, you know, I'm doing something very similar with a different tool, but I can't get my tool to do what I want it to do.

So I started talking with Liudmila. We had several discussions about how I might be able to use her tool and collaborate with her, even though my project wasn't quite the same. You know, her project was connecting students who were living in these border regions and documenting their experiences through images that they were collecting.

Um, and then, you know, talking about the meanings behind these images while geo-locating them. And I was working in the context of study abroad, short term, study abroad, taking students to Paris, taking students to Morocco and having them capture kind of what they were experiencing in these spaces, trying to make meaning of their, their images and, and locate these images, geolocate them on maps.

But the tool that I was using, Siftr, wasn't allowing me to upload, um, 360 images and 360 video, and Liudmila’s tool could do just that. And so we, we kind of initially got together just through that conference presentation and started thinking about, you know, what we could do to promote the topic of semiotics a little bit more in language learning and teaching.

And so, you know, we, we were thinking, well, why don't we propose a, a special issue where maybe we could see what's, what's going on with researchers internationally and nationally, to see what kind of great ideas we could hear from, from others, and kind of pull them all together into a special issue.

Hayo Reinders: Awesome. I have had a sneak peek, a little preview of the contents of the special issue. And what struck me is that it's a very, very wide range of topics. You know, you think of semiotics, that's quite specific. It's not my field. And I assumed that, you know, it was all going to be kind of quite contained. But actually the range of, of topics is, is amazing. So maybe, Liudmila, can you tell us a little bit about the kinds of contributions that you received?

Liudmila Klimanova: Absolutely, yes. And I think that you were not the only one surprised. Lara and I, we both found that the articles that were submitted to the special issue featured many interesting, new and innovative pedagogies, new innovative theoretical approaches to teaching in multimodal contexts, which was different from what we initially thought semiotics in CALL was about. Our initial idea about the visual side of learning quickly expanded to include different types of materialities - the material side of learning - the tools, how the tools and the configurations of tools can determine different outcomes for language learning, and how even the [vertical or horizontal] orientation of personal devices may affect the way learners perceive and process information online. So, this wide range of topics - the breadth - both conceptual and theoretical and pedagogical practical applications - is really astounding. There's so much going on right now with our technology that we sometimes overlook this [material] part of learning. With this special issue, we want to bring these considerations into focus again and encourage scholars and CALL specialists to consider the visual side of learning more seriously - starting from the tools through which learners perceive, view, and see the reality in which they live and function, but also the ways of enriching students’ learning experiences by playing with various tools and diversifying the visuals in digital devices and digital platforms, engaging them in a more rich and transformative language learning.

In a nutshell, this is what this special issue is about. There are many different topics, and, of course, we want to encourage LLT readers to have a look. Every article is really unique and it brings something new and something different to the study of semiotics, digital semiotics, and visual semiotics, in general.

Hayo Reinders: Yeah. And what, what really also struck me in addition to the range of topics that you, you've mentioned, is the range of methodologies as well and I was wondering, Lara, if you could say a little bit more about that, not just in terms of the methodologies used in these papers, but also how some of these tools can be used as research methodologies, perhaps even beyond the field of semiotics.

Lara Lomicka: Well, I think, you know, really any of these articles, if you go in and kind of look at their approaches, their theoretical perspectives, the way that they've designed the studies, you can draw from each of those and kind of apply it to other things. Um, you know, there's ecological approaches to, to learning and semiotics.

There's more quantitative-type approaches. There's an article that has to do with learner memes, you know, looking at the, the power behind the images that come out of the, the memes and the representations. Um, and so just looking at the images themselves and, and trying to figure out what, what can, you know, what can they tell us about culture? What can they tell us about language? How do we make those connections? How do we link all that back to the community, to the locality? Um, and you know, I think initially, as Liudmila said, we were, we were thinking this, this special issue would be something a little bit more related to just kind of the strictly images and geolocating those images and linguistic landscapes, and, and what it's become is this really nice mesh of, you know, like you said, different methodologies, different ways that we can approach semiotics, that can be helpful, I think, to those looking to explore the use of semiotics and what that can bring to the field of language learning and technology.

Hayo Reinders: Yeah. And you know, I've, I've got two questions there, because you mentioned it right at the beginning, Lara, something that I wanted to follow up on, which was that there was nothing like this, that there had not been any publication that brings together the technology side and semiotics in this way. So that kind of surprised me a little bit, because as an outsider to this specific field, it seems like such a natural fit. So maybe if one of you could say a little bit more about that, and then the follow-up question: where do you see the field going from here? You know, what's next for this sort of combination of technology and semiotics?

Liudmila Klimanova: Well, I can start with the first question. If we think about semiotics in general, the field has been around for a long time, and what surprises me is that in the CALL field, we work with visuals all the time. We work with various visual representations of language and representations of culture on the screen. And somehow we often overlook this aspect of learning - that is, to what extent the way we position objects on the screen, how we engage students in the creation of digital images affects their learning and in what type of learning they engage when they have this rich multimodal input they receive [in digital spaces] as opposed to linguistic input that has been around in our field for, for a while. So, what does it mean to be engaged in multimodal input? I think bringing semiotics into the discussion of what we do in the field of computer-assisted language learning is a way to complexify digital communication. That is the reality of life these days. We communicate on the go, we take our phones everywhere with us, we show locations, we talk to our friends, family and colleagues. Those aspects of communication change the linguistic output, what we produce [as language speakers], how we say things. If there is a way for us to complement our language with visuals, with the semiotic resources that are available to us that also affect the language itself. The basic rule of language is to avoid redundancy [in communication]. We do not want to be redundant and repetitive when we express our ideas. So, in this sense, it's interesting to consider [digital] semiotics from multiple angles - from the point of view of instructional design (and we have a wonderful article in the special issue that talks specifically about the design of instructional materials with the focus on semiotics). 
It is interesting to look at semiotics from the point of view of communication - how our digital communication has changed the way we interact, the ways we convey the topic of our conversation, and how we utilize semiotic resources and visual resources in interaction. This is also a new reality, but this is also how we use second languages today as well. So, in this sense, I think it's interesting to observe this dynamic. We are seeing more and more publications and studies on embodied resources and visual resources as they are used in digital communication. I think this special issue creates this starting point to build a new orientation in our field and a new direction where we will be considering semiotic resources as valuable and significant for language learning and for language use in digital spaces.

Hayo Reinders: Very interesting. And of course this, this immediately raises all sorts of questions, and I know Liudmila, you and I spoke last, uh, in, in this podcast about identity. Uh, and so my, my obvious next question, maybe for, for Lara or for either of you, is: how, how does all this tie into L2 identity formation and, you know, what can we, what can we expect there? And, and in particular, maybe, you know, as there are more and more readily available tools that expand our repertoire as individuals - and I'm thinking in particular of augmented and virtual reality, and I know Lara, you've had a long-term interest in, in those areas, if I'm not mistaken - you know, what are some of the kind of directions that you see emerging there?

Lara Lomicka: You know, when we have an image, what does this image tell us about? Um, you know, the, the people, the culture, the language. I'm thinking, you know, particularly one of the areas that I'd like to see us expand to a little bit more is linguistic landscapes and, and looking at, you know, signs to help us identify and find the identity of, you know, who belongs in these spaces. What do the signs tell us about who belongs there, who doesn't belong there? Um, how that contributes to the, the culture of the community. When we walk into a space and we notice signs that are all around us, you know, what, what can we learn from them? Um, so, you know, going back to a, a couple of questions back and trying to link that into identities.

Hayo Reinders: Yeah. You mentioned the use of images and, and signs and what a landscape, you know, means, what it conveys, et cetera. But I guess what I was curious about, I don't know if either of you has any thoughts on it, is, is whether there's perhaps a democratizing potential for augmented and virtual reality type technologies, where perhaps learners or people in general have at least the potential for more, you know, creative outlets to shape the landscape, question it, and maybe reshape it. And you know, I'm thinking of just, just simply using things like digital storytelling in my own classes, allowing learners who don't yet have the linguistic means to fully express their ideas. But if you encourage them and enable them, support them in using images and videos and cartoons and music, mm-hmm, I'm just wondering if, if either, if you have any thoughts on, on that potential there.

Liudmila Klimanova: Well, there is definitely a lot of potential in utilizing multimodal resources, and not only in the way where they just supplement the message, but also in the way that they create an opportunity to critically observe and analyze the message. Going back to your idea of critical dimensions, we have now been talking a lot about critical digital literacy. Without the semiotic resources and visual resources, it would be hard to teach students to critically analyze digital content, to see the problematicity of some of the visual and digital designs. In that respect, bringing multimodality and semiotic resources to the teaching of languages creates new opportunities for us to teach beyond just the linguistic content, but also to teach students analytical skills, critical thinking skills; teach them to see [digital] inequality; teach them to see an unequal division of web resources, recognize those problems in our society and act upon them and propose technological actions to ensure a fair distribution of goods in digital spaces. But you also mentioned, and I thought it was very interesting, that by engaging the non-linguistic channels of communication in class activities, we also create opportunities for students who do not have enough language to communicate in a rich and very personal way. Even with the minimal language that they have, by utilizing digital technology, by taking pictures, and sharing GPS locations, and discussing these locations with that minimal language (but locations that are meaningful to them), they describe them at a particular time in a particular place and express their emotions and convey their way of living. [All of this] can help convey those messages to someone who may not be next door, but who may be across the ocean. So this, I think, is really fascinating - multimodal communication, and what potential it has for language instruction, especially for technology-enhanced language instruction.

Lara Lomicka: I just wanted to add that in addition to, you know, working with learners who may have limited communication, and using those images to allow that communication to happen a little more easily and in a different way, I think that one of the, the beauties of semiotics is that it also brings different tools that allow us to help bring the culture and the language to learners.

So if you're thinking about, for example, working with 360 media, you know, a very simple exercise might be allowing students to experience a, a space in a particular culture right in the classroom. And then the power of, you know, discussing what those images, what the signs they see, what the things they see around them, what those mean in that particular culture can also be a, another way of, you know, kind of bridging the gap for the, the students who may not have a, a lot of communication skills, but also may not have the opportunities to go into other spaces and study abroad.

Hayo Reinders: Very interesting. Yeah, that was one of the things that really struck me in reading these contributions: that there's so much potential application to language teaching practice as well from, from a lot of these studies. You've got a, you've got a, a wide range of papers. I think there's eight, if I'm correct. Eight, eight papers included in the special issue. Will there be an introduction from, from you?

Lara Lomicka: Yes, yes, we do have an introduction in, in the works, and it's, it's probably in the final stages of copyediting right now, I, I believe, so it should be wonderful.

Hayo Reinders: We are really looking forward to seeing it in, well, not print, but you know what I mean! So what, what's next for, for both of you? Liudmila, let's, let's, let's start with you. Are you continuing to work in this space, and if so, what are you working on?

Liudmila Klimanova: Absolutely. I think there is definitely a big future for semiotics in our field. I'm currently working on an interesting project which sort of goes back to what you asked before. I'm working with very low beginning learners. And I ask them to communicate with learners in other countries at the same level, but using only images and visuals and video. And I'm looking at the dynamic, at how different meanings are conveyed through visuals - sort of a mesh of language, visuals, video, and sound - and how, you know, that communication is possible. So this is on the research side. Um, and uh, I'm also interested, as Lara is, in expanding my work on digital mapping and interactive mapping. There is a lot of potential, given that we have Google Maps, and Google Maps literally, you know, puts us at any location in the world and allows us to experience that location almost to the point that we feel that we are in that location. We can look around, we can see the buildings, we can see the cars moving around. And, and that, that is fascinating because it's an interesting alternative to study abroad for students who are not able to travel. But this could be a great preparation, so students can experience spaces in other countries without even leaving their home countries. So in this sense, I think it's very interesting to look more closely at what we can do with digital mapping. In fact, as we have been working in this area for a while, we found that digital mapping hasn't been given the attention it needs in our field, and we would like to, to continue to work in this, in this area - to, to bring digital maps to the classroom for the study of culture, for the study of interaction, to allow students to share their experiences more with other students in other parts of the world through virtual exchange. So those, those are great projects. Many projects can come out of the semiotic concept in computer-assisted language learning - an endless source of inspiration for our listeners and, and readers and, and, you know, maybe graduate students.

Hayo Reinders: Right. A huge amount of work to be done, it sounds like! Very, very interesting work. And Lara, what are you working on?

Lara Lomicka: Well, Liudmila and I are exploring place-based learning a little bit more. And we're kind of working on a couple of projects in, in that respect. We're going to be doing something at CALICO this summer… a workshop on place-based learning. And if time permits, we'll have learners kind of looking around the space on the campus in, in Minnesota, but we'll, we'll, we'll see. But that's, that's coming up. But I'm also working on another project that involves street art, when students travel with me to Paris. We're going to be exploring a couple of areas where it's kind of like being in an outdoor museum - they have murals all over the buildings and walls - and students are going to be kind of taking slow walks to see how they understand and interpret these murals that have been commissioned and, and done in this area of Paris. And then they're going to work with native speakers to analyze and, and kind of deepen their understanding, to think more critically about what's going on with the art, what it's trying to say, especially in this particular neighborhood of Paris.

I've also been working on a project for a number of years that involves the LESCANT framework, where students, as they travel, are noticing specific elements of culture, like language or authority and non-verbal things. And so they're documenting these, these things with pictures as they travel around in a specific community. Um, and then discussing kind of the meanings behind some of the things that they see and notice with native speaker partners. So a lot of what Liudmila and I are, are doing parallels in many ways, which I think has lent itself well to the, the partnership and collaboration that we have embarked on over the past few years.

Hayo Reinders: Very nice. Sounds like there's gonna be another special issue at some point - though after all the hard work that you've done at this point, you'll happily leave that thought behind for a little while. So, before we wrap up, is there anything that you would've liked me to ask you about, or that you want to say about any of the contributions to the special issue, or any final words of wisdom you wanna share before we wrap up the conversation?

Lara Lomicka: Well, I think we just wanna encourage readers to, to have a look at the special issue, because it's so unique and covers a wide variety of diverse perspectives, theoretical frameworks and, and backgrounds and different methodologies. And I think that, you know, no matter what angle you're coming from, you're going to find something interesting in this special issue that kind of grabs you, that you want to think about a little bit more and grapple with a little more. And so, you know, that's kind of the excitement behind a special issue: you know, you, you propose a topic about semiotics and you think you're going to get something, but you end up with a lot of different things that are, that are just magnificent.

Liudmila Klimanova: And to follow up on what Lara said, I want to reach out to our current graduate students working in the field of CALL. This special issue features many interesting methodologies on how to approach multimodal data. It’s not easy to capture multimodal data and analyze it in a very comprehensive way. And I think most of the articles in this special issue are research studies that could be used as model studies to pursue semiotics in dissertation research and in the new studies and investigations of the role of semiotics in language learning and teaching.

Hayo Reinders: Yeah, that's a, that's a beautiful exhortation. And with that, I, I'd like to congratulate, of course, both of you, Lara and Liudmila, and also the contributors to your special issue. And let's hope that it gets all the attention that it definitely deserves. So thank you again so much, and thank you everybody else for tuning in and listening to our conversation, and we'll see you next time. Take care. Bye-bye. Thank you. Thank you.

Dec 15 2022 (24 Minutes)
A conversation with Robert Godwin-Jones

In this episode of Voices from LLT, Hayo Reinders speaks with Robert Godwin-Jones, who has contributed, over a period of 25 years, some of the most widely read and appreciated articles in the journal.

Reinders, H. (Host). (2022, December 15). A conversation with Robert Godwin-Jones (No. 4) [Audio podcast episode]. National Foreign Language Resource Center. https://hdl.handle.net/10125/104310

Host [H]: Hello, everybody, you lovely language and terrific technology people, and welcome to a new episode of Voices from LLT, the podcast of Language Learning and Technology. I’m your host, Hayo Reinders, and today, I have a very special guest, a rockstar in our field, Robert (Bob) Godwin-Jones. Welcome, Bob!

Robert Godwin-Jones [RGJ]: Thanks, Hayo.

H: It’s wonderful to be able to speak with you. I don't think you and I've ever actually met. I have, of course, like every other person in this field in the world, been familiar with your work for a very long time. Your contributions to Language Learning and Technology span many, many years, and I think it's safe to say that they are some of the most highly cited and most often referred to resources published in the journal. For the two-and-a-half people in the world remaining who don't know, Bob publishes a very regular–I was trying to see if it was a perfect record–but certainly a very regular, ongoing feature called Emerging Technologies that really does an amazing job of synthesizing what is known about a particular topic at the time. So if you're not already familiar with Bob's work there, please do have a look. Bob, my first question to you is, I had of course a little bit of a look at the old Google and found that your background is actually in French and German. So how did you end up in technology?

RGJ: Actually, my PhD is in comparative literature, but I studied, as an undergraduate, French and German predominantly and also British literature. So when I was in graduate school, I taught German, and my first job at the University of Wisconsin was teaching French and German. At that time, I wasn't doing much with technology, playing reel-to-reel tapes maybe for my students. When I moved from Wisconsin to Virginia, now it's Virginia Commonwealth University–I’ve been there since 1979, so quite a long time–we didn't even have a language lab, and so at one of the first departmental meetings, I said to the chair, “Maybe it would be nice to have a lab and students could listen to authentic voices on tape.” And the chair at the time said, “Bob, you must not know about language pedagogy, because nobody uses language labs anymore.” But despite that I wrote a grant to get a cassette-based language lab, which we got, and I was the lab director. And then I became chair of the Department of Foreign Languages, and we had a new dean coming in. At the first meeting with the dean, the dean said, “What are you doing with technology in your department?” And I said, “Well, we have a really nice cassette language lab.” And he said, “No, no, no. I mean, what are you doing with computers?” This was in 1991, I believe. I said, “We’re doing quite a bit. Each professor is using word processing. We have WordPerfect 5.1, you know. We’re up to date, and so our secretary doesn't have to type out the manuscripts from handwritten copies.” He said, “No, that’s not what I’m talking about at all. 
I’m talking about, how are you using it in teaching?” I said, “Well, we type up handouts and then we mimeograph those.” He said, “No, I want you to look into what you can really do with computers and language learning.” It turns out he came from Auburn University, and George Mitrevski there in Russian had developed a Russian HyperTutor series, HyperCard stacks, for learning Russian that were quite impressive, and he was familiar with that coming from Auburn University. And so he really encouraged me to look into that. And just at that time, we had somebody from our Technology Center at VCU who was offering training sessions using Macintosh computers–I'd never seen a Macintosh computer, much less used one–but I was curious. So I went over and she showed me QuickTime video. I thought, “Wow!” Even though it was tiny–it was the size of a postage stamp and pretty jerky in terms of playback–you could already envision the potential of that kind of multimedia for language learning. It immediately seemed to me that this is something we need to look into. So I wrote a grant to get, in fact, Macintosh computers and was successful, and wrote another grant the second year and got more computers. And it turned out nobody in the department was quite interested in doing this add-on to their already heavy workload in terms of teaching. So it fell to me to investigate what we could do and, as I said, I was familiar with George Mitrevski’s work and actually corresponded with him and got to know him and started writing HyperCard programs of my own and doing workshops for my colleagues. And so that was kind of the beginning. You know, I was very interested at that time in tutorial CALL, in using computers to have students work with the vocabulary and the grammar outside of class. 
And you could do even back then nice things in HyperCard with audio and video that were really motivating to students, and there was a potential there of having authentic voices in the target language other than the instructor. To me, that's always been really important for language learning.

H: What an interesting progression, and isn’t it amazing how being in the right place at the right time and having the right sort of support–or in your case, a dean coming over and kind of pointing the way and saying, “Go and investigate there”–it’s fantastic, isn’t it, this serendipity? Do you still teach German and/or French?

RGJ: French, these days, because we have more folks in French. We have fewer people in German. But yeah, I teach mostly French. I teach courses in intercultural communication as well, but mostly French.

H: Right. And so you've been at Virginia for over 40 years. Amazing. And so your collaboration, your involvement with Language Learning and Technology, the journal, how did that come about?

RGJ: That came through Mark Warschauer. I met him at a conference at the University of Hawaii in, it must have been 94-95, and actually contributed a chapter to a volume that he edited early on in the mid-90s on language learning and technology. And at that time, the web was just becoming popular, in 93, 94, 95, and just as I saw with multimedia, with digital video, digital audio, when I saw what you could do with the browser even in that early, early time, in terms of connecting people around the world, I was just amazed, blown away with Mosaic, which was the first browser. And then Netscape came along, and it did even more things. I was interested when streaming audio and video became possible, and interactivity through JavaScript. So I became very interested in using the web. I created a website called Language Interactive, to which I invited folks to contribute what were at that time server-based scripts or browser-based interactivity. George Mitrevski from Auburn contributed a couple things. I had learned Perl, so I was doing what were called CGI scripts, server-based interactivity. And Mark was aware of that site, and on that site, I wrote up tutorials on how to use interactivity, server-based and browser-based, and he thought that what I wrote there seemed accessible to non-technical people. So when the idea for the journal was hatched–and Mark was one of the prime movers there–he contacted me and asked if I wanted to be involved in creating this peer-reviewed open journal. And I thought, wow, what a great idea. And so I was part of those early discussions. How are we going to do this? Are we going to do HTML? Are we going to do PDF? Are we going to have page numbers? Are we going to revise the articles once they’re published? You know, it’s different than a print medium. So there were all kinds of things. Were we going to do discussion forums linked to articles? So there were a lot of interesting early discussions on that. 
So at one point, Mark asked me, “Would you be interested in writing a column?” [That was] between 96 and 97, and you know, the first few columns were fairly short and fairly technical, a lot about, you know, using tools and what tools were out there and what you could do with those tools. As time has gone by, the column has evolved away from the hands-on development of the early period and more towards how second language acquisition theory meshes with technology developments and what seems to be on the horizon in terms of ongoing development of tools and opportunities there for language learning.

H: You mentioned AI, and of course the special issue, and I just had a conversation with Jim Ranalli and Volker Hegelheimer, the editors of that special issue on automated writing evaluation. And in your column, you take a slightly broader view. Not to put you on the spot here, because it's such a difficult question, but when we look at the role of AI, and especially its role in the coming years–looking at the medium term, so not 50 years into the future and not five months, but sort of five years from now–when you ask people in our field, they sort of fall into three camps: there are those who kind of don't know much about it and don't want to know much about it; there are those who say, well, you know, it's like any other technology that we've ever had–we've been told for so long that artificial intelligence will change everything, but we've been told that about other technologies and it hasn't really happened; and then there are those who say, well, actually there's something that is in some ways fundamentally different about this collection of technologies all together labeled as artificial intelligence, and it's developing in an exponential way, and so we can't quite yet foresee the ramifications of it, and it could be much more extensive and intensive than many people expect. Are you in any of those three camps? What is your thought on the development of AI?

RGJ: Well, I’m in the camp that thinks we can’t afford to ignore it and that we have to look at what our students are doing with AI-based tools. For example, machine translation is now very sophisticated, based on deep learning algorithms that take advantage of the huge collection of data that Google and other companies have collected and analyze it for patterns. And you know, performance there has improved as the AI systems have become more sophisticated. That’s true not just for machine translation; that’s true in other areas as well. And I think, you know, I have colleagues–and I imagine there are similar situations at other institutions–I have language colleagues who say to their students, don't use Google. You're not allowed to use Google Translate. Well, for one thing, how are they going to prevent students from doing that at home? And for the second, you know, you don’t want language learning to just be an artificial classroom experience. You want the students to really get interested, get motivated to learn a language as a lifelong skill. That's more of a challenge in the United States than it is maybe in some other countries, in European countries, where the need to learn a second language is evident to anybody in everyday life. It’s not the case for a lot of US students. And so, you know, I think that what you need to do as a language teacher is encourage students to use any and all possible tools that will help them learn language and improve their language proficiency. And certainly there are ways to use Google Translate that are not effective pedagogically, if students just copy and paste assignments. 
But in that column, the most recent column that I wrote about intelligent writing systems, I looked at a number of articles that have been written in the last few years by language teachers who have been very creative in how they have integrated machine translation, mostly Google Translate, teaching in a way that expands the students’ horizon in terms of language learning and makes them aware of the fact that this is not just an artificial exercise, it’s not just an academic requirement to learn a language, but this is a real language skill, and that in real life there are tools that will help me improve my language ability and maintain my language proficiency beyond the classroom. And I think there are other areas where AI can play a similar role, for example, in chatbots. I gave a talk at EuroCALL last month on chatbots, and chatbots are becoming increasingly sophisticated, and I think are familiar in a very fundamental way through intelligent personal assistants like Siri and Google Assistant and Alexa, and those are getting very sophisticated backends in terms of AI systems that allow them to do so much more than used to be the case in terms of interfacing with the user. Now, these systems are not designed to be conversation partners for language learners. They’re designed to be transactional agents, where the user asks a question and, in as brief a way as possible, the personal assistant gives a response to that inquiry. But there are systems being experimented with in language learning that take those basic systems, for example Alexa, and use add-ons (called Skills in Alexa) that put in more specific language learning capabilities and expand what that particular personal assistant can do. And one of the things that I'm very interested in increasingly is student motivation and student autonomy. 
And I think, you know, one of the things is, if chatbots can do what intelligent CALL has done for a long time, namely build a personal profile of the learner that keeps track of conversations… And Siri and Alexa don't do that by default. There’s very little information that they save, because there are privacy issues, of course. But if you have a profile that is built up so that the personal assistant remembers previous conversations, learns about what the learner is interested in–what hobbies, what sports teams, what movies and so forth–then you can imagine, you know, conversations taking place with the learner outside of the academic environment and maybe beyond the school experience. And I think this is part of where we're going with informal language learning, which has exploded. If you look at research in the field of CALL, there's so much research now in the digital wilds, as it’s called–informal language learning, particularly for English but for other languages as well. And I think AI-based systems like chatbots are going to figure into that, because you have systems like LaMDA from Google or GPT-3 from OpenAI. This summer, one of the Google engineers claimed, based on conversations that he had with the LaMDA chatbot, that LaMDA is sentient. Which is ridiculous, but consider what that shows us in terms of language learning potential: if you have a human being who thinks that conversation with that AI system was so real that he had the impression he was talking to a real human being, or at least a thinking human being, a sentient being, what does that mean for language learning? It doesn't matter if it’s sentient or not. 
But what it does mean is that there’s a lot [of] language taking place, a lot of give and take in that conversation and I think that's one of the things that we need to look at in AI is what are the developing opportunities for conversation partners, for these artificial partners to go beyond what the capabilities are today which are fairly limited.

H: And it's worth pointing out that the poor Google engineer of course got fired over his assertions. So be careful what you [say] about AI in language education.

RGJ: Another thing about AI systems that I’ll just mention briefly: somebody else, who was fired from Google two years ago, got into trouble because she was talking about issues of equity and fairness in AI systems. And that's something we need to be concerned with–the fact that, for example, the data collected by these huge systems like LaMDA or GPT-3 include a lot of hateful speech that’s on the internet. The internet’s full of junk and trash. Lots of good stuff, too, but you know, these intelligent systems aren't intelligent enough to be able to filter that out; you can have a blacklist, but it just doesn't work well. So you have the question of fairness and equity in terms of representation of underrepresented populations–dialect speakers, for one, or languages beyond English that don't get much coverage by AI systems. So there are a lot of concerns in terms of privacy and fairness and equity, I think, in AI development today.

H: We have a long and challenging and very interesting road ahead of us when it comes to all that. Plenty of columns for you to write in the future as a result. So that’s a nice segue to circle back to the journal. Any concluding thoughts about maybe publishing in general, but specifically journals like Language Learning and Technology and your involvement? Where do you see us going?

RGJ: Well, you know, one of the reasons that my columns get cited is not necessarily because they’re good. It's because they're open access. And I think we're seeing more and more interest in open access in all academic fields, and in terms of, for example, language and technology journals, we're seeing increasing interest in that direction. And I think, you know, one of the things that I mentioned earlier that I think is a good development for LLT is the special issues. One of the problems LLT has had is that there’s so much being written today, and so much good stuff being written, that it’s been difficult for us to publish the number of articles that we've accepted. And so that's why we've gone to this ongoing system: now LLT publishes articles as they’re made ready, as they’re edited and ready for publication. And I think that's important, because we have a fast-moving field, and so we want articles that deal with new and recent technologies to appear in as timely a way as possible. And I think that that's something that's really important for journals.

H: And can we look forward to many more columns from you in the years to come?

RGJ: Well, so far I continue to enjoy working on the columns, and it’s something that I look forward to. We’ll see.

H: And we look forward to reading them every time that they come out. Well, it's been wonderful talking to you. Thank you so much.

RBJ: Great.

Oct 31 2022 (12 Minutes)
A conversation with Liudmila Klimanova: The 2021 recipient of the Dorothy Chun Award for Best Paper in LLT

Liudmila Klimanova talks about her LLT article, “The Evolution of Identity Research in CALL: From Scripted Chatrooms to Engaged Construction of the Digital Self,” which was the 2021 recipient of the Dorothy Chun Award for Best Paper.

Reinders, H. (Host). (2022, October 31). A conversation with Liudmila Klimanova: The 2021 recipient of the Dorothy Chun Award for Best Paper in LLT (No. 3) [Audio podcast episode]. National Foreign Language Resource Center. https://hdl.handle.net/10125/104268

A conversation with Liudmila Klimanova: The 2021 recipient of the Dorothy Chun Award for Best Paper in LLT (Transcript)

Host [H]: Hello, you lovely language and terrific technology people, and welcome to the next episode of the Language Learning and Technology podcast. I'm your host, Hayo Reinders, and today, we have another very special guest, Liudmila Klimanova. Welcome!

Liudmila Klimanova [LK]: Hello and welcome, everyone who is watching us, and thank you, Hayo, for inviting me.

H: Of course, my pleasure. Lovely to talk to you. Of course, you are very famous as an award-winning author for Language Learning and Technology. So this is an opportunity for everyone to kind of see the face that goes with the writing and just hear a little bit more about who you are, and also to hear a little bit more about your work and your article. So tell us a little bit, you are at the University of Arizona, is that right?

LK: That’s correct. So my name is Liudmila Klimanova and I’m a professor at the University of Arizona. And I specialize in second language acquisition with a particular focus on computer-assisted language learning (CALL). So this is my area of research, and I have been doing CALL for a long time, I would say maybe fifteen years, if not more than that. Within CALL, I started as an SCMC, synchronous computer-mediated communication, specialist, and later I developed an interest in identity research, particularly in digital spaces, in the area of CALL. And I have been working with identity research for quite a while now.

H: Right. Nice. And of course, your identity research is what won you the Dorothy Chun Award. And for those readers of the journal who are not aware, this is an award that was… I think you won the first one. It was initiated last year, is that correct?

LK: I believe so. This is the 2021 Dorothy Chun Best Journal Article in Language Learning and Technology Award. And I’m very honored, because last year in particular was very rich in high-quality articles in the journal. And my article on identity research was selected and awarded, and this is a special recognition for me.

H: Yeah, indeed, and it's a great article. I had read it before and I've reread it just in preparation for our conversation. And for those of our listeners and viewers who haven't read it, of course you should go and do that right away. But essentially–correct me if I'm wrong–you’re giving an overview of the development of identity research in computer-assisted language learning and also giving the readers some pointers as to where the field might be heading. So I'm particularly interested in that final piece, because in the article you talk about the development from the 2000s onwards. You have a communicative turn, then a social turn, and then later on the multilingual and critical turn. And I think you position that as leading up to about 2020. So where are we now and where are we going next? What's the next turn in the road?

LK: Well, let me start just by saying that what I'm describing in the article is my path in identity research. I started off many years ago just looking at identity from the very basic approach of labeling and classifying students into groups based on their first language, their level of proficiency, their background. And over the years, my perspective on identity in digital spaces has evolved tremendously. Of course my insight has been influenced by new theories and new approaches in second language acquisition and teaching that have also been developing quite intensively over the past two decades. But more recently, I think we have been paying more attention to multilingualism and big questions of power, equity, and equality. This has been a conversation for a long time in language acquisition and language planning research, but as a field, computer-assisted language learning, I think, has been trying to look more closely at how we teach instead of paying attention to what the environment provides for our learners, how our students interact in digital spaces outside the classroom, and how that experience influences the way they learn and use languages. And I think usage is now more important than before. So, going back now to your question, identity research is blooming, so to speak: we are getting more and more interesting studies, more theoretical frameworks, and more things become obvious as a necessary focus of research because of the way technology has transformed us as human beings. We use technology in a fundamentally different way these days. We use technology to communicate professionally and personally. Because of technology, we are able to engage in translingual communication, where sometimes we don't know the language but technology provides the necessary support for communication to take place.
And that wealth and richness of noneducational online communication has informed the ways we approach digital communication in instructional settings as well. So we need to pay attention to what our students do outside the classroom environment and what types of technologies they use, but also how technology frames our communication, and how technology can facilitate or liberate language learners as well as restrict some of their language use and their opportunities to express themselves freely in digital spaces. So, in broad strokes, this is what we’re thinking more about when we talk about identity research in CALL. But of course, the multilingual approaches in applied linguistics have influenced tremendously how we approach communication in digital spaces: the way languages are now available in digital spaces, how we navigate through instant translations, how some language decisions are made for us by the platforms and by geolocation devices, where sometimes languages are proposed to us as opposed to us choosing the languages that we want to use. All these factors influence the way we position ourselves in digital communication. So maybe in a nutshell, broadly speaking, this is where we are right now and where research is going when it comes to identity, self-presentation, and self-performance in digital spaces, in instructional as well as more noneducational settings.

H: So what’s next for you? What are you working on at the moment? And where is your research heading?

LK: Well, in line with what I just said, I’m working right now on the new notion of group identity.
And it is influenced by the fact that more and more these days we have been interacting in groups. And these groups are not accidental groups but groups that meet in the same configuration over time. And it’s interesting to see how language use transforms over time within each group. I’m mostly working with different types of Zoom interactions where we have multilingual speakers interacting over a particular constructional task or a project. But it’s interesting to think about identity as not necessarily belonging to one individual in digital space, but as something dynamic, transforming into a group belonging. And this is something that I find particularly fascinating: to see how language within a group changes and evolves as participants find some new way of communicating ideas to the group, and how those ways get picked up by other group participants. And they develop one unified code that transforms into their group identity. This is the new research I’m working on right now, and I'm very excited about some of the findings.

H: Sounds very interesting. See if you can win another award with that.

LK: Thank you.

H: For the benefit of any graduate students or PhD students watching or listening, are there any words of wisdom that you might like to pass on to them?

LK: For graduate students, of course I want to invite students who are still looking for topics for their dissertation projects to consider identity as a very rich, interesting topic that needs more research attention. We also need to pay more attention to the way technology has transformed the ways we communicate. Before, I think, when the field of CALL, computer-assisted language learning, was still in its beginning stages, we were borrowing a lot of theory from applied linguistics. But given the amount of digital communication that takes place today, I feel that as a field we are now ready to supply theoretical assumptions and provide applied linguists with more data that they cannot see outside the digital space. In that regard, doing more research in digital communication and computer-assisted language learning is what we need to do. We need more graduate students to undertake such research and develop new theories so we can inform applied linguistics of all the new ways we communicate.

H: Couldn't agree with you more. Dear listeners, dear viewers, you heard it here directly from an award-winning author. This is an area that requires more attention, more people to contribute to it. So what are you waiting for? Liudmila, it’s been lovely talking to you. Thank you so much for your time.

LK: Thank you.

Sep 7 2022 (22 Minutes)
A conversation with Jim Ranalli and Volker Hegelheimer

Jim Ranalli and Volker Hegelheimer, long-time contributors and guest editors of Language Learning & Technology (LLT), talk about the June 2022 Special Issue on Automated Writing Evaluation.

Reinders, H. (Host). (2022, September 6). A conversation with Jim Ranalli and Volker Hegelheimer (No. 2) [Audio podcast episode]. National Foreign Language Resource Center. https://hdl.handle.net/10125/102346

A conversation with Jim Ranalli and Volker Hegelheimer (Transcript)

Host [H]: Hello, everybody, and welcome, you lovely language and terrific technology people. My name is Hayo, and welcome to the second episode of the Language Learning and Technology podcast. Today, we have two very special guests, Jim Ranalli and Volker Hegelheimer.
Welcome to you both.

Jim Ranalli [JR]: Thank you. It’s great to be here.

H: Right. I understand that you’re both at Iowa State University. Jim, can you just very briefly tell us about your background and your research interests?

JR: Sure, I’m a newly-minted associate professor and I work in the TESOL Applied Linguistics MA program and the Applied Linguistics and Technology PhD program. I also teach undergraduate courses in the Linguistics major. In terms of my research interests, I'm really interested in academic writing and technology and the theoretical lens of self-regulation. My work takes place mostly at the intersection of these three things.

H: Yeah, that interest in technology, I’d figured that one out, given that you’ve just guest edited the special issue for Language Learning and Technology. Volker, I think you are at the same institution, is that right?

Volker Hegelheimer [VH]: That is correct. Yes, I'm also at Iowa State University. I’ve been here since 1998 so this is my one and only institution that I've worked for. And I currently also serve as the department chair so my research interests have taken a bit of a backseat to administering a large department. But I am interested in the use of technology in the classroom and trying to see how it can be effectively employed.

H: Wonderful, and great to be able to see you both. This is one of the nice things about doing the podcast for the journal, especially for our listeners and those who are watching this. You know, we see all these famous names like yours and now we have a face to go along with that so that's awesome. The reason why we have you on the podcast today is especially because you have a special issue of Language Learning and Technology coming out, or by the time this goes live, maybe this just came out. Maybe one of you, tell us very briefly, what is the subject of the special issue?

JR: Do you want to take that one, Volker?

VH: Sure, and this dates back to over two years ago when I got an email from the editors of LLT asking me if I was interested in hosting or being the guest editor for a special issue on automated writing evaluation (AWE). They had seen earlier work that I had done in 2016 and there hadn’t been a special issue on that topic since then. And I decided to ask Jim to help with the co-editing and together we took this on, and it's been two years in the making and everything pretty much kept on track.

H: Yeah, these things take a long time, don't they? Just for the benefit of some of our listeners who may not be familiar with the terminology, just very briefly, what is automated writing evaluation?

JR: So I think it refers to basically technologies that can analyze machine-readable text and provide some kind of feedback on that text to the writers of that text.

H: And so from your own experience and the literature, if you had to summarize briefly, what are some of the main benefits for both learners and teachers of this technology?

VH: I can take a stab at that, just to get us started. I think the benefit of this technology really is to free up mental and intellectual resources. You know, when you have a technology that can provide feedback that doesn't have to rely on a human, as a teacher or student, if there is anything you can automate and you can do it well, that frees up resources that can then be directed at things that technology doesn't do such a good job at providing feedback for.

H: Yeah, you make a good point. It's a benefit for both the learner and the teacher, right? Because the teacher may have to spend less time maybe offering corrections or what have you and therefore can focus on other things. And the same goes for the learner, who perhaps doesn't have to process as much information so that’s very, very helpful. Now you mentioned earlier, Volker, that this is kind of a follow-up to an earlier special issue that I think… did both of you edit the 2016 issue or one of you? I think I saw one of your names there.

VH: Yeah, this is the one that I did with Ahmet Dursun and Zhi Li. So the three of us edited that. It was a CALICO Journal special issue.

H: Oh, right. Okay, so 2016, 2022. I mean, 6 years, it's not nothing but it's also not a tremendously long period of time. So why a new special issue on this topic? What have been some of the main developments, the main changes?

JR: I mean, technology obviously moves fast, and one of the changes that we've seen is in the focuses of the AWE systems that are coming out now. So in addition to systems that analyze text for things like grammatical correctness, you also have systems that can analyze adherence to genre conventions, systems that can analyze students’ enactment of writing strategies, and more content-oriented focuses like students’ use of evidence and argumentation, which is the focus of one of the articles that we've got in the special issue. So the focus of AWE systems has broadened out a little bit.

H: Because it used to be mostly grammar correction, spelling correction, and you're saying now it's really expanded to be a lot more useful in a lot more aspects of writing, is that correct?

JR: That's right.

VH: In addition to that, we've all seen the arrival of multiple additional players over the last five, six, seven years. You know, where in the past, there were a few prime AWE systems that were implemented and used. We now have systems that are more ubiquitous in terms of accessibility. So we have systems that are no longer only accessible after you’ve paid some money, so we have some free versions. So I think there's been a breadth of additional resources that are making a difference here.

H: Yeah, that's a very good point that has, of course, potentially some major pedagogical consequences, which I'd like to come back to a little bit later with a couple of questions for you. One thing that I read–I had a little sneak peek prior to publication of your introduction to the special issue–is that it was hard to get a large number of submissions. I think you had 30 or 35 or so, eventually invited only 13 or 14, and ended up with only four. I had a similar experience last year with a special issue on artificial intelligence and big data, I believe it was. The other thing you pointed out was that the four papers you ended up with all came from the Chinese context, so I wanted to ask you about that. Firstly, why do you think there was relatively little uptake initially, given that technology is so important and this topic in particular could be seen as especially important at the moment? And secondly, related to that, why does there seem to be so much activity especially in China?

JR: I can take the first part of the question. So I think frankly part of it has to do with the global pandemic. I mean, my own research productivity has kind of taken a hit, and I assume that's the case for quite a lot of other people. So I guess, in that sense, the timing of this special issue wasn’t ideal. But I know that there is a lot of work going on out there in this area so maybe just people weren't able to bring it to fruition and share with us this time.

VH: In terms of the second part of the question, why so many studies come out of China. I think that we see a lot of resource investment in China in terms of English learning, in terms of the development of some of these systems. And it is an ideal training ground so to speak for some of these systems in the Chinese context, where you just have access to a lot more learners and you can really do things, you know, so I think that's one benefit here.

H: Yeah, that was sort of what I was getting at with my slightly leading question. I get the impression, and I wanted to get your feedback on this, that in certain parts of the world there seems to be much more understanding of the importance, going into the future, not just of the area of your particular special issue but of natural language processing and artificial intelligence and everything else that comes out of that, more so than in other parts of the world, and as a result of that, perhaps also considerably larger amounts of funding available for primary research. That is just an impression that I had when I read your introduction. Do you feel that maybe that has something to do with it? Are we investing enough, say, in your institution or the context where you work, in these areas?

JR: I don't know. I mean, I could always use more funding.

VH: If the question is whether we are investing enough, I don't have the numbers ready, but my very personal perception is that there's not a whole lot of funding that goes into this here. Plus, the context in which we operate deals with English language learners. And if you look at the past three years or so with the pandemic, we’ve had a significant decrease in English language learners coming into the US, and that actually predates the pandemic a bit in terms of how difficult it was, or has been, for learners to come in, so there's also a much smaller population of learners coming into the US. We’ve seen an uptick recently, at our university certainly and I think nationwide, but that has something to do with this as well. And I think this speaks to Jim’s response to the first part of the question: there's just not that much out there. We have an intensive English program that at some point had 100-200 students, and now it's down to single digits or maybe the teens.

H: Exactly, and that’s the case in many contexts around the world. Some of our listeners, especially those who are not deeply familiar with the topic, may be concerned about some of the implications of the types of technologies that are described in the papers in the special issue and in the field at large; it’s something I hear a lot from teachers I speak with. The implications of, for example, machine translation, and students using it not just to do spelling checks and grammar checks but potentially even to generate texts that in some cases are almost indistinguishable from authored texts. What would you say to teachers who have question marks around the feasibility, maybe even whether it’s ethical, to use these types of technologies in language education?

JR: That's a tough one. And again, this goes back to the impetus for this special issue. I mean, automated writing evaluation is now a fixture, I think, on the second language writing landscape. So it's not something that we can ignore or pretend doesn't exist, even if we were inclined to do so. But Robert Godwin-Jones, who writes the Emerging Technologies column for [LLT] and has a special column on AWE and intelligent writing assistance technologies in our special issue, talks about this, and about machine translation and automatic text generation, and basically he says we have to explore ways to integrate these new technologies into the classroom, and this integration has to be guided by situated practice and established goals and desired outcomes. So basically I think we have to approach these things with an open mind and look for opportunities to harness their pedagogical value rather than holding onto maybe outdated notions of what writing is supposed to be and what role technology is supposed to play.

H: Volker, did you want to add anything there?

VH: I could add a few sentences here. I think Jim is right on target. What we have also seen is a bit of a shift in AWE tools that had been used for second language learners: these tools are now also being integrated for first language learners, for native speakers of English, for example. Jim is running a large project at Iowa State University that integrates one of those tools not just in second language classrooms but for all students. So I think we have to broaden the base of how we view these tools. It's no longer just a tool for second language learning but a tool for language learning in general, for becoming more proficient in your first language. So I think that’s an important aspect that we're trying to explore here.

H: Yeah, I think you both make a very good point, and that's something that is perhaps easy to forget, especially when you're heavily involved in the pedagogical side of things: the nature of writing itself, as you put it, Jim, has changed. Just like 20 years ago, people were complaining about the language of SMS messages and texting on phones and how it wasn't proper language and so on, now perhaps we need to reconsider what writing is and then also ask how that changes the way we support the necessary processes pedagogically, and maybe that involves harnessing the technology for the better. And that leads me to my next question, which involves the teachers. What sort of support do teachers need, in your view, to be able to work meaningfully with these new technologies? I know that goes a little bit beyond the scope of the special issue.

JR: Teachers obviously have a very important role, because there's research that shows that teachers’ attitudes and practices regarding AWE influence students’ attitudes and practices. So first of all we need to address the kinds of concerns that we were discussing in response to the earlier question, you know, more traditional views of the role of technology and the idea that AWE makes students lazy or undermines learning. And then we also need to get teachers to see the possibilities, to understand how use of AWE may contribute to second language development. I think the empirical evidence is still out on whether, you know, that can actually be a factor in language learning. But I think a prevailing attitude is that AWE, to the extent it focuses on grammatical and usage concerns, is just a proofreading tool, something that can only contribute to the polishing of the current text. Whereas in applied linguistics, we've been waiting a long time for these sorts of technologies to come along, and one of the reasons why is the prospect that they can actually help students improve their language proficiency: the transfer of learning from the current writing task to subsequent writing tasks. So again, getting teachers to understand that there's a wider perspective here and that they can be partners in exploring the potential of this new and ever-expanding area of writing systems technology.

[Unfortunately, this is where Volker Hegelheimer’s internet connection gave up…]

H: Nice one. Jim, I have one final question for you. You’ve got four articles plus your own introduction plus Robert Godwin-Jones’ column in the special issue. The four articles, I read them with gusto; they were very interesting, and they led me to this concluding question. Based on those contributions and your own work, where do you see the field going next, especially in terms of research in, say, the next few years?

JR: We do have kind of a narrow cross-section of research in this special issue, for the reasons that we were discussing earlier, but as I said, I think there is a lot of work going on out there, and I'm excited about seeing this work eventually published. A lot of the research has focused on tertiary settings, the use of AWE in college and university classrooms. I'm really interested to see how it's being used at the secondary level and maybe in professional contexts as well, and also systems that analyze languages other than English, which I think is an important area.

H: Yeah, that’s a big gap, right? And I like what you say about the secondary level because if we are hoping that these technologies can be used more developmentally rather than just as a corrective tool then it's all the more important that learners learn how to work with these technologies in a productive, meaningful way, and the earlier you can start with that, the better, right?

JR: That’s right.

H: Before we wrap up, is there anything that either of you want to say about the special issue or about any of the particular papers that were included?

JR: Just that we're very excited. I don't mean to take anything away from the quality of the work of our four contributing sets of authors. I think it's a nice cross-section of work, and we're very grateful to them for their contributions and for working with us through the peer-review process to bring this special issue to the fore. So I hope everybody enjoys it.

H: I hope it gets as wide an audience as it deserves and hopefully this podcast will help with that. Jim and Volker, thank you so much for joining us.

JR: Thank you, Hayo.

Jul 14 2022 (21 Minutes)
About the Language Learning & Technology Journal

Language Learning & Technology (LLT) journal Editors-in-Chief, Dorothy Chun and Trude Heift, discuss how the journal works, its open-access philosophy, and tips for submitting manuscripts for review.

Links mentioned during the interview:
The Affordances and Challenges of Open-access Journals: The Case of an Applied Linguistics Journal. [Chapter 8]

Research Guidelines

Reinders, H. (Host). (2022, July 14). About the Language Learning & Technology Journal (No. 1) [Audio podcast episode]. National Foreign Language Resource Center. https://hdl.handle.net/10125/102124

About the Language Learning & Technology Journal (Transcript)

Host [H]: Welcome, all you lovely language and terrific technology people, to a very, very special first episode of Language Learning and Technology, our first ever podcast with two very, very special guests, Dorothy Chun and Trude Heift. Dorothy and Trude, welcome.

Dorothy Chun [DC]: Hello.

Trude Heift [TH]: Hello.

H: How are you both doing? You're good?

TH: Good. Thank you.

DC: Doing very well. Thanks.

H: Excellent, excellent. So today we're going to learn a little bit about this wonderful journal that you both edit. We're going to learn a little bit about the history, its aspirations. But before we do all of that, Dorothy, I'm gonna start with you. Just very briefly, where did your interest in technology and language education come from?

DC: Well, you know, I've actually always liked gadgets. I loved the IBM Selectric typewriter when it first came out. My PhD is actually in historical Germanic linguistics.

H: Wow.

DC: But I knew early on that I really wanted to do more applied linguistics. So my dissertation was about intonation in German, Chinese and English. And I did lots of recordings of learners of Chinese and learners of German. And I had a trusty little cassette tape recorder, I would have to play, stop, rewind, play, stop, rewind, trying to listen to the tones and the intonation in the pitches. And I realized, I need better technology for this. So that was my first need that got me interested in using technology for language learning. And then I had to write this dissertation. And I know I'm dating myself, but I wrote it on an Apple II computer. And I needed 40 floppy disks for my dissertation. And I again thought, there has got to be a better way than this. So from the very beginning, I've always wanted to learn about the newest technologies to make my life easier, to make my career in applied linguistics easier.

H: Wow, that's a lovely story. My supervisor in Holland, when I was a master's student there, told me that she used a computer back in the 70s. I suppose that you still had to use kind of paper cards that you inserted into them. I never actually used one of them. She was telling me that she was riding home on her bicycle–I mean, this is Holland, after all–and that a big gust of wind came and threw all of her computer cards and everything out over the road and she lost her dissertation there. So yeah, good reason. All right. Trude, how about you?

TH: I actually ended up in computational linguistics for my master's degree already and then found a wonderful supervisor with whom I did my PhD focusing on computational linguistics and applied linguistics by designing programs for language learning.

H: Wow.

TH: And when you talk about years ago, so my supervisor, he graduated with his PhD in the early 80s. Naturally, computational linguistics involves parsing, and he told me that in the 80s, they would submit a sentence in the evening and then could pick up the parse the next morning. That tells you how computer speed and volume have increased over the decades.

H: Yes, indeed. Yeah. And it's had a massive impact on so many levels, hasn't it? Right. Okay, Dorothy. I'm sure our readers are interested to hear a little bit about the background of this journal. Where did it come from?

DC: Well, Language Learning and Technology was founded in 1997 by Mark Warschauer. He was actually a graduate student at the time, but he had the foresight and the courage to begin this completely open access, fully online journal. And it was the first fully online journal for CALL. It has remained completely open access and fully online. It has been supported by three different National Language Resource Centers. One that has been the ongoing center is the one at the University of Hawaii. And they have been a sponsor for the 25+ years that LLT has been in existence. The other two centers are one at MSU, Michigan State, and most recently, University of Texas at Austin. The bulk of the funding from these Centers is really for a graduate student who is the editorial assistant or the managing editor, whatever you want to call that person, and for a webmaster. One of the, I think, key features of our journal is that authors always retain copyright of their work. So unlike the commercial journals where the journal owns the copyright, our authors own their own copyright. Since 2020, though, we've switched to a Creative Commons license for all the work published in LLT. If you want more of the gory details, Trude and I wrote a chapter for Carl Blyth and Joshua Thoms' open access book. The title of that book is Open Education and Second Language Learning and Teaching. We will provide a link for that book in which our chapter appears on the LLT website. And if there's a way to link it to this podcast, we will also do that.

H: That's great. Great. You've mentioned open access, and the journal being the first in our field to provide open access. You've also mentioned the Creative Commons license. Trude, can you tell us a little bit more about this, because not all listeners may know exactly what these terms entail.

TH: When you look at open access, a journal like that has huge advantages over a traditional subscription journal. One of them is, for instance, broad dissemination, meaning that the works published are accessible not only in developed countries but also in developing countries, so you get a huge distribution without requiring huge subscription fees. For the journal itself, there are advantages such as being able to track readership, quantitatively and geographically. You can actually see where your readers come from. Naturally, like with any online environment, you have the ease of hyperlinks: content can be linked within an article. You have unlimited virtual space. And LLT has lately implemented a rollout model, meaning that unlike a print journal, where you have to wait until the next issue comes out, LLT now publishes articles as they are processed, without waiting two years for an article to come out. And that naturally is a huge advantage for authors. So we just started that in January and are basically still working on a backlog. But we are catching up. And these articles are now appearing as they get processed.

DC: I just wanted to add one point about open access. There are journals today that will say, okay, these articles are open access, but they require APCs, or article processing charges. And that is very different from our journal, which has no author charges. So it's free to the authors, and it's free to the readers. And that's different from some of the models of publishers that will charge these APCs, allow the readers to read it for free, but the authors are actually paying a fee.

H: Exactly, which potentially excludes a large number of people from publishing their work in widely read journals like LLT. So that's a really important point about this journal. Dorothy, there are different types of submissions that authors can make to the journal, different types of articles that get published. Can you just briefly talk us through what they are?

DC: Yes, well, the bulk of the items are, of course, the research articles, and they are first processed by our managing editor, who at the moment is Skyler Smela. These submissions are then vetted by the two editors-in-chief, namely Trude and myself. And then if they pass the initial review and we think they're worthy of an external review, we assign them to one of the eight associate editors, and our associate editors at present are Philip Hubbard, Meei-Ling Liaw, Lara Lomicka Anderson, yourself–Hayo Reinders, Jonathon Reinhardt, Jim Ranalli, Shannon Sauro, and Nina Vyatkina. So in addition to these regular articles, we also publish special issues on cutting-edge topics. And these special issues are guest-edited by experts on these particular topics. They are also fully vetted research articles. In addition to the research articles, we have book and multimedia reviews, currently edited by Ruslan Suvorov. We also have an emerging technologies column, and this is edited by Bob Godwin-Jones. His articles historically have been the most popular and the most downloaded of all of the pieces in LLT, and they deal with the cutting-edge technologies. And he also reports on emerging research on these technologies. Obviously, they're not fully expanded research articles, but he gives a flavor of what is to come and the important things to be thinking about. We also have forums. There's a forum on language teaching and technology. There's also a newish forum on language teacher education and technology. The former is edited by Greg Kessler, and the one on teacher education is edited by Mimi Li. These are shorter, more teacher-focused articles. And so those are the main types of pieces that we publish in LLT.

H: Brilliant. Okay, that's very helpful. So I guess the next question is the one that some of our listeners have been desperately waiting for. And we're going to ask you, Trude, how do I get my work published in this journal? What is the secret advice you can give me as editors?

TH: Well, there are a few tips I can give potential authors. First of all, I think what applies to every journal is you've got to select an appropriate journal, meaning we sometimes get submissions that do not include technology. They naturally get rejected because our journal is all about language learning and technology. With regards to writing the actual article, draft an introduction by focusing on: What is the problem? What is the question I want to answer? Then sometimes we get articles with very lengthy literature reviews. Well, provide a focused literature review that basically focuses on: What do we already know? What is your contribution? And then list or define your succinct research questions. What does this article actually want to answer? And they should be really precise and, at the same time, comprehensive. Then comes the methodology. Describe it in detail. Who are the participants? How were the data collected? And then describe the methods. Provide details of the materials, instruments and interventions, and especially how learning was measured. Because in the end, we are interested in getting a contribution that empirically measures how these learning outcomes were achieved. Then you report on the results. Provide the actual data, what was in the pretest, posttests and so on–you may include it as an appendix–and then you report on the results. And naturally, very important, you have to contextualize those results with regards to previous findings. How do the results you obtained actually fit in the existing literature? Is there something new we found? Or is it in contradiction to previous results? Focus on that, and then conclude with your contribution. And the very last step is you would write an abstract. We often get submissions that are not well written, and they might be rejected, or at least the authors will be asked to have them proofread by somebody familiar with academic prose.
Because naturally, we don't want to send out an article to reviewers that is not polished, because reviewers really don't appreciate that. And stick to the word limits we provide in our submission guidelines. Again, a reviewer doesn't want to read an article with 15,000 words if the word limit is 8,500. And that's basically what authors should pay attention to. We have a few slides on our website which outline this in some detail and which authors can consult.

H: Very nice. And we'll make sure to include that in the show notes, and people can look that up. I think that's a very helpful set of guidelines. Now, despite the fact that a lot of authors do follow those guidelines, more or less, Dorothy, we have, as a journal, a really quite low acceptance rate. So leaving aside issues with somebody submitting an article that is really outside the scope of the journal and so on, what are some of the reasons for that low acceptance rate?

DC: Well, one of the key requirements of LLT is that articles have to include empirical research that focuses on learning outcomes. And we have this in our guidelines for contributors, but not everyone reads this or takes it to heart, because we receive many articles that present the results of surveys or questionnaires, for example, questionnaires about teacher or learner attitudes toward using a particular technology, did they think it helps them to learn, and that is actually not empirical data that focuses on learning outcomes. And so that is one of the most common reasons that we reject articles and do not even send them for external review. We also do not accept descriptions of, you know, using technology in the classroom if they don't include learning outcomes. So it's not enough to describe this great online course that you just did during the pandemic or whatever it was. If you don't focus on measuring actual learning outcomes, that is another reason that we would reject it initially and not even send it for external review. Trude mentioned that sometimes the literature review is too long and unfocused. Well, sometimes it's too short and doesn't contain references to CALL articles. You'd be surprised how many articles we get where authors have not cited relevant literature from CALL journals. Another reason for rejecting articles, often in the initial phase, is that they don't have a solid research methodology. Some don't have a control group, for example, or they haven't controlled for relevant variables. And this is something that, in this day and age, is a little bit surprising. Also, in this day and age, you know, we're not so much trying to compare using technology with not using technology, we've gone way beyond that, to the point where we are looking for the particular affordances of a given technology for learning. And we're not trying to compare using paper and pencil versus using VR, for example.
That's just a sort of gross example. Another reason that I often will reject a submission outright is that it focuses so heavily on the statistics, pages and pages of this statistic and that statistic, but the authors haven't really defined the instruments and haven't really analyzed what these statistics are telling us about the kind of learning that took place. And then finally, well, two more things. One is that, you know, because of all these statistics, they are not analyzing deeply enough. And so maybe that was redundant, maybe I just said that. Then the final item is that some submissions lack a discussion of the pedagogical implications. They're so heavily focused on the statistics that they fail to help the readers understand: okay, this is what these statistics have told us about learning, and this is how we're going to implement this in our teaching practices, in our classroom instruction, or out-of-classroom instruction for that matter.

H: Yeah, that's a good final point, perhaps to emphasize. That's also something that I know I myself and colleagues on the editorial board look for is the answer to the question: So what? What is this article actually trying to say? What does it mean? What are the implications or interpretations that can be drawn from this? Has it been put into a broader context and so on? And I think that's worth emphasizing to the listeners. I think between the two of you, you've given some really good guidelines. Now just looking towards the future. Trude, what is on the horizon for the journal? Any exciting new developments that we need to know about?

TH: Well, LLT never stops thinking. And this is just due to the wonderful team we have. Hardly a week goes by without getting suggestions from someone, be it an associate editor, our managing editor, or our sponsors. We are really pleased about our new rollout model, which we now have in place. But I think what we are looking at currently is how to improve our presence on social media. How can we put LLT out there even more than it is already at this point? So one of these is, for instance, the wonderful podcast you're doing with us, because this is a way of reaching out beyond just providing a paragraph on what we are doing. The podcasts, and also maybe some interviews with current authors publishing their work in LLT. Or we have our wonderful special issues editors who put together articles on a particular topic. So it would be nice to actually include some videos where we interview our special issues editors and they give us a little bit of insight into where that particular topic currently stands. So these are some ideas. Naturally, we are mostly all volunteers and our time is limited. We try and move forward as we can. But with a big team like LLT's, it is certainly possible to keep innovating, which I think is what we have been doing in the past.

H: Wonderful, yes. And I think this is as good a place as any to invite our listeners and the readers of the journal to also send us their ideas. You know, where would you like to see us be more present? Is it on Facebook or Twitter or Instagram? You know, where do you consume your academic information, and how can we play a role in that? And also let us know what you think about this podcast. Is it helpful? Is it interesting? Should we do something else? We'd love to hear from you. Thank you both so much for making the time to join this podcast, Dorothy and Trude, and most importantly, thank you both so much for your hard work, because, as you've said, and you didn't say it about yourselves, so I will: a lot of the work that you do is volunteer work, and it's really thanks to contributions like the ones that you make that we can have this wonderful community, this resource for teachers and researchers and students around the world. So thank you both very much for joining and for your hard work.

Dorothy Chun & Trude Heift, Editors

Published by the National Foreign Language Resource Center (NFLRC) with additional support by the Center for Language & Technology at the University of Hawai‘i at Mānoa.

Rankings

The latest ISI Journal Citation Reports® Ranking for 2022 showed that LLT had an Impact Factor of 3.80, placing it 14th out of 194 Linguistics journals and 55th out of 269 Education and Educational Research journals.

The latest CiteScore for LLT is 9.0, ranked 10th out of 1001 (99th percentile) in Language and Linguistics, as indexed by Scopus.
